Ready-made bibliography on "White-box attack"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on "White-box attack".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a ".pdf" file and read its abstract online, when those details are available in the metadata.

Journal articles on "White-box attack"

1

Chen, Jinghui, Dongruo Zhou, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3486–94. http://dx.doi.org/10.1609/aaai.v34i04.5753.

Abstract:
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box or black-box attacks. For white-box attacks, optimization-based algorithms such as projected gradient descent (PGD) can achieve relatively high attack success rates within a moderate number of iterations. However, they tend to generate adversarial examples near or on the boundary of the perturbation set, resulting in large distortion. Furthermore, their corresponding black-box attack algorithms suffer from high query complexity, limiting their practical usefulness. In this paper, we focus on developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on a variant of the Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient, with an O(1/√T) convergence rate. Empirical results on the ImageNet and MNIST datasets also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our algorithms attain the best attack performance in both white-box and black-box settings among all baselines, and are more time- and query-efficient than the state of the art.
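The property that makes Frank-Wolfe attractive for L∞-bounded attacks, a closed-form linear maximization step that keeps every iterate feasible without projection, can be sketched as follows. This is a minimal illustration of the idea described in the abstract, not the authors' code; the gradient here is a random stand-in and the step size 2/(t+2) is the classic Frank-Wolfe schedule.

```python
import numpy as np

def frank_wolfe_linf_step(x, x_orig, grad, eps, gamma):
    """One Frank-Wolfe iteration for an L-infinity-bounded attack.

    The linear maximization oracle over {v : ||v - x_orig||_inf <= eps}
    has the closed form v = x_orig + eps * sign(grad), so every iterate
    is a convex combination of feasible points and no projection is needed.
    """
    v = x_orig + eps * np.sign(grad)   # LMO solution for the L_inf ball
    return x + gamma * (v - x)         # step toward the oracle point

# Toy check: iterates never leave the eps-ball around the original input.
x_orig = np.zeros(4)
x = x_orig.copy()
rng = np.random.default_rng(0)
for t in range(50):
    grad = rng.normal(size=4)          # random stand-in for a loss gradient
    x = frank_wolfe_linf_step(x, x_orig, grad, eps=0.1, gamma=2.0 / (t + 2))
assert np.max(np.abs(x - x_orig)) <= 0.1 + 1e-12
```

Because the update is a convex combination of in-ball points, the distortion control the abstract emphasizes falls out of the iteration itself rather than from a final clipping step.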
2

Josse, Sébastien. "White-box attack context cryptovirology." Journal in Computer Virology 5, no. 4 (2008): 321–34. http://dx.doi.org/10.1007/s11416-008-0097-x.

3

Porkodi, V., M. Sivaram, Amin Salih Mohammed, and V. Manikandan. "Survey on White-Box Attacks and Solutions." Asian Journal of Computer Science and Technology 7, no. 3 (2018): 28–32. http://dx.doi.org/10.51983/ajcst-2018.7.3.1904.

Abstract:
This research paper surveys white-box attacks and the results that follow from them. Wearable IoT devices can be captured and then accessed in an unauthorized manner because of their physical nature. In that case they operate in a white-box attack setting: the opponent may have full visibility of the implementation of the built-in cryptosystem and complete control over its execution platform. Defending wearable devices against white-box attacks is therefore a genuine challenge. As a countermeasure against these problems in such contexts, we analyze encryption schemes intended to protect the confidentiality of data against white-box attacks. Lightweight encryption aimed at protecting data confidentiality against white-box attacks is a recent, well-balanced approach. In this paper we review the relevant algorithms as well.
4

Alshekh, Mokhtar, and Köksal Erentürk. "Defense against White Box Adversarial Attacks in Arabic Natural Language Processing (ANLP)." International Journal of Advanced Natural Sciences and Engineering Researches 7, no. 6 (2023): 151–55. http://dx.doi.org/10.59287/ijanser.1149.

Abstract:
Adversarial attacks are among the biggest threats to the accuracy of classifiers in machine learning systems. This type of attack tricks the classification model into making false predictions by supplying noised data whose noise only a human can detect. The risk is high in natural language processing applications because most of the data collected in this setting comes from social networking sites that impose no restrictions on users when writing comments, which allows an attack to be created (intentionally or unintentionally) easily, directly affecting the accuracy of the model. In this paper, an MLP model was used for sentiment analysis of texts taken from tweets; the effect of applying a white-box adversarial attack to this classifier was studied, and a technique was proposed to protect it. We found that the adversarial attack decreases the accuracy of the classifier from 55.17% to 11.11%, and that applying the proposed defense technique raises the accuracy of the classifier to 77.77%; the proposed plan can therefore be adopted against this adversarial attack. Attackers choose their targets strategically and deliberately, depending on the vulnerabilities they have discovered. Organizations and individuals mostly try to protect themselves from one occurrence or type of attack; still, they have to acknowledge that an attacker may easily shift focus to newly uncovered vulnerabilities. Even if someone successfully repels several attacks, risks remain, and the need to face threats will persist for the foreseeable future.
5

Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks." Applied Sciences 10, no. 20 (2020): 7168. http://dx.doi.org/10.3390/app10207168.

Abstract:
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white-box attacks. In black-box environments, adversaries have limited knowledge regarding the target model. This makes it difficult to estimate gradients for crafting adversarial examples, such that powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craft adversarial examples using the substitute models instead of the unknown target model. The substitute models repeat the query process and are trained by observing labels from the target model's responses to queries. However, emulating a target model usually requires numerous queries because new DNNs are trained from the beginning. In this study, we propose a new training method for substitute models to minimize the number of queries. We consider the number of queries as an important factor for practical black-box attacks because real-world systems often restrict queries for security and financial purposes. To decrease the number of queries, the proposed method does not emulate the entire target model and only adjusts the partial classification boundary based on a current attack. Furthermore, it does not use queries in the pre-training phase and creates queries only in the retraining phase. The experimental results indicate that the proposed method is effective in terms of the number of queries and attack success ratio against MNIST, VGGFace2, and ImageNet classifiers in query-limited black-box environments. Further, we demonstrate a black-box attack against a commercial classifier, Google AutoML Vision.
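The query-counted substitute-training loop described above can be sketched roughly as follows. This is a toy illustration with a linear "target" model and a logistic-regression substitute fitted only near the current attack point; all names, sizes, and constants are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)
w_target = np.array([1.5, -2.0])       # hidden decision boundary of the target

queries = 0
def query_target(x):
    """Label oracle standing in for the remote model; every call is counted."""
    global queries
    queries += x.shape[0]
    return (x @ w_target > 0).astype(float)

# Retraining phase only: label one small batch near the current attack point,
# then fit the substitute's local boundary with full-batch logistic regression.
x_attack = np.array([0.5, 0.5])
batch = x_attack + 0.3 * rng.normal(size=(64, 2))
labels = query_target(batch)

w_sub = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(batch @ w_sub)))   # substitute's predictions
    w_sub -= 0.5 * (batch.T @ (p - labels)) / len(batch)

# The substitute agrees with the target near the attack point after 64 queries.
agreement = np.mean((batch @ w_sub > 0) == (batch @ w_target > 0))
assert queries == 64 and agreement > 0.8
```

The point of the sketch is the budget accounting: only the local boundary near `x_attack` is fitted, so the query counter stays at one small batch rather than the thousands a full emulation would need.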
6

Zhou, Jie, Jian Bai, and Meng Shan Jiang. "White-Box Implementation of ECDSA Based on the Cloud Plus Side Mode." Security and Communication Networks 2020 (November 19, 2020): 1–10. http://dx.doi.org/10.1155/2020/8881116.

Abstract:
White-box attack context assumes that the running environments of algorithms are visible and modifiable. Algorithms that can resist the white-box attack context are called white-box cryptography. The elliptic curve digital signature algorithm (ECDSA) is one of the most widely used digital signature algorithms which can provide integrity, authenticity, and nonrepudiation. Since the private key in the classical ECDSA is plaintext, it is easy for attackers to obtain the private key. To increase the security of the private key under the white-box attack context, this article presents an algorithm for the white-box implementation of ECDSA. It uses the lookup table technology and the “cloud plus side” mode to protect the private key. The residue number system (RNS) theory is used to reduce the size of storage. Moreover, the article analyzes the security of the proposed algorithm against an exhaustive search attack, a random number attack, a code lifting attack, and so on. The efficiency of the proposed scheme is compared with that of the classical ECDSA through experiments.
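One ingredient this abstract mentions, the residue number system (RNS) used to reduce storage, rests on Chinese Remainder Theorem reconstruction. A generic sketch of that building block follows; it is unrelated to the paper's actual lookup-table layout, and the moduli and secret below are arbitrary examples.

```python
from math import prod

def to_rns(x, moduli):
    """Split a large integer into small residues, one per pairwise-coprime modulus."""
    return [x % m for m in moduli]

def from_rns(residues, moduli):
    """Chinese Remainder Theorem reconstruction of the original integer."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the inverse mod m
    return x % M

moduli = [251, 241, 239, 233]          # pairwise coprime, product > secret
secret = 123456789
assert from_rns(to_rns(secret, moduli), moduli) == secret
```

Working on several small residues instead of one big integer is what lets a white-box implementation replace wide lookup tables with several narrow ones.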
7

Lin, Ting-Ting, and Xue-Jia Lai. "Efficient Attack to White-Box SMS4 Implementation." Journal of Software 24, no. 8 (2014): 2238–49. http://dx.doi.org/10.3724/sp.j.1001.2013.04356.

8

Zhang, Sicheng, Yun Lin, Zhida Bao, and Jiangzhi Fu. "A Lightweight Modulation Classification Network Resisting White Box Gradient Attacks." Security and Communication Networks 2021 (October 12, 2021): 1–10. http://dx.doi.org/10.1155/2021/8921485.

Abstract:
Improving the attack resistance of the modulation classification model is an important means of improving the security of the physical layer of the Internet of Things (IoT). In this paper, a binary modulation classification defense network (BMCDN) is proposed, which has the advantages of small model scale and strong immunity to white-box gradient attacks. Specifically, an end-to-end modulation signal recognition network that directly recognizes the signal sequence is constructed, and its parameters are quantized to 1 bit to obtain low memory usage and fast calculation speed. The gradient of the quantized parameter is transferred directly to the original parameter, concealing the gradient and effectively defending against white-box gradient attacks. Experimental results show that BMCDN achieves significant immunity to white-box gradient attacks while reducing model scale by a factor of 6.
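The gradient-transfer trick described in this abstract resembles the straight-through estimator commonly used to train 1-bit networks: the forward pass sees only signs, while the backward pass routes the gradient to the full-precision weights. Reading it that way is an assumption on my part, not the paper's code; a minimal sketch under that assumption:

```python
import numpy as np

def binarize_forward(w):
    """Forward pass: 1-bit quantization of the full-precision weights."""
    return np.sign(w)

def ste_backward(grad_out, w, clip=1.0):
    """Straight-through estimator: the gradient w.r.t. the binarized weight
    is passed through to the full-precision weight unchanged, zeroed only
    where |w| exceeds the clip range. An attacker probing the quantized
    forward pass sees a piecewise-constant function with zero gradient."""
    return grad_out * (np.abs(w) <= clip)

w = np.array([0.3, -0.7, 1.5])
g = np.array([1.0, 1.0, 1.0])
assert np.array_equal(binarize_forward(w), [1.0, -1.0, 1.0])
assert np.array_equal(ste_backward(g, w), [1.0, 1.0, 0.0])
```

Because the deployed forward pass is the sign function, gradient-based white-box attacks receive no usable slope, which is the concealment effect the abstract claims.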
9

Gao, Xianfeng, Yu-an Tan, Hongwei Jiang, Quanxin Zhang, and Xiaohui Kuang. "Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation." Applied Sciences 9, no. 11 (2019): 2286. http://dx.doi.org/10.3390/app9112286.

Abstract:
In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, recent studies revealed their vulnerability to small perturbations added to source inputs. The ways of generating these perturbations are called adversarial attacks, which come in two types, black-box and white-box, according to the adversary's access to the target model. To overcome black-box attackers' lack of access to the internals of the target DNN, many researchers have put forward a series of strategies. Previous work includes training a local substitute model for the target black-box model via Jacobian-based augmentation and then using the substitute model to craft adversarial examples with white-box methods. In this work, we improve the dataset augmentation so that the substitute models better fit the decision boundary of the target model. Unlike previous work that performed only non-targeted attacks, we are the first to generate targeted adversarial examples via training substitute models. Moreover, to boost the targeted attacks, we apply the idea of ensemble attacks to the substitute training. Experiments on MNIST and GTSRB, two common image classification datasets, demonstrate the effectiveness and efficiency of our boosted targeted black-box attack; we finally attack the MNIST and GTSRB classifiers with success rates of 97.7% and 92.8%.
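Jacobian-based dataset augmentation, the prior technique this paper builds on, can be sketched as below. The toy linear substitute, the helper name `jacobian_augment`, and the step size `lam` are illustrative assumptions, not the paper's implementation; in the real procedure the augmented points are then sent to the target model for fresh labels.

```python
import numpy as np

def jacobian_augment(X, substitute_grad, lam=0.1):
    """Jacobian-based dataset augmentation: push each sample a small step
    in the direction in which the substitute's output is most sensitive,
    and return the original and shifted samples together."""
    return np.concatenate([X, X + lam * np.sign(substitute_grad(X))])

# Toy substitute gradient: input sensitivity of a linear substitute model.
w_sub = np.array([0.8, -0.4])
grad = lambda X: np.tile(w_sub, (X.shape[0], 1))

X0 = np.zeros((4, 2))
X1 = jacobian_augment(X0, grad)
assert X1.shape == (8, 2)
assert np.allclose(X1[4], [0.1, -0.1])
```

Steering new queries along the substitute's most sensitive directions is what makes the augmented set trace the target's decision boundary with far fewer samples than random exploration.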
10

Jiang, Yi, and Dengpan Ye. "Black-Box Adversarial Attacks against Audio Forensics Models." Security and Communication Networks 2022 (January 17, 2022): 1–8. http://dx.doi.org/10.1155/2022/6410478.

Abstract:
Speech synthesis technology has made great progress in recent years and is widely used in the Internet of Things, but it also brings the risk of being abused by criminals. Therefore, a series of studies on audio forensics models has arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that relies only on the output scores of audio forensics models. To improve the transferability of adversarial attacks, we utilize the ensemble-model method. A defense method is also designed against our proposed attack method, in view of the huge threat adversarial examples pose to audio forensics models. Our experimental results on 4 forensics models trained on the LA part of the ASVspoof 2019 dataset show that our attacks can achieve a 99% attack success rate on score-only black-box models, which is competitive with the best white-box attacks, and a 60% attack success rate on decision-only black-box models. Finally, our defense method reduces the attack success rate to 16% while maintaining 98% detection accuracy of the forensics models.