Ready-made bibliography on the topic "White-box attack"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the lists of relevant articles, books, theses, abstracts, and other scholarly sources on the topic "White-box attack".

Next to every source in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when such details are provided in the source's metadata.

Journal articles on the topic "White-box attack"

1

Chen, Jinghui, Dongruo Zhou, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3486–94. http://dx.doi.org/10.1609/aaai.v34i04.5753.

Full text
Abstract:
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box or black-box attacks. For white-box attacks, optimization-based algorithms such as projected gradient descent (PGD) can achieve relatively high attack success rates within a moderate number of iterations. However, they tend to generate adversarial examples near or on the boundary of the perturbation set, resulting in large distortion. Furthermore, their corresponding black-box attack algorithms also suffer from high query complexity, thereby limiting their practical usefulness. …
Styles: APA, Harvard, Vancouver, ISO, etc.
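The PGD procedure mentioned in the abstract above can be sketched in a few lines. This is a minimal illustration on a toy logistic classifier, not code from the cited paper; the model, parameter values, and function names are all assumptions made for the example:

```python
import numpy as np

def pgd_attack(x0, w, b, y, eps=0.1, alpha=0.02, steps=20):
    """White-box PGD attack on a toy logistic classifier
    f(x) = sigmoid(w.x + b), with label y in {0, 1}.
    White-box: the attacker uses the exact gradient of the loss."""
    x = x0.copy()
    for _ in range(steps):
        z = w @ x + b
        p = 1.0 / (1.0 + np.exp(-z))        # model's predicted probability
        grad = (p - y) * w                   # d(cross-entropy)/dx
        x = x + alpha * np.sign(grad)        # ascend the loss
        x = np.clip(x, x0 - eps, x0 + eps)   # project back into the L-inf ball
    return x

# usage: perturb a correctly classified point until the prediction flips
w, b = np.array([1.0, -2.0]), 0.0
x0, y = np.array([0.5, -0.5]), 1          # w @ x0 + b = 1.5 > 0 -> class 1
x_adv = pgd_attack(x0, w, b, y, eps=0.6, alpha=0.1, steps=30)
```

The final `np.clip` is the projection step: it keeps the adversarial example inside the L-infinity ball of radius `eps` around the original input, which is exactly the "perturbation set" the abstract refers to.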
2

Josse, Sébastien. "White-box attack context cryptovirology." Journal in Computer Virology 5, no. 4 (2008): 321–34. http://dx.doi.org/10.1007/s11416-008-0097-x.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Porkodi, V., M. Sivaram, Amin Salih Mohammed, and V. Manikandan. "Survey on White-Box Attacks and Solutions." Asian Journal of Computer Science and Technology 7, no. 3 (2018): 28–32. http://dx.doi.org/10.51983/ajcst-2018.7.3.1904.

Full text
Abstract:
This research paper surveys white-box attacks and their outcomes. Wearable IoT devices can be captured and then accessed in an unauthorized manner because of their physical nature. In that case they operate under white-box attack conditions: the adversary may have full visibility of the implementation of the built-in cryptosystem and complete control over its execution platform. White-box attacks on wearable devices are therefore a genuine challenge. To serve as a countermeasure against these problems in such con…
Styles: APA, Harvard, Vancouver, ISO, etc.
4

ALSHEKH, MOKHTAR, and KÖKSAL ERENTÜRK. "DEFENSE AGAINST WHITE BOX ADVERSARIAL ATTACKS IN ARABIC NATURAL LANGUAGE PROCESSING (ANLP)." International Journal of Advanced Natural Sciences and Engineering Researches 7, no. 6 (2023): 151–55. http://dx.doi.org/10.59287/ijanser.1149.

Full text
Abstract:
Adversarial attacks are among the biggest threats to the accuracy of classifiers in machine learning systems. This type of attack tricks the classification model into making false predictions by supplying noised data whose noise only a human can detect. The risk of attacks is high in natural language processing applications because most of the data collected in this setting come from social networking sites, which impose no restrictions on users writing comments, so an attack can be created (either intentionally or unintentionally) easily and simply…
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks." Applied Sciences 10, no. 20 (2020): 7168. http://dx.doi.org/10.3390/app10207168.

Full text
Abstract:
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white-box attacks. In black-box environments, adversaries have limited knowledge of the target model. This makes it difficult to estimate gradients for crafting adversarial examples, so powerful white-box algorithms cannot be applied directly to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craft…
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Zhou, Jie, Jian Bai, and Meng Shan Jiang. "White-Box Implementation of ECDSA Based on the Cloud Plus Side Mode." Security and Communication Networks 2020 (November 19, 2020): 1–10. http://dx.doi.org/10.1155/2020/8881116.

Full text
Abstract:
The white-box attack context assumes that the running environments of algorithms are visible and modifiable. Algorithms that can resist the white-box attack context are called white-box cryptography. The elliptic curve digital signature algorithm (ECDSA) is one of the most widely used digital signature algorithms and can provide integrity, authenticity, and nonrepudiation. Since the private key in classical ECDSA is plaintext, it is easy for attackers to obtain. To increase the security of the private key under the white-box attack context, this article presents an algorithm…
Styles: APA, Harvard, Vancouver, ISO, etc.
7

LIN, Ting-Ting, and Xue-Jia LAI. "Efficient Attack to White-Box SMS4 Implementation." Journal of Software 24, no. 8 (2014): 2238–49. http://dx.doi.org/10.3724/sp.j.1001.2013.04356.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Zhang, Sicheng, Yun Lin, Zhida Bao, and Jiangzhi Fu. "A Lightweight Modulation Classification Network Resisting White Box Gradient Attacks." Security and Communication Networks 2021 (October 12, 2021): 1–10. http://dx.doi.org/10.1155/2021/8921485.

Full text
Abstract:
Improving the attack resistance of the modulation classification model is an important means of improving the security of the physical layer of the Internet of Things (IoT). In this paper, a binary modulation classification defense network (BMCDN) is proposed, which has the advantages of small model scale and strong immunity to white-box gradient attacks. Specifically, an end-to-end modulation signal recognition network that directly recognizes the form of the signal sequence is constructed, and its parameters are quantized to 1 bit to obtain the advantages of low memory usage and fast calculation…
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Gao, Xianfeng, Yu-an Tan, Hongwei Jiang, Quanxin Zhang, and Xiaohui Kuang. "Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation." Applied Sciences 9, no. 11 (2019): 2286. http://dx.doi.org/10.3390/app9112286.

Full text
Abstract:
In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, some recent studies revealed their vulnerability to small perturbations added to source inputs. The methods that generate these perturbations are called adversarial attacks, and they come in two types, black-box and white-box, according to the adversary's access to the target model. To overcome black-box attackers' inability to reach the internals of the target DNN, many researchers have put forward a series of strategies. Previous works include a method of training a…
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Jiang, Yi, and Dengpan Ye. "Black-Box Adversarial Attacks against Audio Forensics Models." Security and Communication Networks 2022 (January 17, 2022): 1–8. http://dx.doi.org/10.1155/2022/6410478.

Full text
Abstract:
Speech synthesis technology has made great progress in recent years and is widely used in the Internet of Things, but it also brings the risk of abuse by criminals. Therefore, a series of studies on audio forensics models has arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that relies only on the output scores of audio forensics models. To improve the transferability of adversarial attacks, we utilize the ensemble-model method. A defense method is also designed against our proposed attack method under the view of the…
Styles: APA, Harvard, Vancouver, ISO, etc.