
Journal articles on the topic "White-box attack"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Check out the 50 best scholarly journal articles on the topic "White-box attack".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read the work's annotation online, if the relevant details are available in the metadata.

Browse journal articles from a variety of disciplines and compile an accurate bibliography.

1

Chen, Jinghui, Dongruo Zhou, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3486–94. http://dx.doi.org/10.1609/aaai.v34i04.5753.

Abstract:
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks. For white-box attacks, optimization-based attack algorithms such as projected gradient descent (PGD) can achieve relatively high attack success rates within a moderate number of iterations. However, they tend to generate adversarial examples near or upon the boundary of the perturbation set, resulting in large distortion. Furthermore, their corresponding black-box attack algorithms also suffer from high query complexities, thereby limiting their practical usefulness.
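The projected-gradient-descent loop this abstract refers to fits in a few lines. Below is a minimal NumPy sketch of an L∞-bounded PGD attack against a toy logistic-regression "model"; the weights, step sizes, and inputs are invented for illustration and are not taken from the paper.

```python
import numpy as np

def pgd_attack(x, y, grad_fn, eps=0.1, alpha=0.02, steps=20):
    """L-infinity PGD: repeatedly step along the sign of the loss
    gradient, then project back into the eps-ball around x."""
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv, y)                       # gradient of the loss w.r.t. the input
        x_adv = x_adv + alpha * np.sign(g)          # ascent step on the loss
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)  # projection onto the perturbation set
    return x_adv

# Toy white-box "model": logistic regression with fixed, invented weights.
w = np.array([1.0, -2.0, 0.5])

def loss_grad(x, y):
    p = 1.0 / (1.0 + np.exp(-(w @ x)))  # predicted probability of class 1
    return (p - y) * w                  # gradient of the cross-entropy w.r.t. x

x = np.array([0.2, -0.1, 0.4])
x_adv = pgd_attack(x, 1.0, loss_grad)   # push the input away from class 1
```

Note how the `np.clip` projection keeps every adversarial example inside the eps-ball, which is also why PGD examples tend to sit on that ball's boundary, the distortion issue the abstract points out.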
2

Josse, Sébastien. "White-box attack context cryptovirology." Journal in Computer Virology 5, no. 4 (2008): 321–34. http://dx.doi.org/10.1007/s11416-008-0097-x.

3

Porkodi, V., M. Sivaram, Amin Salih Mohammed, and V. Manikandan. "Survey on White-Box Attacks and Solutions." Asian Journal of Computer Science and Technology 7, no. 3 (2018): 28–32. http://dx.doi.org/10.51983/ajcst-2018.7.3.1904.

Abstract:
This research paper surveys white-box attacks and their consequences. Wearable IoT devices can be captured and then accessed in an unauthorized manner because of their physical nature. In that case, they sit in a white-box attack setting: the opponent may have full visibility into the implementation of the built-in cryptosystem, including complete control over its execution platform. Defending wearable devices against white-box attacks is undoubtedly a challenge. To serve as a countermeasure against these problems in such con
4

Alshekh, Mokhtar, and Köksal Erentürk. "Defense against White Box Adversarial Attacks in Arabic Natural Language Processing (ANLP)." International Journal of Advanced Natural Sciences and Engineering Researches 7, no. 6 (2023): 151–55. http://dx.doi.org/10.59287/ijanser.1149.

Abstract:
Adversarial attacks are among the biggest threats affecting the accuracy of classifiers in machine learning systems. This type of attack tricks the classification model into making false predictions by supplying noisy data in which only a human can detect the noise. The risk of attack is high in natural language processing applications because most of the data collected in this case comes from social networking sites that do not impose any restrictions on users when writing comments, which allows an attack to be created (either intentionally or unintentionally) easily and simply af
5

Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks." Applied Sciences 10, no. 20 (2020): 7168. http://dx.doi.org/10.3390/app10207168.

Abstract:
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white box attacks. In black-box environments, adversaries have limited knowledge regarding the target model. This makes it difficult to estimate gradients for crafting adversarial examples, such that powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craf
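The substitute-model strategy this abstract describes, query the black box, train a local copy, then attack the copy, can be illustrated end to end with a toy linear target. Everything below (the hidden target weights, query budget, and step sizes) is invented for the sketch and is not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box target: we may only observe its output labels.
w_target = np.array([2.0, -1.0])
def target_predict(X):
    return (X @ w_target > 0).astype(float)

# Step 1: query the target on synthetic inputs to build a labeled set.
X_q = rng.normal(size=(500, 2))
y_q = target_predict(X_q)

# Step 2: train a substitute model (logistic regression via gradient descent).
w_sub = np.zeros(2)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X_q @ w_sub)))
    w_sub -= 0.1 * X_q.T @ (p - y_q) / len(X_q)

# Step 3: craft a white-box (FGSM-style) example on the substitute
# and transfer it to the black-box target.
x = np.array([0.5, 0.1])                              # target labels this class 1
g = (1.0 / (1.0 + np.exp(-(x @ w_sub))) - 1.0) * w_sub  # loss gradient for y=1
x_adv = x + 0.6 * np.sign(g)                          # large step for the toy example
```

The point of the construction is that gradients are taken only on the substitute; the target is touched solely through label queries, which is exactly the constraint of the black-box setting.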
6

Zhou, Jie, Jian Bai, and Meng Shan Jiang. "White-Box Implementation of ECDSA Based on the Cloud Plus Side Mode." Security and Communication Networks 2020 (November 19, 2020): 1–10. http://dx.doi.org/10.1155/2020/8881116.

Abstract:
White-box attack context assumes that the running environments of algorithms are visible and modifiable. Algorithms that can resist the white-box attack context are called white-box cryptography. The elliptic curve digital signature algorithm (ECDSA) is one of the most widely used digital signature algorithms which can provide integrity, authenticity, and nonrepudiation. Since the private key in the classical ECDSA is plaintext, it is easy for attackers to obtain the private key. To increase the security of the private key under the white-box attack context, this article presents an algorithm
7

Lin, Ting-Ting, and Xue-Jia Lai. "Efficient Attack to White-Box SMS4 Implementation." Journal of Software 24, no. 8 (2014): 2238–49. http://dx.doi.org/10.3724/sp.j.1001.2013.04356.

8

Zhang, Sicheng, Yun Lin, Zhida Bao, and Jiangzhi Fu. "A Lightweight Modulation Classification Network Resisting White Box Gradient Attacks." Security and Communication Networks 2021 (October 12, 2021): 1–10. http://dx.doi.org/10.1155/2021/8921485.

Abstract:
Improving the attack resistance of the modulation classification model is an important means to improve the security of the physical layer of the Internet of Things (IoT). In this paper, a binary modulation classification defense network (BMCDN) was proposed, which has the advantages of small model scale and strong immunity to white box gradient attacks. Specifically, an end-to-end modulation signal recognition network that directly recognizes the form of the signal sequence is constructed, and its parameters are quantized to 1 bit to obtain the advantages of low memory usage and fast calculat
9

Gao, Xianfeng, Yu-an Tan, Hongwei Jiang, Quanxin Zhang, and Xiaohui Kuang. "Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation." Applied Sciences 9, no. 11 (2019): 2286. http://dx.doi.org/10.3390/app9112286.

Abstract:
In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, some recent studies revealed their vulnerability to small perturbations added to source inputs. The ways of generating these perturbations are called adversarial attacks, which comprise two types, black-box and white-box attacks, according to the adversary's access to target models. In order to overcome black-box attackers' inability to reach the internals of the target DNN, many researchers put forward a series of strategies. Previous works include a method of training a
10

Jiang, Yi, and Dengpan Ye. "Black-Box Adversarial Attacks against Audio Forensics Models." Security and Communication Networks 2022 (January 17, 2022): 1–8. http://dx.doi.org/10.1155/2022/6410478.

Abstract:
Speech synthesis technology has made great progress in recent years and is widely used in the Internet of things, but it also brings the risk of being abused by criminals. Therefore, a series of researches on audio forensics models have arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that only relies on output scores of audio forensics models. To improve the transferability of adversarial attacks, we utilize the ensemble-model method. A defense method is also designed against our proposed attack method under the view of the
11

Lee, Xian Yeow, Sambit Ghadai, Kai Liang Tan, Chinmay Hegde, and Soumik Sarkar. "Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 4577–84. http://dx.doi.org/10.1609/aaai.v34i04.5887.

Abstract:
Robustness of Deep Reinforcement Learning (DRL) algorithms towards adversarial attacks in real world applications such as those deployed in cyber-physical systems (CPS) are of increasing concern. Numerous studies have investigated the mechanisms of attacks on the RL agent's state space. Nonetheless, attacks on the RL agent's action space (corresponding to actuators in engineering systems) are equally perverse, but such attacks are relatively less studied in the ML literature. In this work, we first frame the problem as an optimization problem of minimizing the cumulative reward of an RL agent
12

Chitic, Raluca, Ali Osman Topal, and Franck Leprévost. "Empirical Perturbation Analysis of Two Adversarial Attacks: Black Box versus White Box." Applied Sciences 12, no. 14 (2022): 7339. http://dx.doi.org/10.3390/app12147339.

Abstract:
Through the addition of humanly imperceptible noise to an image classified as belonging to a category ca, targeted adversarial attacks can lead convolutional neural networks (CNNs) to classify a modified image as belonging to any predefined target class ct≠ca. To achieve a better understanding of the inner workings of adversarial attacks, this study analyzes the adversarial images created by two completely opposite attacks against 10 ImageNet-trained CNNs. A total of 2×437 adversarial images are created by EAtarget,C, a black-box evolutionary algorithm (EA), and by the basic iterative method (
13

Dionysiou, Antreas, Vassilis Vassiliades, and Elias Athanasopoulos. "Exploring Model Inversion Attacks in the Black-box Setting." Proceedings on Privacy Enhancing Technologies 2023, no. 1 (2023): 190–206. http://dx.doi.org/10.56553/popets-2023-0012.

Abstract:
Model Inversion (MI) attacks, that aim to recover semantically meaningful reconstructions for each target class, have been extensively studied and demonstrated to be successful in the white-box setting. On the other hand, black-box MI attacks demonstrate low performance in terms of both effectiveness, i.e., reconstructing samples which are identifiable as their ground-truth, and efficiency, i.e., time or queries required for completing the attack process. Whether or not effective and efficient black-box MI attacks can be conducted on complex targets, such as Convolutional Neural Networks (CNNs
14

Du, Xiaohu, Jie Yu, Zibo Yi, et al. "A Hybrid Adversarial Attack for Different Application Scenarios." Applied Sciences 10, no. 10 (2020): 3559. http://dx.doi.org/10.3390/app10103559.

Abstract:
Adversarial attack against natural language has been a hot topic in the field of artificial intelligence security in recent years. It is mainly to study the methods and implementation of generating adversarial examples. The purpose is to better deal with the vulnerability and security of deep learning systems. According to whether the attacker understands the deep learning model structure, the adversarial attack is divided into black-box attack and white-box attack. In this paper, we propose a hybrid adversarial attack for different application scenarios. Firstly, we propose a novel black-box
15

Duan, Mingxing, Kenli Li, Jiayan Deng, Bin Xiao, and Qi Tian. "A Novel Multi-Sample Generation Method for Adversarial Attacks." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (2022): 1–21. http://dx.doi.org/10.1145/3506852.

Abstract:
Deep learning models are widely used in daily life, which bring great convenience to our lives, but they are vulnerable to attacks. How to build an attack system with strong generalization ability to test the robustness of deep learning systems is a hot issue in current research, among which the research on black-box attacks is extremely challenging. Most current research on black-box attacks assumes that the input dataset is known. However, in fact, it is difficult for us to obtain detailed information for those datasets. In order to solve the above challenges, we propose a multi-sample gener
16

Fu, Zhongwang, and Xiaohui Cui. "ELAA: An Ensemble-Learning-Based Adversarial Attack Targeting Image-Classification Model." Entropy 25, no. 2 (2023): 215. http://dx.doi.org/10.3390/e25020215.

Abstract:
The research on image-classification-adversarial attacks is crucial in the realm of artificial intelligence (AI) security. Most of the image-classification-adversarial attack methods are for white-box settings, demanding target model gradients and network architectures, which is less practical when facing real-world cases. However, black-box adversarial attacks immune to the above limitations and reinforcement learning (RL) seem to be a feasible solution to explore an optimized evasion policy. Unfortunately, existing RL-based works perform worse than expected in the attack success rate. In lig
17

Chen, Yiding, and Xiaojin Zhu. "Optimal Attack against Autoregressive Models by Manipulating the Environment." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3545–52. http://dx.doi.org/10.1609/aaai.v34i04.5760.

Abstract:
We describe an optimal adversarial attack formulation against autoregressive time series forecast using Linear Quadratic Regulator (LQR). In this threat model, the environment evolves according to a dynamical system; an autoregressive model observes the current environment state and predicts its future values; an attacker has the ability to modify the environment state in order to manipulate future autoregressive forecasts. The attacker's goal is to force autoregressive forecasts into tracking a target trajectory while minimizing its attack expenditure. In the white-box setting where the attac
18

Tu, Chun-Chen, Paishun Ting, Pin-Yu Chen, et al. "AutoZOOM: Autoencoder-Based Zeroth Order Optimization Method for Attacking Black-Box Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 742–49. http://dx.doi.org/10.1609/aaai.v33i01.3301742.

Abstract:
Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting. However, when attacking a deployed machine learning service, one can only acquire the input-output correspondences of the target model; this is the so-called black-box attack setting. The major drawback of existing black-box attacks is the need for excessive model queries, which may give a false sense of model robustness due to inefficient query designs. To bri
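The query cost this abstract highlights comes from how zeroth-order attacks estimate gradients from input-output pairs alone. A bare-bones coordinate-wise estimator, with a made-up quadratic standing in for the model's loss, looks like this:

```python
import numpy as np

def estimate_gradient(f, x, h=1e-4):
    """Coordinate-wise central finite differences: approximates the gradient
    of a scalar black-box function f using only value queries."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)  # two queries per coordinate
    return g

# Hypothetical black-box loss: we can query values but never gradients.
def loss(x):
    return np.sum(x ** 2)

x = np.array([1.0, -2.0, 0.5])
g_est = estimate_gradient(loss, x)  # should approximate the true gradient 2*x
```

Two queries per input coordinate is exactly the "excessive model queries" problem; AutoZOOM's contribution is reducing that budget, e.g. by estimating gradients in a lower-dimensional (autoencoder) space.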
19

Usoltsev, Yakov, Balzhit Lodonova, Alexander Shelupanov, Anton Konev, and Evgeny Kostyuchenko. "Adversarial Attacks Impact on the Neural Network Performance and Visual Perception of Data under Attack." Information 13, no. 2 (2022): 77. http://dx.doi.org/10.3390/info13020077.

Abstract:
Machine learning algorithms based on neural networks are vulnerable to adversarial attacks. The use of attacks against authentication systems greatly reduces the accuracy of such a system, despite the complexity of generating a competitive example. As part of this study, a white-box adversarial attack on an authentication system was carried out. The basis of the authentication system is a neural network perceptron, trained on a dataset of frequency signatures of sign. For an attack on an atypical dataset, the following results were obtained: with an attack intensity of 25%, the authentication
20

Fang, Yong, Cheng Huang, Yijia Xu, and Yang Li. "RLXSS: Optimizing XSS Detection Model to Defend Against Adversarial Attacks Based on Reinforcement Learning." Future Internet 11, no. 8 (2019): 177. http://dx.doi.org/10.3390/fi11080177.

Abstract:
With the development of artificial intelligence, machine learning algorithms and deep learning algorithms are widely applied to attack detection models. Adversarial attacks against artificial intelligence models become inevitable problems when there is a lack of research on the cross-site scripting (XSS) attack detection model for defense against attacks. It is extremely important to design a method that can effectively improve the detection model against attack. In this paper, we present a method based on reinforcement learning (called RLXSS), which aims to optimize the XSS detection model to
21

Wei, Zhipeng, Jingjing Chen, Zuxuan Wu, and Yu-Gang Jiang. "Boosting the Transferability of Video Adversarial Examples via Temporal Translation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (2022): 2659–67. http://dx.doi.org/10.1609/aaai.v36i3.20168.

Abstract:
Although deep-learning based video recognition models have achieved remarkable success, they are vulnerable to adversarial examples that are generated by adding human-imperceptible perturbations on clean video samples. As indicated in recent studies, adversarial examples are transferable, which makes it feasible for black-box attacks in real-world applications. Nevertheless, most existing adversarial attack methods have poor transferability when attacking other video models and transfer-based attacks on video models are still unexplored. To this end, we propose to boost the transferability of
22

Chang, Heng, Yu Rong, Tingyang Xu, et al. "A Restricted Black-Box Adversarial Framework Towards Attacking Graph Embedding Models." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3389–96. http://dx.doi.org/10.1609/aaai.v34i04.5741.

Abstract:
With the great success of graph embedding model on both academic and industry area, the robustness of graph embedding against adversarial attack inevitably becomes a central problem in graph learning domain. Regardless of the fruitful progress, most of the current works perform the attack in a white-box fashion: they need to access the model predictions and labels to construct their adversarial loss. However, the inaccessibility of model predictions in real systems makes the white-box attack impractical to real graph learning system. This paper promotes current frameworks in a more general and
23

Park, Sanglee, and Jungmin So. "On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification." Applied Sciences 10, no. 22 (2020): 8079. http://dx.doi.org/10.3390/app10228079.

Abstract:
State-of-the-art neural network models are actively used in various fields, but it is well-known that they are vulnerable to adversarial example attacks. Throughout the efforts to make the models robust against adversarial example attacks, it has been found to be a very difficult task. While many defense approaches were shown to be not effective, adversarial training remains as one of the promising methods. In adversarial training, the training data are augmented by “adversarial” samples generated using an attack algorithm. If the attacker uses a similar attack algorithm to generate adversaria
24

Won, Jongho, Seung-Hyun Seo, and Elisa Bertino. "A Secure Shuffling Mechanism for White-Box Attack-Resistant Unmanned Vehicles." IEEE Transactions on Mobile Computing 19, no. 5 (2020): 1023–39. http://dx.doi.org/10.1109/tmc.2019.2903048.

25

Pedersen, Joseph, Rafael Muñoz-Gómez, Jiangnan Huang, Haozhe Sun, Wei-Wei Tu, and Isabelle Guyon. "LTU Attacker for Membership Inference." Algorithms 15, no. 7 (2022): 254. http://dx.doi.org/10.3390/a15070254.

Abstract:
We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks, in both the black-box and white-box setting, when the trainer and the trained model are publicly released. The Defender aims at optimizing a dual objective: utility and privacy. Privacy is evaluated with the membership prediction error of a so-called “Leave-Two-Unlabeled” LTU Attacker, having access to all of the Defender and Reserved data, except for the membership label of one sample from each, giving the strongest possible attack scenario. We p
26

Gomez-Alanis, Alejandro, Jose A. Gonzalez-Lopez, and Antonio M. Peinado. "GANBA: Generative Adversarial Network for Biometric Anti-Spoofing." Applied Sciences 12, no. 3 (2022): 1454. http://dx.doi.org/10.3390/app12031454.

Abstract:
Automatic speaker verification (ASV) is a voice biometric technology whose security might be compromised by spoofing attacks. To increase the robustness against spoofing attacks, presentation attack detection (PAD) or anti-spoofing systems for detecting replay, text-to-speech and voice conversion-based spoofing attacks are being developed. However, it was recently shown that adversarial spoofing attacks may seriously fool anti-spoofing systems. Moreover, the robustness of the whole biometric system (ASV + PAD) against this new type of attack is completely unexplored. In this work, a new genera
27

Croce, Francesco, Maksym Andriushchenko, Naman D. Singh, Nicolas Flammarion, and Matthias Hein. "Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6437–45. http://dx.doi.org/10.1609/aaai.v36i6.20595.

Abstract:
We propose a versatile framework based on random search, Sparse-RS, for score-based sparse targeted and untargeted attacks in the black-box setting. Sparse-RS does not rely on substitute models and achieves state-of-the-art success rate and query efficiency for multiple sparse attack models: L0-bounded perturbations, adversarial patches, and adversarial frames. The L0-version of untargeted Sparse-RS outperforms all black-box and even all white-box attacks for different models on MNIST, CIFAR-10, and ImageNet. Moreover, our untargeted Sparse-RS achieves very high success rates even for the chal
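The random-search idea behind Sparse-RS can be sketched independently of any real model: propose a perturbation touching at most k coordinates, query a score, and keep the best candidate. The score function and constants below are illustrative stand-ins, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_random_search(x, score_fn, k=2, iters=200, lo=-1.0, hi=1.0):
    """Score-based random search under an L0 budget: each trial overwrites
    at most k coordinates with random values and is kept only if it
    increases the queried score (e.g. the target model's loss)."""
    best = x.copy()
    best_score = score_fn(best)
    for _ in range(iters):
        cand = x.copy()
        idx = rng.choice(x.size, size=k, replace=False)  # coordinates to modify
        cand[idx] = rng.uniform(lo, hi, size=k)
        s = score_fn(cand)
        if s > best_score:
            best, best_score = cand, s
    return best

# Hypothetical score: how far a linear model's logit moves from its clean value.
w = np.array([0.5, -1.0, 2.0, 0.1])
x = np.zeros(4)
clean = w @ x
adv = sparse_random_search(x, lambda z: abs(w @ z - clean))
```

Because every candidate restarts from the clean input, each accepted perturbation automatically satisfies the L0 bound, and only score queries, never gradients, are needed.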
28

Huang, Yang, Yuling Chen, Xuewei Wang, Jing Yang, and Qi Wang. "Promoting Adversarial Transferability via Dual-Sampling Variance Aggregation and Feature Heterogeneity Attacks." Electronics 12, no. 3 (2023): 767. http://dx.doi.org/10.3390/electronics12030767.

Abstract:
At present, deep neural networks have been widely used in various fields, but their vulnerability requires attention. The adversarial attack aims to mislead the model by generating imperceptible perturbations on the source model, and although white-box attacks have achieved good success rates, existing adversarial samples exhibit weak migration in the black-box case, especially on some adversarially trained defense models. Previous work for gradient-based optimization either optimizes the image before iteration or optimizes the gradient during iteration, so it results in the generated adversar
29

Riadi, Imam, Rusydi Umar, Iqbal Busthomi, and Arif Wirawan Muhammad. "Block-hash of blockchain framework against man-in-the-middle attacks." Register: Jurnal Ilmiah Teknologi Sistem Informasi 8, no. 1 (2021): 1. http://dx.doi.org/10.26594/register.v8i1.2190.

Abstract:
Payload authentication is vulnerable to man-in-the-middle (MITM) attacks. Blockchain technology offers methods such as peer-to-peer networking, block hashes, and proof-of-work to secure the payload of the authentication process. The implementation uses the block hash and proof-of-work methods of blockchain technology, and testing uses white-box testing together with security tests distributed to system security practitioners who are competent in MITM attacks. The analysis results before implementing blockchain technology show that the authentication payload is still in plain text, so the data confidentiality has not mini
30

Combey, Théo, António Loison, Maxime Faucher, and Hatem Hajri. "Probabilistic Jacobian-Based Saliency Maps Attacks." Machine Learning and Knowledge Extraction 2, no. 4 (2020): 558–78. http://dx.doi.org/10.3390/make2040030.

Abstract:
Neural network classifiers (NNCs) are known to be vulnerable to malicious adversarial perturbations of inputs including those modifying a small fraction of the input features named sparse or L0 attacks. Effective and fast L0 attacks, such as the widely used Jacobian-based Saliency Map Attack (JSMA) are practical to fool NNCs but also to improve their robustness. In this paper, we show that penalising saliency maps of JSMA by the output probabilities and the input features of the NNC leads to more powerful attack algorithms that better take into account each input’s characteristics. This leads
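JSMA's saliency map scores each input feature by whether increasing it pushes the target-class logit up while pushing the other logits down. For a linear model the Jacobian of the logits is just the weight matrix, which allows a compact sketch; the weights and inputs here are invented for illustration.

```python
import numpy as np

# Hypothetical linear two-class model: logits = W @ x.
W = np.array([[1.0, -0.5, 0.2],    # class-0 weights
              [-0.3, 1.2, 0.1]])   # class-1 weights

def saliency_map(x, target):
    """Jacobian-based saliency: positive only for features whose increase
    raises the target logit (dt > 0) while lowering the others (do < 0)."""
    J = W                        # Jacobian of the logits w.r.t. x is W itself
    dt = J[target]               # d(target logit)/dx
    do = J.sum(axis=0) - dt      # summed gradients of the other logits
    return np.where((dt > 0) & (do < 0), dt * -do, 0.0)

x = np.array([0.1, 0.2, 0.3])
s = saliency_map(x, target=1)
i = int(np.argmax(s))  # most salient feature to perturb toward class 1
```

A JSMA-style attack then increases the selected feature and recomputes the map, modifying only a small fraction of inputs, which is what makes it an L0 (sparse) attack.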
31

Ding, Daizong, Mi Zhang, Fuli Feng, Yuanmin Huang, Erling Jiang, and Min Yang. "Black-Box Adversarial Attack on Time Series Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7358–68. http://dx.doi.org/10.1609/aaai.v37i6.25896.

Abstract:
With the increasing use of deep neural network (DNN) in time series classification (TSC), recent work reveals the threat of adversarial attack, where the adversary can construct adversarial examples to cause model mistakes. However, existing researches on the adversarial attack of TSC typically adopt an unrealistic white-box setting with model details transparent to the adversary. In this work, we study a more rigorous black-box setting with attack detection applied, which restricts gradient access and requires the adversarial example to be also stealthy. Theoretical analyses reveal that the k
32

Jin, Di, Bingdao Feng, Siqi Guo, Xiaobao Wang, Jianguo Wei, and Zhen Wang. "Local-Global Defense against Unsupervised Adversarial Attacks on Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (2023): 8105–13. http://dx.doi.org/10.1609/aaai.v37i7.25979.

Abstract:
Unsupervised pre-training algorithms for graph representation learning are vulnerable to adversarial attacks, such as first-order perturbations on graphs, which will have an impact on particular downstream applications. Designing an effective representation learning strategy against white-box attacks remains a crucial open topic. Prior research attempts to improve representation robustness by maximizing mutual information between the representation and the perturbed graph, which is sub-optimal because it does not adapt its defense techniques to the severity of the attack. To address this issue
33

Das, Debayan, Santosh Ghosh, Arijit Raychowdhury, and Shreyas Sen. "EM/Power Side-Channel Attack: White-Box Modeling and Signature Attenuation Countermeasures." IEEE Design & Test 38, no. 3 (2021): 67–75. http://dx.doi.org/10.1109/mdat.2021.3065189.

34

Wang, Yixiang, Jiqiang Liu, Xiaolin Chang, Ricardo J. Rodríguez, and Jianhua Wang. "DI-AA: An interpretable white-box attack for fooling deep neural networks." Information Sciences 610 (September 2022): 14–32. http://dx.doi.org/10.1016/j.ins.2022.07.157.

35

Koga, Kazuki, and Kazuhiro Takemoto. "Simple Black-Box Universal Adversarial Attacks on Deep Neural Networks for Medical Image Classification." Algorithms 15, no. 5 (2022): 144. http://dx.doi.org/10.3390/a15050144.

Abstract:
Universal adversarial attacks, which hinder most deep neural network (DNN) tasks using only a single perturbation called universal adversarial perturbation (UAP), are a realistic security threat to the practical application of a DNN for medical imaging. Given that computer-based systems are generally operated under a black-box condition in which only input queries are allowed and outputs are accessible, the impact of UAPs seems to be limited because well-used algorithms for generating UAPs are limited to white-box conditions in which adversaries can access model parameters. Nevertheless, we pr
36

Das, Debayan, and Shreyas Sen. "Electromagnetic and Power Side-Channel Analysis: Advanced Attacks and Low-Overhead Generic Countermeasures through White-Box Approach." Cryptography 4, no. 4 (2020): 30. http://dx.doi.org/10.3390/cryptography4040030.

Abstract:
Electromagnetic and power side-channel analysis (SCA) provides attackers a prominent tool to extract the secret key from the cryptographic engine. In this article, we present our cross-device deep learning (DL)-based side-channel attack (X-DeepSCA) which reduces the time to attack on embedded devices, thereby increasing the threat surface significantly. Consequently, with the knowledge of such advanced attacks, we performed a ground-up white-box analysis of the crypto IC to root-cause the source of the electromagnetic (EM) side-channel leakage. Equipped with the understanding that the higher-l
37

Yang, Zhifei, Wenmin Li, Fei Gao, and Qiaoyan Wen. "FAPA: Transferable Adversarial Attacks Based on Foreground Attention." Security and Communication Networks 2022 (October 29, 2022): 1–8. http://dx.doi.org/10.1155/2022/4447307.

Abstract:
Deep learning models are vulnerable to attacks by adversarial examples. However, current studies are mainly limited to generating adversarial examples for specific models, and the transferability of adversarial examples between different models is rarely studied. At the same time, existing studies do not consider that adding perturbations at the right positions in the image can better improve the transferability of adversarial examples. As the main part of the picture, the foreground should be given more weight by the model during recognition. Will adding more perturbations to the foreground informati
38

Haq, Ijaz Ul, Zahid Younas Khan, Arshad Ahmad, et al. "Evaluating and Enhancing the Robustness of Sustainable Neural Relationship Classifiers Using Query-Efficient Black-Box Adversarial Attacks." Sustainability 13, no. 11 (2021): 5892. http://dx.doi.org/10.3390/su13115892.

Full text source
Abstract:
Neural relation extraction (NRE) models are the backbone of various machine learning tasks, including knowledge base enrichment, information extraction, and document summarization. Despite the vast popularity of these models, their vulnerabilities remain unknown; this is of high concern given their growing use in security-sensitive applications such as question answering and machine translation in the aspects of sustainability. In this study, we demonstrate that NRE models are inherently vulnerable to adversarially crafted text that contains imperceptible modifications of the original but can
APA, Harvard, Vancouver, ISO, etc. styles
39

Li, Chenwei, Hengwei Zhang, Bo Yang, and Jindong Wang. "Image classification adversarial attack with improved resizing transformation and ensemble models." PeerJ Computer Science 9 (July 25, 2023): e1475. http://dx.doi.org/10.7717/peerj-cs.1475.

Full text source
Abstract:
Convolutional neural networks have achieved great success in computer vision, but they output incorrect predictions when deliberate perturbations are applied to the original input. These human-indistinguishable replicas are called adversarial examples, and this property can be used to evaluate network robustness and security. The white-box attack success rate is considerable when the network structure and parameters are already known. In a black-box attack, however, the success rate of adversarial examples is relatively low and their transferability remains to be improved. This article refers to model augmentatio
APA, Harvard, Vancouver, ISO, etc. styles
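The resizing transformation this entry builds on is commonly realized as a random resize-and-pad applied to the input before each gradient step, as in diverse-input style attacks. A minimal sketch, assuming a grayscale image stored as a nested list and nearest-neighbor interpolation; the function names are illustrative, not taken from the paper:

```python
import random

def resize_nn(img, new_h, new_w):
    # nearest-neighbor resize of a 2D list-of-lists image
    h, w = len(img), len(img[0])
    return [[img[i * h // new_h][j * w // new_w] for j in range(new_w)]
            for i in range(new_h)]

def diverse_input(img, out_size, rng=random):
    # randomly resize the image, then zero-pad it onto a fixed canvas;
    # applying this before each gradient step diversifies the inputs the
    # attack sees, which tends to improve black-box transferability
    h = len(img)
    new = rng.randint(h, out_size)          # random intermediate size
    resized = resize_nn(img, new, new)
    top = rng.randint(0, out_size - new)    # random pad offsets
    left = rng.randint(0, out_size - new)
    canvas = [[0.0] * out_size for _ in range(out_size)]
    for i in range(new):
        for j in range(new):
            canvas[top + i][left + j] = resized[i][j]
    return canvas
```

Each call produces a differently sized and shifted copy, so gradients averaged over several calls approximate an attack against a family of resized models rather than one fixed input.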
40

Lin, Gengyou, Zhisong Pan, Xingyu Zhou, et al. "Boosting Adversarial Transferability with Shallow-Feature Attack on SAR Images." Remote Sensing 15, no. 10 (2023): 2699. http://dx.doi.org/10.3390/rs15102699.

Full text source
Abstract:
Adversarial example generation on Synthetic Aperture Radar (SAR) images is an important research area that could have significant impacts on security and environmental monitoring. However, most current adversarial attack methods on SAR images are designed for white-box situations by end-to-end means, which are often difficult to achieve in real-world situations. This article proposes a novel black-box targeted attack method, called Shallow-Feature Attack (SFA). Specifically, SFA assumes that the shallow features of the model are more capable of reflecting spatial and semantic information such
APA, Harvard, Vancouver, ISO, etc. styles
41

Zhang, Chao, and Yu Wang. "Research on the Structure of Authentication Protocol Analysis Based on MSCs/Promela." Advanced Materials Research 989-994 (July 2014): 4698–703. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.4698.

Full text source
Abstract:
To discover the existing or possibly existing vulnerabilities of present authentication protocols, we proposed a structure for authentication protocol analysis, which consists of white-box analysis, black-box analysis, and an indicator system as the three main functional components. White-box analysis makes use of the transformation from MSCs (Message Sequence Charts) to Promela (Process Meta Language), the input language of the remarkable model checker SPIN; black-box analysis is based on the attack platform of authentication protocol analysis; the indicator system is decided by deducibility restraint met
APA, Harvard, Vancouver, ISO, etc. styles
42

Zhang, Yue, Seong-Yoon Shin, Xujie Tan, and Bin Xiong. "A Self-Adaptive Approximated-Gradient-Simulation Method for Black-Box Adversarial Sample Generation." Applied Sciences 13, no. 3 (2023): 1298. http://dx.doi.org/10.3390/app13031298.

Full text source
Abstract:
Deep neural networks (DNNs) have been widely applied to a variety of everyday tasks. However, DNNs are sensitive to adversarial attacks which, by adding imperceptible perturbation samples to an original image, can easily alter the output. In state-of-the-art white-box attack methods, perturbation samples can successfully fool DNNs through the network gradient. However, these methods generate perturbation samples by considering only the sign information of the gradient and dropping its magnitude. Accordingly, gradients of different magnitudes may adopt the same sign to construct perturbation sample
APA, Harvard, Vancouver, ISO, etc. styles
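The sign-only construction this abstract criticizes is the core of FGSM-style attacks: the perturbation keeps each gradient component's sign and discards its magnitude. A minimal sketch on a toy linear score function, under the assumption that the gradient of the score with respect to the input equals the weight vector; names are illustrative:

```python
def score(weights, x):
    # toy linear "model": its gradient w.r.t. x is simply `weights`
    return sum(w * xi for w, xi in zip(weights, x))

def fgsm_step(weights, x, eps):
    # keep only the SIGN of each gradient component and drop its
    # magnitude -- the simplification the abstract points out
    signs = [(w > 0) - (w < 0) for w in weights]
    return [xi + eps * s for xi, s in zip(x, signs)]
```

Because every nonzero component moves by exactly eps regardless of how strongly it influences the output, components with tiny and huge gradients are perturbed identically, which is the motivation for magnitude-aware variants.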
43

Guo, Lu, and Hua Zhang. "A white-box impersonation attack on the FaceID system in the real world." Journal of Physics: Conference Series 1651 (November 2020): 012037. http://dx.doi.org/10.1088/1742-6596/1651/1/012037.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
44

Shi, Yang, Qin Liu, and Qinpei Zhao. "A Secure Implementation of a Symmetric Encryption Algorithm in White-Box Attack Contexts." Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/431794.

Full text source
Abstract:
In a white-box context, an adversary has total visibility of the implementation of the cryptosystem and full control over its execution platform. As a countermeasure against the threat of key compromise in this context, a new secure implementation of the symmetric encryption algorithm SHARK is proposed. The general approach is to merge several steps of the round function of SHARK into table lookups, blended by randomly generated mixing bijections. We prove the soundness of the implementation of the algorithm and analyze its security and efficiency. The implementation can be used in web hosts,
APA, Harvard, Vancouver, ISO, etc. styles
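The abstract's approach of merging round steps into lookup tables blended with random bijections can be illustrated on a single S-box. This is a toy sketch (an 8-entry permutation rather than SHARK's actual components, with illustrative names): the stored table computes the S-box on encoded values, so the raw S-box never appears in the white-box implementation.

```python
import random

def random_bijection(n, rng):
    # a uniformly random permutation of {0, ..., n-1}
    p = list(range(n))
    rng.shuffle(p)
    return p

def invert(p):
    # inverse of a permutation given as a list
    inv = [0] * len(p)
    for x, y in enumerate(p):
        inv[y] = x
    return inv

def encode_table(sbox, f_in, g_out):
    # T = g_out o S o f_in^{-1}: lookups operate on encoded inputs and
    # return encoded outputs; only T is shipped, never the raw S-box
    inv_f = invert(f_in)
    return [g_out[sbox[inv_f[v]]] for v in range(len(sbox))]
```

In a full design, the output encoding of one table cancels against the input encoding of the next, so intermediate values are never exposed in the clear on the attacker-controlled platform.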
45

Liu, Zhenpeng, Ruilin Li, Dewei Miao, Lele Ren, and Yonggang Zhao. "Membership Inference Defense in Distributed Federated Learning Based on Gradient Differential Privacy and Trust Domain Division Mechanisms." Security and Communication Networks 2022 (July 14, 2022): 1–14. http://dx.doi.org/10.1155/2022/1615476.

Full text source
Abstract:
Distributed federated learning models are vulnerable to membership inference attacks (MIA) because they remember information about their training data. Through a comprehensive privacy analysis of distributed federated learning models, we design an attack model based on generative adversarial networks (GAN) and member inference attacks (MIA). Malicious participants (attackers) utilize the attack model to successfully reconstruct training sets of other regular participants without any negative impact on the global model. To solve this problem, we apply the differential privacy method to the trai
APA, Harvard, Vancouver, ISO, etc. styles
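Applying differential privacy to training gradients, as this entry describes, is typically done with a clip-then-noise step in the style of DP-SGD. A minimal sketch under that assumption (parameter names are illustrative, not from the paper):

```python
import random

def privatize_gradient(grad, clip_norm, sigma, rng=random):
    # 1) clip the per-participant gradient to a bounded L2 norm, then
    # 2) add Gaussian noise scaled to that bound -- together these limit
    #    how much any single training record can influence the update a
    #    membership-inference attacker gets to observe
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale + rng.gauss(0.0, sigma * clip_norm) for g in grad]
```

The clip bound caps each participant's sensitivity and the noise multiplier sigma trades model utility against the strength of the privacy guarantee.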
46

Wang, Fangwei, Yuanyuan Lu, Changguang Wang, and Qingru Li. "Binary Black-Box Adversarial Attacks with Evolutionary Learning against IoT Malware Detection." Wireless Communications and Mobile Computing 2021 (August 30, 2021): 1–9. http://dx.doi.org/10.1155/2021/8736946.

Full text source
Abstract:
5G is about to open Pandora’s box of security threats to the Internet of Things (IoT). Key technologies, such as network function virtualization and edge computing introduced by the 5G network, bring new security threats and risks to the Internet infrastructure. Therefore, higher detection and defense against malware are required. Nowadays, deep learning (DL) is widely used in malware detection. Recently, research has demonstrated that adversarial attacks have posed a hazard to DL-based models. The key issue of enhancing the antiattack performance of malware detection systems that are used to
APA, Harvard, Vancouver, ISO, etc. styles
47

Mao, Junjie, Bin Weng, Tianqiang Huang, Feng Ye, and Liqing Huang. "Research on Multimodality Face Antispoofing Model Based on Adversarial Attacks." Security and Communication Networks 2021 (August 9, 2021): 1–12. http://dx.doi.org/10.1155/2021/3670339.

Full text source
Abstract:
Face antispoofing detection aims to identify whether the user’s face identity information is legal. Multimodality models generally have high accuracy. However, the existing works of face antispoofing detection have the problem of insufficient research on the safety of the model itself. Therefore, the purpose of this paper is to explore the vulnerability of existing face antispoofing models, especially multimodality models, when resisting various types of attacks. In this paper, we firstly study the resistance ability of multimodality models when they encounter white-box attacks and black-box a
APA, Harvard, Vancouver, ISO, etc. styles
48

Suri, Anshuman, and David Evans. "Formalizing and Estimating Distribution Inference Risks." Proceedings on Privacy Enhancing Technologies 2022, no. 4 (2022): 528–51. http://dx.doi.org/10.56553/popets-2022-0121.

Full text source
Abstract:
Distribution inference, sometimes called property inference, infers statistical properties about a training set from access to a model trained on that data. Distribution inference attacks can pose serious risks when models are trained on private data, but are difficult to distinguish from the intrinsic purpose of statistical machine learning—namely, to produce models that capture statistical properties about a distribution. Motivated by Yeom et al.’s membership inference framework, we propose a formal definition of distribution inference attacks general enough to describe a broad class of atta
APA, Harvard, Vancouver, ISO, etc. styles
49

Hwang, Ren-Hung, Jia-You Lin, Sun-Ying Hsieh, Hsuan-Yu Lin, and Chia-Liang Lin. "Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks." Sensors 23, no. 2 (2023): 853. http://dx.doi.org/10.3390/s23020853.

Full text source
Abstract:
Deep learning technology has developed rapidly in recent years and has been successfully applied in many fields, including face recognition. Face recognition is used in many scenarios nowadays, including security control systems, access control management, health and safety management, employee attendance monitoring, automatic border control, and face scan payment. However, deep learning models are vulnerable to adversarial attacks conducted by perturbing probe images to generate adversarial examples, or using adversarial patches to generate well-designed perturbations in specific regions of t
APA, Harvard, Vancouver, ISO, etc. styles
50

Sun, Jiazheng, Li Chen, Chenxiao Xia, et al. "CANARY: An Adversarial Robustness Evaluation Platform for Deep Learning Models on Image Classification." Electronics 12, no. 17 (2023): 3665. http://dx.doi.org/10.3390/electronics12173665.

Full text source
Abstract:
The vulnerability of deep-learning-based image classification models to erroneous conclusions in the presence of small perturbations crafted by attackers has prompted attention to the question of the models’ robustness level. However, the question of how to comprehensively and fairly measure the adversarial robustness of models with different structures and defenses as well as the performance of different attack methods has never been accurately answered. In this work, we present the design, implementation, and evaluation of Canary, a platform that aims to answer this question. Canary uses a c
APA, Harvard, Vancouver, ISO, etc. styles