
Journal articles on the topic 'Noise attack'


Consult the top 50 journal articles for your research on the topic 'Noise attack.'


1

Han, Jaeseung, and Dong-Guk Han. "Improved Side-Channel Attack on CTR DRBG Using a Clustering Algorithm." Sensors 25, no. 13 (2025): 4170. https://doi.org/10.3390/s25134170.

Abstract:
Deterministic random bit generators (DRBGs) play a crucial role in device security because they generate the secret information used by cryptographic systems, e.g., secret keys and parameters. Thus, attacks on DRBGs can expose important secret values, which can threaten the entire cryptographic system of the target Internet of Things (IoT) equipment and smart devices. In 2020, Meyer proposed a side-channel attack (SCA) method that recovers the output random bits by analyzing the power consumption traces of the NIST standard AES CTR DRBG. In addition, most algorithmic countermeasures against SCAs also utilize random numbers; thus, such vulnerabilities are more critical than other SCAs on cryptographic modules. Meyer's attack recovers the secret random number in four stages using only the power traces of the 256 blocks processed by the CTR DRBG. We present an approach that employs a clustering algorithm to enhance Meyer's attack. The proposed attack increases the attack success rate and recovers more information using a clustering attack in the first step. In addition, it improves the attack accuracy in the third and fourth steps using the information obtained from the clustering process. These results make attacks possible at higher noise levels and increase the diversity of target devices for attacking the CTR DRBG. Experiments were conducted on an Atmel XMEGA128D4 processor to evaluate the effectiveness of the proposed attack method. We also introduced artificial noise into the power traces to compare the proposed attack's performance at different noise levels. Our results demonstrate that the first step of the proposed attack achieves a higher success rate than Meyer's attack at all noise levels; for example, at high noise levels, the difference in the success rates is up to 50%. In steps 3 and 4, an average performance improvement of 18.5% over Meyer's method is obtained. The proposed attack effectively extends the target to noisier environments than previous attacks, thereby increasing the threat of SCAs on CTR DRBGs.
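The clustering step can be pictured with a toy sketch: k-means over simulated Hamming-weight leakage, with clusters mapped onto weights by sorting their centroids. This is our illustration under an assumed leakage model and noise level, using scikit-learn's KMeans as a stand-in for the paper's clustering algorithm; it is not the authors' code.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
secret_bytes = rng.integers(0, 256, size=256)            # one byte per block
hw = np.array([bin(int(b)).count("1") for b in secret_bytes])
traces = hw + rng.normal(0.0, 0.5, size=hw.shape)        # noisy HW leakage

# Cluster the leakage samples into the 9 possible Hamming-weight classes,
# then order the clusters by centroid so labels map onto Hamming weights.
km = KMeans(n_clusters=9, n_init=10, random_state=0).fit(traces.reshape(-1, 1))
order = np.argsort(km.cluster_centers_.ravel())
hw_guess = np.empty_like(km.labels_)
for rank, cluster in enumerate(order):
    hw_guess[km.labels_ == cluster] = rank
print("Hamming-weight recovery rate:", np.mean(hw_guess == hw))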
2

Hamburg, Mike, Julius Hermelink, Robert Primas, et al. "Chosen Ciphertext k-Trace Attacks on Masked CCA2 Secure Kyber." IACR Transactions on Cryptographic Hardware and Embedded Systems 2021, no. 4 (2021): 88–113. https://doi.org/10.46586/tches.v2021.i4.88-113.

Abstract:
Single-trace attacks are a considerable threat to implementations of classic public-key schemes, and their implications on newer lattice-based schemes are still not well understood. Two recent works have presented successful single-trace attacks targeting the Number Theoretic Transform (NTT), which is at the heart of many lattice-based schemes. However, these attacks either require a quite powerful side-channel adversary or are restricted to specific scenarios such as the encryption of ephemeral secrets. It is still an open question if such attacks can be performed by simpler adversaries while targeting more common public-key scenarios. In this paper, we answer this question positively. First, we present a method for crafting ring/module-LWE ciphertexts that result in sparse polynomials at the input of inverse NTT computations, independent of the used private key. We then demonstrate how this sparseness can be incorporated into a side-channel attack, thereby significantly improving noise resistance of the attack compared to previous works. The effectiveness of our attack is shown on the use-case of CCA2 secure Kyber k-module-LWE, where k ∈ {2, 3, 4}. Our k-trace attack on the long-term secret can handle noise up to σ ≤ 1.2 in the noisy Hamming weight leakage model, also for masked implementations. A 2k-trace variant for Kyber1024 even allows noise σ ≤ 2.2, also in the masked case, with more traces allowing us to recover keys up to σ ≤ 2.7. Single-trace attack variants have a noise tolerance depending on the Kyber parameter set, ranging from σ ≤ 0.5 to σ ≤ 0.7. As a comparison, similar previous attacks in the masked setting were only successful with σ ≤ 0.5.
3

Shi, Lin, Teyi Liao, and Jianfeng He. "Defending Adversarial Attacks against DNN Image Classification Models by a Noise-Fusion Method." Electronics 11, no. 12 (2022): 1814. http://dx.doi.org/10.3390/electronics11121814.

Abstract:
Adversarial attacks deceive deep neural network models by adding imperceptibly small but well-designed attack data to the model input. Those attacks cause serious problems. Various defense methods have been provided to defend against those attacks by: (1) providing adversarial training according to specific attacks; (2) denoising the input data; (3) preprocessing the input data; and (4) adding noise to various layers of models. Here we provide a simple but effective Noise-Fusion Method (NFM) to defend adversarial attacks against DNN image classification models. Without knowing any details about attacks or models, the NFM adds noise not only to the model input at run time, but also to the training data at training time. Two l∞-attacks, the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), and one l1-attack, the Sparse L1 Descent (SLD), are applied to evaluate the defense effects of the NFM on various deep neural network models trained on the MNIST and CIFAR-10 datasets. Noises of various amplitudes and statistical distributions are applied to show the defense effects of the NFM under different noise conditions. The NFM is also compared with an adversarial training method on the MNIST and CIFAR-10 datasets. Results show that adding noise to the input images and the training images not only defends against all three adversarial attacks but also improves the robustness of the corresponding models. The results indicate possibly generalized defense effects of the NFM which may extend to other adversarial attacks. They also show the potential application of the NFM to models not only with image input but also with voice or audio input.
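A minimal sketch of the noise-fusion idea, adding random noise both to training batches and to inputs at run time; the uniform distribution and amplitude below are placeholder choices, not the paper's tuned settings.

import torch

def fuse_noise(x: torch.Tensor, amplitude: float = 0.1) -> torch.Tensor:
    # Add zero-mean uniform noise and clamp back to the valid pixel range.
    noise = (torch.rand_like(x) - 0.5) * 2.0 * amplitude
    return (x + noise).clamp(0.0, 1.0)

# During training: loss = criterion(model(fuse_noise(images)), labels)
# At run time:     prediction = model(fuse_noise(test_image))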
4

Dayananda, Prakyath, Mallikarjunaswamy Srikantaswamy, Sharmila Nagaraju, Rekha Velluri, and Doddananjedevaru Mahesh Kumar. "Efficient detection of faults and false data injection attacks in smart grid using a reconfigurable Kalman filter." International Journal of Power Electronics and Drive Systems (IJPEDS) 13, no. 4 (2022): 2086–97. http://dx.doi.org/10.11591/ijpeds.v13.i4.pp2086-2097.

Abstract:
The distributed denial of service (DDoS) attack, false data injection attack (FDIA) and random attack are reduced, and the monitoring and security of smart grid systems are improved, using a reconfigurable Kalman filter. Methods: A sinusoidal voltage signal with random Gaussian noise is applied to the reconfigurable Euclidean detector (RED) evaluator. The MATLAB function randn() is used to produce zero-mean channel noise in order to analyse the amplitude variation with respect to the evolution of the state variable. The detector noise rate is analysed with respect to the threshold. The detection rates of various attacks, such as DDoS, random and false data injection attacks, are also analysed. The proposed mathematical model effectively reconstructs the original sinusoidal signal from the evaluator state variable using reconfigurable Euclidean detectors.
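As a rough picture of the pipeline, the sketch below builds a noisy sinusoid with NumPy's equivalent of MATLAB's randn(), injects a false-data offset, and flags windows whose Euclidean residual exceeds a threshold. The signal parameters, windowing, and threshold rule are our assumptions for illustration, not the paper's design.

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 50 * t)                  # 50 Hz state signal
measured = clean + rng.normal(0.0, 0.1, t.size)     # zero-mean Gaussian noise

# A crude false data injection: the adversary offsets a window of samples.
attacked = measured.copy()
attacked[400:500] += 0.8

def euclidean_residual(z, x_hat, window=50):
    z = z[: len(z) // window * window].reshape(-1, window)
    x = x_hat[: len(x_hat) // window * window].reshape(-1, window)
    return np.linalg.norm(z - x, axis=1)            # per-window distance

threshold = 3.0 * np.median(euclidean_residual(measured, clean))
alarms = euclidean_residual(attacked, clean) > threshold
print("windows flagged:", np.flatnonzero(alarms))   # the attacked windows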
5

Karpenko, A. P., and V. A. Ovchinnikov. "How to Trick a Neural Network? Synthesising Noise to Reduce the Accuracy of Neural Network Image Classification." Herald of the Bauman Moscow State Technical University. Series Instrument Engineering, no. 1 (134) (March 2021): 102–19. http://dx.doi.org/10.18698/0236-3933-2021-1-102-119.

Abstract:
The study aims to develop an algorithm, and then software, to synthesise noise that could be used to attack deep learning neural networks designed to classify images. We present the results of our analysis of methods for conducting this type of attack. The synthesis of attack noise is stated as a problem of multidimensional constrained optimization. The main features of the proposed attack noise synthesis algorithm are as follows: we employ the clip function to take constraints on noise into account; we use the top-1 and top-5 classification error ratings as attack noise efficiency criteria; we train our neural networks using backpropagation and the Adam gradient descent algorithm; stochastic gradient descent is employed to solve the optimisation problem indicated above; and neural network training also makes use of the augmentation technique. The software was developed in Python using the PyTorch framework to dynamically differentiate the calculation graph, and runs under Ubuntu 18.04 and CentOS 7. Our IDE was Visual Studio Code. We accelerated the computation via CUDA executed on an NVIDIA Titan XP GPU. The paper presents the results of a broad computational experiment in synthesising non-universal and universal attack noise types for eight deep neural networks. We show that the proposed attack algorithm is able to increase the neural network error by eight times.
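For a concrete flavour of gradient-based noise synthesis under a clip constraint, here is a minimal FGSM-style step in PyTorch, the framework the paper names. It is a generic one-step sketch, not the authors' algorithm; the model, label, and epsilon are placeholders.

import torch
import torch.nn.functional as F

def fgsm_noise(model, image, label, epsilon=8 / 255):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed-gradient step; clipping keeps the noise within [-eps, eps]
    # and the attacked image within the valid pixel range.
    noise = (epsilon * image.grad.sign()).clamp(-epsilon, epsilon)
    return (image + noise).clamp(0.0, 1.0).detach()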
6

Sudha, M. S., and T. C. Thanuja. "Effect of different attacks on image watermarking using dual tree complex wavelet transform (DTCWT) and principle component analysis (PCA)." International Journal of Engineering & Technology 7, no. 2.9 (2018): 1. http://dx.doi.org/10.14419/ijet.v7i2-9.9249.

Abstract:
Imperceptibility and robustness are two conflicting requirements demanded of digital image watermarking for digital rights management and other applications. A realistic way to satisfy the two contradictory requirements concurrently is to use a robust watermarking algorithm. The developed algorithm uses DTCWT and PCA techniques to embed the watermark signal in the host signal. To prove the algorithm's robustness without much affecting perceptibility, several attacks such as noise, cropping, blurring and rotation are applied and tested by varying the attack parameters. Parameters such as peak signal-to-noise ratio and correlation coefficient are calculated for each attack. The attack percentage is varied and the performance parameters are calculated to prove the robustness of the developed algorithm.
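The two quality metrics named above can be sketched for 8-bit images as follows; the implementation details are ours, not the paper's.

import numpy as np

def psnr(original: np.ndarray, attacked: np.ndarray) -> float:
    mse = np.mean((original.astype(float) - attacked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def correlation(w_ref: np.ndarray, w_ext: np.ndarray) -> float:
    # Normalised correlation between reference and extracted watermarks.
    a = w_ref.ravel().astype(float) - w_ref.mean()
    b = w_ext.ravel().astype(float) - w_ext.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))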
7

Wang, Haigang, Kan He, Yuyang Hao, and Shuyuan Yang. "Quantum tomography with Gaussian noise." Quantum Information and Computation 22, no. 13&14 (2022): 1144–57. http://dx.doi.org/10.26421/qic22.13-14-4.

Abstract:
In this paper, we propose an estimation of the quantum resources necessary for recovering a key using the Known Plaintext Attack (KPA) model for SCHWAEMM, the authenticated block cipher of the SPARKLE family of lightweight cryptography (LWC). The procedure is based on a general attack using Grover's search algorithm with an encryption oracle over the key space in superposition. The paper explains step by step how to evaluate the cost of each operation type in the encryption oracle in terms of various quantum and reversible gates. The result of this paper is an implementation of a simplified version of this cipher on a quantum computer, together with a summary table which shows the depth of the quantum circuit, the size of the quantum register, and how many gates of the NCT family are required for implementing the ciphers and attacks on them.
8

Usenko, Vladyslav C. "Redundancy and Synergy of an Entangling Cloner in Continuous-Variable Quantum Communication." Entropy 24, no. 10 (2022): 1501. http://dx.doi.org/10.3390/e24101501.

Abstract:
We address the minimization of information leakage from continuous-variable quantum channels. It is known that the regime of minimum leakage is accessible for modulated signal states with variance equivalent to the shot noise, i.e., vacuum fluctuations, in the case of collective attacks. Here we derive the same condition for individual attacks and analytically study the properties of the mutual information quantities in and out of this regime. We show that in this regime a joint measurement on the modes of a two-mode entangling cloner, which is the optimal individual eavesdropping attack in a noisy Gaussian channel, is no more effective than independent measurements on the modes. Varying the variance of the signal out of this regime, we observe nontrivial statistical effects of either redundancy or synergy between the measurements of the two modes of the entangling cloner. The result reveals the non-optimality of the entangling cloner individual attack for sub-shot-noise modulated signals. Considering the communication between the cloner modes, we show the advantage of knowing the residual noise after its interaction with the cloner, and we extend the result to a two-cloner scheme.
9

Gao, Jiaqi, Kangfeng Zheng, Xiujuan Wang, Chunhua Wu, and Bin Wu. "EFSAttack: Edge Noise-Constrained Black-Box Attack Using Artificial Fish Swarm Algorithm." Electronics 13, no. 13 (2024): 2446. http://dx.doi.org/10.3390/electronics13132446.

Abstract:
Black-box attacks generate adversarial examples by querying the target model and updating the noise according to the feedback. However, the current black-box attack methods require excessive queries to generate adversarial examples, increasing the risk of detection by target defense systems. Furthermore, the current black-box attack methods primarily focus on controlling the magnitude of perturbations while neglecting the impact of perturbation placement on the stealthiness of adversarial examples. To this end, we propose a novel edge noise-constrained black-box attack method using the artificial fish swarm algorithm (EFSAttack). EFSAttack introduces the concept of edge noise constraint to indicate the low-frequency region of the image where perturbations are added and employs edge noise constraint to improve the population initialization and population evolution process. The experiments on CIFAR-10 and MNIST show notable improvements in the success rates, query efficiency, and adversarial example invisibility.
10

Zhang, Kangkang, Christodoulos Keliris, Thomas Parisini, Bin Jiang, and Marios M. Polycarpou. "Passive Attack Detection for a Class of Stealthy Intermittent Integrity Attacks." IEEE/CAA Journal of Automatica Sinica 10, no. 4 (2023): 898–915. https://doi.org/10.1109/JAS.2023.123177.

Abstract:
This paper proposes a passive methodology for detecting a class of stealthy intermittent integrity attacks in cyber-physical systems subject to process disturbances and measurement noise. A stealthy intermittent integrity attack strategy is first proposed by modifying a zero-dynamics attack model. The stealthiness of the generated attacks is rigorously investigated under the condition that the adversary does not know precisely the system state values. In order to help detect such attacks, a backward-in-time detection residual is proposed based on an equivalent quantity of the system state change, due to the attack, at a time prior to the attack occurrence time. A key characteristic of this residual is that its magnitude increases every time a new attack occurs. To estimate this unknown residual, an optimal fixed-point smoother is proposed by minimizing a piece-wise linear quadratic cost function with a set of specifically designed weighting matrices. The smoother design guarantees robustness with respect to process disturbances and measurement noise, and is also able to maintain sensitivity to intermittent integrity attacks as time progresses by resetting the covariance matrix based on the weighting matrices. An adaptive threshold is designed based on the estimated backward-in-time residual, and the attack detectability analysis is rigorously investigated to characterize quantitatively the class of attacks that can be detected by the proposed methodology. Finally, a simulation example is used to demonstrate the effectiveness of the developed methodology.
11

Zhang, ShuaiWei, XiaoYuan Yang, Lin Chen, and Weidong Zhong. "A Highly Effective Data Preprocessing in Side-Channel Attack Using Empirical Mode Decomposition." Security and Communication Networks 2019 (October 30, 2019): 1–10. http://dx.doi.org/10.1155/2019/6124165.

Abstract:
Side-channel attacks on cryptographic chips in embedded systems have been attracting considerable interest from the field of information security in recent years. Many research studies have contributed to improving side-channel attack efficiency, most of which assume that the noise of the encryption signal has a linear, stationary Gaussian distribution; however, their noise-reduction performance was moderate. Thus, in this paper, we describe a highly effective data-preprocessing technique for noise reduction based on empirical mode decomposition (EMD) and demonstrate its application to a side-channel attack. EMD is a time-frequency analysis method for nonlinear, non-stationary signal processing, which requires no prior knowledge about the cryptographic chip. During data preprocessing, the collected traces are adaptively decomposed into a sum of intrinsic mode functions (IMFs) based on their own characteristics. The meaningful IMFs are then recombined to reduce noise and increase the efficiency of key recovery through a correlation power analysis attack. This technique decreases the total number of traces needed for key recovery by 17.7% compared to traditional attack methods, as verified by an attack efficiency analysis of the SM4 block cipher algorithm on an FPGA power consumption analysis platform.
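A hedged sketch of the preprocessing step using the third-party PyEMD package (pip install EMD-signal): decompose a trace into IMFs and recombine all but the highest-frequency one. Which IMFs count as "meaningful" is a guess here; the paper selects them from the traces' own characteristics.

import numpy as np
from PyEMD import EMD

def emd_denoise(trace: np.ndarray, drop_first: int = 1) -> np.ndarray:
    imfs = EMD().emd(trace)     # intrinsic mode functions, finest first
    # Drop the highest-frequency IMF(s), which typically carry most of the
    # noise, and sum the remaining IMFs (plus residue) into a cleaner trace.
    return imfs[drop_first:].sum(axis=0)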
12

Potii, Oleksandr, Olena Kachko, Serhii Kandii, and Yevhenii Kaptol. "Determining the effect of a floating point on the Falcon digital signature algorithm security." Eastern-European Journal of Enterprise Technologies 1, no. 9 (127) (2024): 52–59. http://dx.doi.org/10.15587/1729-4061.2024.295160.

Abstract:
The object of research is digital signatures. The Falcon digital signature scheme is one of the finalists in the NIST post-quantum cryptography competition. Its distinctive feature is the use of floating-point arithmetic. However, floating-point arithmetic has so-called rounding noise, which accumulates during computations and in some cases may lead to significant changes in the processed values. The work considers the problem of using rounding noise to build attacks on implementations. The main result of the study is a novel implementation attack that enables secret key recovery. This attack differs from existing attacks in using two separately secure implementations with different computation orders. The analysis reveals the conditions under which secret key recovery is possible. The attack requires 300,000 signatures and two implementations to recover the key. The probability of a successful attack ranges from 70% to 76%. This probability is explained by the structure of the Gaussian sampling algorithm used in the Falcon digital signature. At the same time, a necessary condition for conducting the attack is an identical seed during signature generation. This condition makes the attack more theoretical than practical, since a correct implementation of Falcon makes the probability of two identical seeds negligible. However, the possible exploitation of floating-point noise shows the potential existence of additional attack vectors for Falcon that should be covered in security models. The results could be used in the construction of digital signature security models and their implementation in existing information and communication systems.
13

Asghar, Hassan Jameel, and Dali Kaafar. "Averaging Attacks on Bounded Noise-based Disclosure Control Algorithms." Proceedings on Privacy Enhancing Technologies 2020, no. 2 (2020): 358–78. http://dx.doi.org/10.2478/popets-2020-0031.

Abstract:
We describe and evaluate an attack that reconstructs the histogram of any target attribute of a sensitive dataset which can only be queried through a specific class of real-world privacy-preserving algorithms which we call bounded perturbation algorithms. A defining property of such an algorithm is that it perturbs answers to the queries by adding zero-mean noise distributed within a bounded (possibly undisclosed) range. Other key properties of the algorithm include only allowing restricted queries (enforced via an online interface), suppressing answers to queries which are only satisfied by a small group of individuals (e.g., by returning a zero as an answer), and adding the same perturbation to two queries which are satisfied by the same set of individuals (to thwart differencing or averaging attacks). A real-world example of such an algorithm is the one deployed by the Australian Bureau of Statistics' (ABS) online tool called TableBuilder, which allows users to create tables, graphs and maps of Australian census data [30]. We assume an attacker (say, a curious analyst) who is given oracle access to the algorithm via an interface. We describe two attacks on the algorithm. Both attacks are based on carefully constructing (different) queries that evaluate to the same answer. The first attack finds the hidden perturbation parameter r (if it is assumed not to be public knowledge). The second attack removes the noise to obtain the original answer of some (counting) query of choice. We also show how to use this attack to find the number of individuals in the dataset with a target attribute value a of any attribute A, and then for all attribute values ai ∈ A. None of the attacks presented here depend on any background information. Our attacks are a practical illustration of the (informal) fundamental law of information recovery which states that "overly accurate estimates of too many statistics completely destroys privacy" [9, 15].
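The averaging idea can be reproduced with a toy oracle: bounded zero-mean noise defeats a single query, but independent perturbations across differently-phrased equivalent queries average away. The oracle, range r, and counts below are invented for illustration; a real interface like TableBuilder is far more constrained.

import numpy as np

rng = np.random.default_rng(2)
true_count, r = 1234, 5

def oracle(query_id: int) -> int:
    # The same query always receives the same perturbation (keyed on the
    # query), while equivalent but differently-phrased queries get fresh noise.
    q_rng = np.random.default_rng(query_id)
    return true_count + int(round(q_rng.uniform(-r, r)))

answers = [oracle(qid) for qid in range(200)]         # 200 equivalent phrasings
print("averaged estimate:", round(np.mean(answers)))  # converges on 1234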
14

Lee, Woonghee, and Younghoon Kim. "Enhancing CT Segmentation Security against Adversarial Attack: Most Activated Filter Approach." Applied Sciences 14, no. 5 (2024): 2130. http://dx.doi.org/10.3390/app14052130.

Abstract:
This study introduces a deep-learning-based framework for detecting adversarial attacks in CT image segmentation within medical imaging. The proposed methodology includes analyzing features from various layers, particularly focusing on the first layer, and utilizing a convolutional layer-based model with specialized training. The framework is engineered to differentiate between tampered adversarial samples and authentic or noise-altered images, focusing on attack methods predominantly utilized in the medical sector. A significant aspect of the approach is employing a random forest algorithm as a binary classifier to detect attacks. This method has shown efficacy in identifying genuine samples and reducing false positives due to Gaussian noise. The contributions of this work include robust attack detection, layer-specific feature analysis, comprehensive evaluations, physician-friendly visualizations, and distinguishing between adversarial attacks and noise. This research enhances the security and reliability of CT image analysis in diagnostics.
15

Balachandran, G., and Praveen Kumar Gupta. "FPGA-Based Electrocardiography Signal Analysis System using (FIR) Filter." International Journal of Advance Research and Innovation 8, no. 1 (2020): 44–48. http://dx.doi.org/10.51976/ijari.812008.

Abstract:
The cardiovascular attack is more dangerous than other diseases, and it is measured by electrocardiograph (ECG) signals, which are noisy in real time, especially in the telemedicine environment. Noisy ECG signals contain motion artifacts, electrical interference, etc. An adaptive filtering approach based on the discrete wavelet transform and an artificial neural network is proposed to reduce the noise in the ECG signal. The quality of the de-noised signal is improved by an SVM algorithm. The suggested approach can successfully remove a broad range of noise, and our method achieves up to almost 82% improvement in the SNR of de-noised signals. The MATLAB simulation results clearly show the improvement of the ECG signal in terms of its SNR value.
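A sketch of wavelet-based ECG denoising with the PyWavelets package; the wavelet choice, decomposition level, and universal threshold are our assumptions, not the paper's settings.

import numpy as np
import pywt

def dwt_denoise(ecg: np.ndarray, wavelet: str = "db4", level: int = 4):
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(ecg)))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(ecg)]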
16

Kish, L. B. "Protection against the Man-in-the-Middle-Attack for the Kirchhoff-Loop-Johnson(-like)-Noise Cipher and Expansion by Voltage-Based Security." Fluctuation and Noise Letters 6, no. 1 (2006): L57–L63. http://dx.doi.org/10.1142/s0219477506003148.

Abstract:
It is shown that the original Kirchhoff-loop-Johnson(-like)-noise (KLJN) cipher is naturally protected against the man-in-the-middle (MITM) attack if the eavesdropper is using resistors and noise voltage generators just like the sender and the receiver. The eavesdropper can extract zero bits of information before she is discovered. However, when the eavesdropper is using noise current generators, though the cipher is protected, the eavesdropper may still be able to extract one bit of information while being discovered. For enhanced security, we expand the KLJN cipher with the comparison of the instantaneous voltages via the public channel. In this way, the sender and receiver have full control over the security of measurable physical quantities in the Kirchhoff loop. We show that when the sender and receiver compare not only their instantaneous current data but also their instantaneous voltage data, the zero-bit security holds even for the noise current generator case. We show that the original KLJN scheme is also zero-bit protected against that type of MITM attack when the eavesdropper uses only voltage noise generators. In conclusion, within the idealized model scheme, the man-in-the-middle attack does not provide any advantage compared to the regular attack considered earlier. The remaining possibility is the attack by a short, large current pulse, which was described in the original paper as the only efficient type of regular attack, and which yields the one-bit security. In conclusion, the KLJN cipher is superior to known quantum communication schemes in every respect, including speed, robustness, maintenance need, price and its natural immunity against the man-in-the-middle attack.
17

Yu, Hongwei, Jiansheng Chen, Xinlong Ding, Yudong Zhang, Ting Tang, and Huimin Ma. "Step Vulnerability Guided Mean Fluctuation Adversarial Attack against Conditional Diffusion Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (2024): 6791–99. http://dx.doi.org/10.1609/aaai.v38i7.28503.

Abstract:
The high-quality generation results of conditional diffusion models have brought about concerns regarding privacy and copyright issues. As a possible technique for preventing the abuse of diffusion models, the adversarial attack against diffusion models has attracted academic attention recently. In this work, utilizing the phenomenon that diffusion models are highly sensitive to the mean value of the input noise, we propose the Mean Fluctuation Attack (MFA) to introduce mean fluctuations by shifting the mean values of the estimated noises during the reverse process. In addition, we reveal that the vulnerability of different reverse steps against adversarial attacks actually varies significantly. By modeling the step vulnerability and using it as guidance to sample the target steps for generating adversarial examples, the effectiveness of adversarial attacks can be substantially enhanced. Extensive experiments show that our algorithm can steadily cause a mean shift of the predicted noises so as to disrupt the entire reverse generation process and degrade the generation results significantly. We also demonstrate that the step vulnerability is intrinsic to the reverse process by verifying its effectiveness in an attack method other than MFA. Code and supplementary material are available at https://github.com/yuhongwei22/MFA.
18

Mafu, Mhlambululi, Comfort Sekga, and Makhamisa Senekane. "Security of Bennett–Brassard 1984 Quantum-Key Distribution under a Collective-Rotation Noise Channel." Photonics 9, no. 12 (2022): 941. http://dx.doi.org/10.3390/photonics9120941.

Abstract:
The security analyses of the Ekert 1991 (E91), Bennett 1992 (B92), six-state, and Scarani–Acín–Ribordy–Gisin 2004 (SARG04) quantum key distribution (QKD) protocols, and their variants, have been studied in the presence of collective-rotation noise channels. However, although the Bennett–Brassard 1984 (BB84) protocol was the first proposed and is the most extensively studied and essential protocol, its security proof under collective-rotation noise is still missing. Thus, we aim to close this gap in the literature. Consequently, we investigate how collective-rotation noise channels affect the security of the BB84 protocol. Mainly, we study scenarios where the eavesdropper, Eve, conducts an intercept-resend attack on the transmitted photons sent via a quantum communication channel shared by Alice and Bob. Notably, we distinguish the impact of the collective-rotation noise from that of the eavesdropper. To achieve this, we provide rigorous, yet straightforward numerical calculations. First, we derive a model for the collective-rotation noise for the BB84 protocol and parametrize the mutual information shared between Alice and Eve. This is followed by deriving the quantum bit error rate (QBER) for two intercept-resend attack scenarios. In particular, we demonstrate that, for small rotation angles, one can extract a secure secret key under a collective-rotation noise channel when there is no eavesdropping. We observe that noise induced by a rotation of 0.35 radians of the prepared quantum state results in a QBER of 11%, which corresponds to the lower bound on the tolerable error rate for the BB84 QKD protocol against general attacks. Moreover, a rotation angle of 0.53 radians yields a 25% QBER, which corresponds to the error rate bound due to the intercept-resend attack. Finally, we conclude that the BB84 protocol is robust against intercept-resend attacks on collective-rotation noise channels when the rotation angle is varied arbitrarily within particular bounds.
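As a quick plausibility check, the angle/QBER pairs quoted above are consistent with the simple relation QBER = sin²(θ) for rotation angle θ; this reading is inferred from the numbers, not taken from the paper's derivation.

import math

for theta in (0.35, 0.53):
    print(f"theta = {theta} rad -> QBER ≈ {math.sin(theta) ** 2:.1%}")
# 0.35 rad gives ≈ 11.8%, matching the quoted 11% bound up to rounding;
# 0.53 rad gives ≈ 25.6%, close to the quoted 25%.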
19

Ferdous, Shahriar, and Laszlo B. Kish. "Transient attacks against the Kirchhoff–Law–Johnson–Noise (KLJN) secure key exchanger." Applied Physics Letters 122, no. 14 (2023): 143503. http://dx.doi.org/10.1063/5.0146190.

Abstract:
We demonstrate the security vulnerability of the ideal Kirchhoff–Law–Johnson–Noise key exchanger against transient attacks. Transients start when Alice and Bob connect the wire to their chosen resistors at the beginning of each clock cycle. A transient attack takes place during the short period before the transients reflected from Alice's and Bob's ends of the wire mix together. The information leak arises from the fact that Eve (the eavesdropper) monitors the cable and analyzes the transients during this time period. We demonstrate such a transient attack and then introduce a defense protocol to protect against it. Computer simulations demonstrate that after applying the defense method the information leak becomes negligible.
20

Li, Jing, Yanru Feng, and Mengli Wang. "Semantic Web-Driven Targeted Adversarial Attack on Black Box Automatic Speech Recognition Systems." International Journal on Semantic Web and Information Systems 20, no. 1 (2024): 1–23. http://dx.doi.org/10.4018/ijswis.360651.

Abstract:
The susceptibility of Deep Neural Networks (DNNs) to adversarial attacks in Automatic Speech Recognition (ASR) systems has drawn significant attention. Most work focuses on white-box methods, but the assumption of full transparency of model architecture and parameters is unrealistic in real-world scenarios. Although several targeted black-box attack methods have been proposed in recent years, due to the complexity of ASR systems, they primarily rely on query-based approaches with limited search capabilities, leading to low success rates and noticeable noise. To address this, we propose DE-gradient, a new black-box approach using differential evolution (DE), a population-based search algorithm. Inspired by Semantic Web ideas, we introduce modulation noise to preserve semantic coherence while enhancing imperceptibility. In experiments on two public datasets, DE-gradient improved attack success rates by 19% and increased the signal-to-noise ratio (SNR) of silent parts from 27 dB to 54 dB, establishing a strong baseline for evaluating black-box adversarial attacks in ASR systems.
21

Xu, Hongyan. "Digital media zero watermark copyright protection algorithm based on embedded intelligent edge computing detection." Mathematical Biosciences and Engineering 18, no. 5 (2021): 6771–89. http://dx.doi.org/10.3934/mbe.2021336.

Abstract:
With the rapid development of computer technology and network communication technology, copyright protection of widely spread digital media has become a focus of attention in various fields. Research on digital media watermarking technology is abundant, but the results are not ideal. In order to better realize copyright identification and protection, this paper studies a zero-watermark copyright protection algorithm for digital media based on embedded intelligent edge computing detection. Firstly, this paper designs an embedded intelligent edge detection module based on the Sobel operator, including an image line buffer module, a convolution calculation module and a threshold processing module. Then, based on the embedded intelligent edge detection module, the Arnold transform image scrambling technique is used to preprocess the watermark, and finally a zero-watermark copyright protection algorithm is constructed. The robustness of the proposed algorithm is also tested: the image is subjected to cropping and scaling attacks of different proportions, different types of noise, and sharpening and blur attacks, and the detection rate and signal-to-noise ratio of each algorithm are calculated. The performance of the watermark image processed by this algorithm is evaluated subjectively and objectively. Experimental data show that the detection rate of our algorithm is the highest, at 0.89. In the scaling attack, the performance of our algorithm is slightly lower than that of the Fourier transform domain algorithm, but better than the other two algorithms. The signal-to-noise ratio of the algorithm is 36.854% under the P6 multiplicative noise attack, 39.638% under the P8 sharpening edge attack and 41.285% under the fuzzy attack. This shows that the algorithm is robust to conventional attacks. The subjective evaluations of 33% and 39% of the images are 5 and 4, respectively. The mean values of signal-to-noise ratio, peak signal-to-noise ratio, mean square error and mean absolute difference are 20.56, 25.13, 37.03 and 27.64, respectively. This shows that the watermark image processed by this algorithm has high quality. Therefore, the digital media zero-watermark copyright protection algorithm based on embedded intelligent edge computing detection is more robust, and its watermark invisibility is also superior, which is worth promoting.
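The Arnold transform mentioned above is the classic cat-map scramble (x, y) → (x + y, x + 2y) mod N on a square image, iterated a secret number of times; this generic sketch is ours, not the paper's module.

import numpy as np

def arnold(img: np.ndarray, iterations: int = 10) -> np.ndarray:
    n = img.shape[0]                    # assumes an n x n watermark image
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out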
22

Sutanto, Richard Evan, and Sukho Lee. "Real-Time Adversarial Attack Detection with Deep Image Prior Initialized as a High-Level Representation Based Blurring Network." Electronics 10, no. 1 (2020): 52. http://dx.doi.org/10.3390/electronics10010052.

Abstract:
Several recent studies have shown that artificial intelligence (AI) systems can malfunction due to intentionally manipulated data coming through normal channels. Such manipulated data are called adversarial examples. Adversarial examples can pose a major threat to an AI-led society when an attacker uses them as a means to attack an AI system, which is called an adversarial attack. Therefore, major IT companies such as Google are now studying ways to build AI systems which are robust against adversarial attacks by developing effective defense methods. However, one of the reasons why it is difficult to establish an effective defense system is that it is difficult to know in advance what kind of adversarial attack method the opponent is using. Therefore, in this paper, we propose a method to detect adversarial noise without knowledge of the kind of adversarial noise used by the attacker. To this end, we propose a blurring network that is trained only with normal images and also use it as the initial condition of the Deep Image Prior (DIP) network. This is in contrast to other neural-network-based detection methods, which require many adversarial noisy images for training. Experimental results indicate the validity of the proposed method.
23

Hou, Hanting, Huan Bao, Kaimin Wei, and Yongdong Wu. "Universal Low-Frequency Noise Black-Box Attack on Visual Object Tracking." Symmetry 17, no. 3 (2025): 462. https://doi.org/10.3390/sym17030462.

Abstract:
Adversarial attacks on visual object tracking aim to degrade tracking accuracy by introducing imperceptible perturbations into video frames, exploiting vulnerabilities in neural networks. In real-world symmetrical double-blind engagements, both attackers and defenders operate with mutual unawareness of strategic parameters or initiation timing. Black-box attacks based on iterative optimization show excellent applicability in this scenario. However, existing state-of-the-art adversarial attacks based on iterative optimization suffer from high computational costs and limited effectiveness. To address these challenges, this paper proposes the Universal Low-frequency Noise black-box attack method (ULN), which generates perturbations through discrete cosine transform to disrupt structural features critical for tracking while mimicking compression artifacts. Extensive experimentation on four state-of-the-art trackers, including transformer-based models, demonstrates the method’s severe degradation effects. GRM’s expected average overlap drops by 97.77% on VOT2018, while SiamRPN++’s AUC and Precision on OTB100 decline by 76.55% and 78.9%, respectively. The attack achieves real-time performance with a computational cost reduction of over 50% compared to iterative methods, operating efficiently on embedded devices such as Raspberry Pi 4B. By maintaining a structural similarity index measure above 0.84, the perturbations blend seamlessly with common compression artifacts, evading traditional spatial filtering defenses. Cross-platform experiments validate its consistent threat across diverse hardware environments, with attack success rates exceeding 40% even under resource constraints. These results underscore the dual capability of ULN as both a stealthy and practical attack vector, and emphasize the urgent need for robust defenses in safety-critical applications such as autonomous driving and aerial surveillance. The efficiency of the method, when combined with its ability to exploit low-frequency vulnerabilities across architectures, establishes a new benchmark for adversarial robustness in visual tracking systems.
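The low-frequency shaping can be illustrated by synthesising noise only in the top-left band of a discrete cosine spectrum; the band size and amplitude below are invented placeholders, and this is a single-frame sketch rather than the full ULN pipeline.

import numpy as np
from scipy.fft import idctn

def low_freq_noise(shape, band=8, amplitude=4.0, seed=0):
    rng = np.random.default_rng(seed)
    spectrum = np.zeros(shape)
    # Populate only the low-frequency DCT coefficients, then invert.
    spectrum[:band, :band] = rng.normal(0.0, amplitude, (band, band))
    return idctn(spectrum, norm="ortho")

frame = np.zeros((224, 224))                       # placeholder video frame
perturbed = np.clip(frame + low_freq_noise(frame.shape), 0.0, 255.0)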
24

Lacagnina, Giovanni, Paruchuri Chaitanya, Jung-Hoon Kim, et al. "Leading edge serrations for the reduction of aerofoil self-noise at low angle of attack, pre-stall and post-stall conditions." International Journal of Aeroacoustics 20, no. 1-2 (2021): 130–56. http://dx.doi.org/10.1177/1475472x20978379.

Abstract:
This paper addresses the usefulness of leading edge serrations for reducing aerofoil self-noise over a wide range of angles of attack. Different serration geometries are studied over a range of Reynolds numbers [Formula: see text]. Design guidelines are proposed that permit noise reductions over most angles of attack. It is shown that serration geometries reduce the noise but adversely affect the aerodynamic performance, suggesting that a trade-off should be sought between these two considerations. The self-noise performance of leading edge serrations has been shown to fall into three angle of attack (AoA) regimes: low angles where the flow is mostly attached, moderate angles where the flow is partially to fully separated, and high angles of attack where the flow is fully separated. Leading edge serrations have been demonstrated to be effective in reducing noise at low and high angles of attack but ineffective at moderate angles. The noise reduction mechanisms are explored in each of the three angle regimes.
25

Garipay, Fikret, Kemal Uzgören, and İsmail Kaya. "Side channel attack performance of correlation power analysis method in noise." International Journal on Information Technologies and Security 15, no. 1 (2023): 101–11. http://dx.doi.org/10.59035/uwcb7557.

Abstract:
This article explores the use of artificial noise to defend against power analysis and power analysis-based side-channel attacks on AES encryption. The study covers both hardware and open-source software components for performing power analysis and provides an analysis of attack performance. It also explains how security measures against side-channel attacks can be implemented without disrupting system operation.
26

Alshekh, Mokhtar, and Köksal Erentürk. "Defense against White Box Adversarial Attacks in Arabic Natural Language Processing (ANLP)." International Journal of Advanced Natural Sciences and Engineering Researches 7, no. 6 (2023): 151–55. http://dx.doi.org/10.59287/ijanser.1149.

Abstract:
Adversarial attacks are among the biggest threats to the accuracy of classifiers in machine learning systems. This type of attack tricks the classification model into making false predictions by providing noised data whose noise only a human can detect. The risk of attack is high in natural language processing applications because most of the data collected in this case is taken from social networking sites that impose no restrictions on users when writing comments, which allows an attack to be created (either intentionally or unintentionally) easily and simply, affecting the accuracy of the model. In this paper, an MLP model was used for sentiment analysis of texts taken from tweets, the effect of applying a white-box adversarial attack on this classifier was studied, and a technique was proposed to protect it from the attack. After applying the proposed methodology, we found that the adversarial attack decreases the accuracy of the classifier from 55.17% to 11.11%, and that applying the proposed defense technique increases the accuracy of the classifier to 77.77%; therefore, the proposed approach can be adopted in the face of adversarial attacks. Attackers determine their targets strategically and deliberately, depending on vulnerabilities they have ascertained. Organizations and individuals mostly try to protect themselves from one occurrence or type of attack. Still, they have to acknowledge that the attacker may easily shift focus to newly uncovered vulnerabilities. Even if someone successfully tackles several attacks, risks remain, and the need to face threats will persist for the foreseeable future.
27

Wei, Tianyi, Yong Li, and Yingtao Niu. "Adversarial Sample Generation Method for Modulated Signals Based on Edge-Linear Combination." Electronics 14, no. 7 (2025): 1260. https://doi.org/10.3390/electronics14071260.

Abstract:
In complex electromagnetic environments, wireless communication system reliability can be compromised by various types of jamming. To address the issue of jammers using deep neural network models to identify the modulation method of communication signals and apply targeted interference, this paper proposes a method for generating adversarial samples of modulation signals based on the Mixup linear combination approach. The method generates edge-linear combination samples with small perturbations by linearly combining the original signal samples near the decision edges, inputs them into the neural network model for an identification test, and determines the best perturbation signal for each type of signal according to the identification results; adversarial samples are then generated during the attack by selecting the best perturbation signal for each modulation type. Simulation results show that, compared to traditional gradient-based adversarial sample generation algorithms, the proposed method performs better under white-box attacks. Under black-box attacks, the proposed method achieves higher attack success rates and lower attack signal-to-noise ratios compared to random noise adversarial samples with the same disturbance coefficient.
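The core linear combination is one line; lam plays the role of the disturbance coefficient, and 0.9 is a placeholder value, not the paper's setting.

import numpy as np

def mixup_sample(x_i: np.ndarray, x_j: np.ndarray, lam: float = 0.9):
    # Convex combination of two signal samples near the decision boundary:
    # a small perturbation of x_i in the direction of x_j.
    return lam * x_i + (1.0 - lam) * x_j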
28

Hassani, Ali, Jon Diedrich, and Hafiz Malik. "Improving Monocular Facial Presentation–Attack–Detection Robustness with Synthetic Noise Augmentations." Sensors 23, no. 21 (2023): 8914. http://dx.doi.org/10.3390/s23218914.

Abstract:
We present a synthetic augmentation approach towards improving monocular face presentation–attack–detection (PAD) robustness to real-world noise additions. Face PAD algorithms secure authentication systems against spoofing attacks, such as pictures, videos, and 2D-inspired masks. Best-in-class PAD methods typically use 3D imagery, but these can be expensive. To reduce application cost, there is a growing field investigating monocular algorithms that detect facial artifacts. These approaches work well in laboratory conditions, but can be sensitive to the imaging environment (e.g., sensor noise, dynamic lighting, etc.). The ideal solution for noise robustness is training under all expected conditions; however, this is time consuming and expensive. Instead, we propose that physics-informed noise-augmentations can pragmatically achieve robustness. Our toolbox contains twelve sensor and lighting effect generators. We demonstrate that our toolbox generates more robust PAD features than popular augmentation methods in noisy test-evaluations. We also observe that the toolbox improves accuracy on clean test data, suggesting that it inherently helps discern spoof artifacts from imaging artifacts. We validate this hypothesis through an ablation study, where we remove liveliness pairs (e.g., live or spoof imagery only for participants) to identify how much real data can be replaced with synthetic augmentations. We demonstrate that using these noise augmentations allows us to achieve better test accuracy while only requiring 30% of participants to be fully imaged under all conditions. These findings indicate that synthetic noise augmentations are a great way to improve PAD, addressing noise robustness while simplifying data collection.
29

Melhem, Mutaz, and Laszlo Kish. "A Static-loop-current Attack Against the Kirchhoff-Law-Johnson-Noise (KLJN) Secure Key Exchange System." Applied Sciences 9, no. 4 (2019): 666. http://dx.doi.org/10.3390/app9040666.

Abstract:
In this study, a new attack against the Kirchhoff-Law-Johnson-Noise (KLJN) key distribution system is explored. The attack is based on utilizing a parasitic voltage source in the loop. Relevant situations often exist in the low-frequency limit in practical systems, especially when the communication is over a distance, or between different units within an instrument, due to a ground loop and/or electromagnetic interference (EMI). Our present study investigates the DC ground loop situation when no AC or EMI effects are present. Surprisingly, the usual current/voltage comparison-based defense method that exposes active attacks or parasitic features (such as wire resistance allowing information leaks) does not function here. The attack is successfully demonstrated, and defense methods against it are proposed.
30

Sun, Tiankai, Xingyuan Wang, Da Lin, et al. "Medical image security authentication method based on wavelet reconstruction and fractal dimension." International Journal of Distributed Sensor Networks 17, no. 4 (2021): 155014772110141. http://dx.doi.org/10.1177/15501477211014132.

Abstract:
In this article, based on wavelet reconstruction and fractal dimension, a medical image authentication method is implemented. According to the local and global methods, the regularity of the mutation structure in the carrier information is analyzed through a measurement defined in the medical image transformation domain. To eliminate the redundancy of the reconstructed data, the fractal dimension is used to reduce the attributes of the reconstructed wavelet coefficients. According to the singularity of the fractal dimension of the block information, the key features are extracted and the fractal feature is constructed as the authentication feature of the images. The experimental results show that the authentication scheme has good robustness against attacks, such as JPEG compression, multiplicative noise, salt and pepper noise, Gaussian noise, image rotation, scaling attack, sharpening, clipping attack, median filtering, contrast enhancement, and brightness enhancement.
31

Fan, Di, Xiao Zhang, Wenshuo Kang, Huiyuan Zhao, and Yingjun Lv. "Video Watermarking Algorithm Based on NSCT, Pseudo 3D-DCT and NMF." Sensors 22, no. 13 (2022): 4752. http://dx.doi.org/10.3390/s22134752.

Abstract:
Video watermarking is an important means of video and multimedia copyright protection, but current watermarking algorithms find it difficult to ensure high robustness under various attacks. In this paper, a video watermarking algorithm based on NSCT, pseudo 3D-DCT and NMF is proposed. Combining NSCT, 3D-DCT and NMF, the algorithm embeds the encrypted QR code copyright watermark into the NMF base matrix to improve the anti-attack ability of the watermark under the condition of invisibility. The experimental results show that the algorithm ensures the invisibility of the watermark with a high signal-to-noise ratio of the video, and meanwhile has high robustness against common single and combined attacks, such as filtering, noise, compression, shearing, rotation and so on. The issue of video watermarking algorithms having poor resistance to various attacks, especially the shearing attack, has been addressed in this paper; thus, the algorithm can be used for digital multimedia video copyright protection.
32

Rahimi, Parisa, Amit Kumar Singh, and Xiaohang Wang. "Selective Noise Based Power-Efficient and Effective Countermeasure against Thermal Covert Channel Attacks in Multi-Core Systems." Journal of Low Power Electronics and Applications 12, no. 2 (2022): 25. http://dx.doi.org/10.3390/jlpea12020025.

Abstract:
With increasing interest in multi-core systems, such systems, like any communication infrastructure, can become targets for information leakage via covert channel communication. Covert channel attacks lead to the leaking of secret information and data. To design countermeasures against these threats, we need good knowledge about the classes of covert channel attacks along with their properties. A temperature-based covert communication channel, known as a Thermal Covert Channel (TCC), can pose a threat to the security of critical information and data. In this paper, we present a novel scheme against such TCC attacks. The scheme adds selective noise to the thermal signal so that any possible TCC attack is wiped out. The noise addition happens only at instances when there is a chance of correct information exchange, in order to increase the bit error rate (BER) while keeping the power consumption low. Our experiments illustrate that the BER of a TCC attack can increase to 94% while the power consumption remains similar to that of the state-of-the-art.
33

Ma, Yimin, and Shuli Sun. "Distributed Optimal and Self-Tuning Filters Based on Compressed Data for Networked Stochastic Uncertain Systems with Deception Attacks." Sensors 23, no. 1 (2022): 335. http://dx.doi.org/10.3390/s23010335.

Abstract:
In this study, distributed security estimation problems for networked stochastic uncertain systems subject to stochastic deception attacks are investigated. In sensor networks, the measurement data of sensor nodes may be attacked maliciously in the process of data exchange between sensors. When the attack rates and noise variances of the stochastic deception attack signals are known, the measurement data received from neighbour nodes are compressed at each sensor node by a weighted measurement fusion algorithm based on the least-squares method. A distributed optimal filter in the linear minimum variance criterion is presented based on the compressed measurement data. It has the same estimation accuracy as, and lower computational cost than, the filter based on uncompressed measurement data. When the attack rates and noise variances of the stochastic deception attack signals are unknown, a correlation function method is employed to identify them. Then, a distributed self-tuning filter is obtained by substituting the identified results into the distributed optimal filtering algorithm. The convergence of the presented algorithms is analyzed. A simulation example verifies the effectiveness of the proposed algorithms.
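The least-squares fusion step can be sketched as follows for one sensor node, with measurements z_i = H_i x + v_i and noise covariances R_i; the matrices are illustrative, and the attack-rate weighting of the actual algorithm is omitted.

import numpy as np

def fuse(H_list, R_list, z_list):
    info = sum(H.T @ np.linalg.inv(R) @ H for H, R in zip(H_list, R_list))
    vec = sum(H.T @ np.linalg.inv(R) @ z
              for H, R, z in zip(H_list, R_list, z_list))
    x_ls = np.linalg.solve(info, vec)   # fused least-squares estimate
    return x_ls, np.linalg.inv(info)    # estimate and its covariance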
35

Fan, Yuqi, Fuzhi He, and Zibang Nie. "Robustness analysis of Visual Transformer based on adversarial attacks." Applied and Computational Engineering 41, no. 1 (2024): 160–70. http://dx.doi.org/10.54254/2755-2721/41/20230737.

Abstract:
As model architectures deepen, large models (e.g., the Visual Transformer) perform increasingly well on vision tasks. Adversarial attacks are an important test of model robustness: by adding noise to the data, they interfere with the model's discriminative ability. Previous studies have found that adversarial attacks significantly impact small models (e.g., VGG16, ResNet18), while their effect on large models requires further testing. This paper conducts three experiments to examine how Visual Transformer (ViT) models perform against adversarial attacks. In Experiment 1, three attack methods (FGSM, I-FGSM, and MI-FGSM) are used to test the performance of ViT and several small models. In Experiment 2, the paper tests whether ViT can correctly classify noisy data that successfully attacked the small models. In Experiment 3, the paper examines the defense performance of ViT and VGG16 retrained on noisy data. The results show that (1) compared to small models, ViT has a stronger ability to resist noisy data; and (2) after retraining, ViT's performance improvement exceeds that of the small models.
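Of the three attacks, FGSM is the simplest; a PyTorch sketch follows (the epsilon is illustrative, and the model is assumed to return logits). I-FGSM repeats this step with a smaller step size, and MI-FGSM adds gradient momentum.

```python
import torch

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: one step along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Perturb toward higher loss, then keep pixels in the valid [0, 1] range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```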
36

Sadi, Mehdi, Bashir Mohammad Sabquat Bahar Talukder, Kaniz Mishty, and Md Tauhidur Rahman. "Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation." Information 14, no. 9 (2023): 516. http://dx.doi.org/10.3390/info14090516.

Abstract:
Universal adversarial perturbations are image-agnostic and model-independent noise that, when added to any image, can mislead trained deep convolutional neural networks into wrong predictions. Since these universal adversarial perturbations can seriously jeopardize the security and integrity of practical deep learning applications, existing techniques use additional neural networks to detect such noise at the input image source. In this paper, we demonstrate an attack strategy that, when activated by rogue means (e.g., malware, a trojan), can bypass these existing countermeasures by injecting the adversarial noise at the AI hardware accelerator stage. We demonstrate the accelerator-level universal adversarial noise attack on several deep learning models using co-simulation of the software kernel of the Conv2D function and the Verilog RTL model of the hardware under the FuseSoC environment.
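The defining property of a universal perturbation is that one precomputed tensor is added to every input. A minimal sketch of that application step (the perturbation budget eps is illustrative; how the UAP itself is computed, and the RTL-level injection, are not shown):

```python
import torch

def apply_uap(batch, uap, eps=10 / 255):
    """Add one universal perturbation to every image in a batch,
    as an accelerator-stage injector would."""
    delta = uap.clamp(-eps, eps)              # keep the perturbation small
    return (batch + delta.unsqueeze(0)).clamp(0, 1)   # broadcast over the batch
```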
37

Jiao, Jiajia, Ling Jiang, Quan Zhou, and Ran Wen. "Evaluating Large Language Model Application Impacts on Evasive Spectre Attack Detection." Electronics 14, no. 7 (2025): 1384. https://doi.org/10.3390/electronics14071384.

Abstract:
This paper investigates the impact of different Large Language Models (DeepSeek, Kimi, and Doubao) on the detection success rate of evasive Spectre attacks while the models process text, image, and code tasks. By running LLM tasks concurrently with evasive Spectre attacks, a unique dataset with LLM-induced noise was constructed. Clustering algorithms were then employed to reduce the dimensionality of the data and filter out representative samples for the test set. Finally, based on a random forest detection model, the study systematically evaluated the impact of different task types on the attack detection success rate. The experimental results indicate that the detection success rate follows the pattern "code > text > image" for both the evasive Spectre memory attack and the evasive Spectre nop attack. To further assess the influence of different architectures on evasive Spectre attacks, additional experiments were conducted on an NVIDIA RTX 3060 GPU. The results reveal that, on the RTX 3060, the detection success rate for code tasks decreased, while those for text and image tasks increased compared to the 2080 Ti. This finding suggests that architectural differences affect the behaviour of Hardware Performance Counters (HPCs), influencing the attack detection success rate.
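The pipeline the abstract describes, cluster the noisy samples, keep representatives, and classify with a random forest over HPC features, might be sketched as below; the feature arrays, labels, and cluster count are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def select_representatives(X, n_clusters=10):
    """Cluster noisy HPC samples; keep the index of the sample nearest each centroid."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    return [int(((X - c) ** 2).sum(axis=1).argmin()) for c in km.cluster_centers_]

# X_train / y_train: HPC feature vectors with attack/benign labels (hypothetical).
clf = RandomForestClassifier(n_estimators=100)
# clf.fit(X_train, y_train)
# idx = select_representatives(X_test)
# print(clf.score(X_test[idx], y_test[idx]))   # detection success on representatives
```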
38

Bhavana Sharma. "Image Quality Enhancement using Deep learning-based Convolution Residual Networks Technique." Journal of Information Systems Engineering and Management 10, no. 40s (2025): 1246–66. https://doi.org/10.52783/jisem.v10i40s.7876.

Abstract:
A new approach for denoising encrypted images is developed using a Deep Convolutional Residual Network (Deep ConvResNet). The aim of this research is to protect encrypted images from noise attacks by exploiting the denoising capabilities of ResNets, which have proven successful at removing noise while preserving important image characteristics. The research trains on multiple datasets and performs a detailed comparative study using the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), together with noise, occlusion, and blur attack metrics. Simulation results show that the suggested cryptosystem is resistant to the familiar attacks. Filtering-based denoising techniques and plain CNNs underperform ResNets, which show better efficiency and greater resilience to occlusion, noise, and blur attacks. The training-loss-versus-epoch curves show the convergence pattern of the model during training. This methodology is applicable to secure image transmission in domains such as healthcare and multimedia.
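PSNR, one of the two fidelity metrics used, has a standard definition; a generic NumPy formulation (not code from the paper):

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a denoised image."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```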
39

Jackson, Beren R., and Sam M. Dakka. "Computational fluid dynamics investigation into flow behavior and acoustic mechanisms at the trailing edge of an airfoil." Noise & Vibration Worldwide 49, no. 1 (2018): 20–31. http://dx.doi.org/10.1177/0957456517751455.

Abstract:
Airfoil self-noise (trailing edge noise) and shear noise were investigated computationally for a NACA 0012 airfoil section, focusing on noise mechanisms at the trailing edge to identify and understand sources of noise production using ANSYS Fluent. A two-dimensional computational fluid dynamics simulation was performed for 0°, 8°, and 16° angles of attack, capturing surface pressure contours, contours of turbulent intensity, contours of surface acoustic power level, vorticity magnitude levels across the airfoil profile, and x- and y-directional self-noise and shear noise sources across the airfoil profile. The results indicate that pressure gradients at the upper surface increase as the angle of attack increases, and that vortices near the surface of the trailing edge associated with turbulence cease as the boundary layer begins to separate. Comparison of the turbulent intensity contours with surface acoustic power level contours demonstrated a direct correlation between the energy contributed by turbulent structures (i.e., vortices) and the level of noise measured at the surface and within the boundary layer of the airfoil. As the angle of attack is increased, both x and y sources follow the same trends; however, y sources (perpendicular to the free-stream flow) appear to have a bigger impact. Furthermore, as the angle of attack increased, shear noise contributed less and less energy downstream of the airfoil and became dominated by noise energy from vortical structures within the turbulence. The simulation's pressure, turbulent intensity, and surface acoustic power contours further corroborated previously observed noise phenomena at the trailing edge of the airfoil.
40

Kanai, Sekitoshi, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, and Shuichi Adachi. "Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 4394–403. http://dx.doi.org/10.1609/aaai.v34i04.5865.

Abstract:
We propose Absum, which is a regularization method for improving adversarial robustness of convolutional neural networks (CNNs). Although CNNs can accurately recognize images, recent studies have shown that the convolution operations in CNNs commonly have structural sensitivity to specific noise composed of Fourier basis functions. By exploiting this sensitivity, they proposed a simple black-box adversarial attack: Single Fourier attack. To reduce structural sensitivity, we can use regularization of convolution filter weights since the sensitivity of linear transform can be assessed by the norm of the weights. However, standard regularization methods can prevent minimization of the loss function because they impose a tight constraint for obtaining high robustness. To solve this problem, Absum imposes a loose constraint; it penalizes the absolute values of the summation of the parameters in the convolution layers. Absum can improve robustness against single Fourier attack while being as simple and efficient as standard regularization methods (e.g., weight decay and L1 regularization). Our experiments demonstrate that Absum improves robustness against single Fourier attack more than standard regularization methods. Furthermore, we reveal that robust CNNs with Absum are more robust against transferred attacks due to decreasing the common sensitivity and against high-frequency noise than standard regularization methods. We also reveal that Absum can improve robustness against gradient-based attacks (projected gradient descent) when used with adversarial training.
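As described, Absum penalizes the absolute value of the sum of each convolution kernel's weights rather than each weight individually, which is the "loose constraint" the abstract mentions. A PyTorch sketch of such a penalty; this is an interpretation of the abstract, not the authors' code, and the coefficient lam is illustrative:

```python
import torch
import torch.nn as nn

def absum_penalty(model, lam=1e-4):
    """Penalize |sum of weights| per conv kernel, a looser constraint
    than penalizing every weight's magnitude (weight decay / L1)."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # Sum each kernel's spatial taps, then take absolute values.
            penalty = penalty + m.weight.sum(dim=(2, 3)).abs().sum()
    return lam * penalty

# In training: total_loss = task_loss + absum_penalty(model)
```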
41

Yin, Heng, Hengwei Zhang, Jindong Wang, and Ruiyu Dou. "Boosting Adversarial Attacks on Neural Networks with Better Optimizer." Security and Communication Networks 2021 (June 7, 2021): 1–9. http://dx.doi.org/10.1155/2021/9983309.

Abstract:
Convolutional neural networks have outperformed humans in image recognition tasks, but they remain vulnerable to attacks from adversarial examples. Since these data are crafted by adding imperceptible noise to normal images, their existence poses potential security threats to deep learning systems. Sophisticated adversarial examples with strong attack performance can also be used as a tool to evaluate the robustness of a model. However, the success rate of adversarial attacks can be further improved in black-box environments. Therefore, this study combines a modified Adam gradient descent algorithm with the iterative gradient-based attack method. The proposed Adam iterative fast gradient method is then used to improve the transferability of adversarial examples. Extensive experiments on ImageNet showed that the proposed method offers a higher attack success rate than existing iterative methods. By extending our method, we achieved a state-of-the-art attack success rate of 95.0% on defense models.
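The core idea is to replace the raw gradient in iterative FGSM with Adam-style moment estimates. A hedged PyTorch sketch of that idea follows; the step schedule and moment constants are illustrative guesses, and the paper's exact update may differ.

```python
import torch

def adam_ifgm(model, x, y, eps=8 / 255, steps=10, beta1=0.9, beta2=0.999):
    """Iterative FGSM whose step direction comes from Adam moment estimates."""
    adv = x.clone().detach()
    m = torch.zeros_like(x)   # first moment (momentum)
    v = torch.zeros_like(x)   # second moment (adaptive scaling)
    alpha = eps / steps
    for t in range(1, steps + 1):
        adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(adv), y)
        grad, = torch.autograd.grad(loss, adv)
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat, v_hat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
        step = m_hat / (v_hat.sqrt() + 1e-8)
        # Take a signed step, then project back into the eps-ball and [0, 1].
        adv = (adv.detach() + alpha * step.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return adv
```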
42

Yuan, Ye, Liji Wu, Yijun Yang, and Xiangmin Zhang. "A Novel Multiple-Bits Collision Attack Based on Double Detection with Error-Tolerant Mechanism." Security and Communication Networks 2018 (June 5, 2018): 1–13. http://dx.doi.org/10.1155/2018/2483619.

Abstract:
Side-channel collision attacks are more powerful than traditional side-channel attacks because they require neither knowing nor modelling the leakage. However, most previously proposed attack strategies need large numbers of power traces, have high computational complexity, and are sensitive to errors, which seriously restricts attack efficiency. In this paper, we propose a multiple-bits side-channel collision attack based on double distance voting detection (DDVD), together with an improved version that adds an error-tolerant mechanism, which can find all 120 relations among the 16 key bytes when applied to the AES (Advanced Encryption Standard) algorithm. In addition, we compare our DDVD collision detection method with the Euclidean distance method and the correlation-enhanced collision method under different noise intensities, which shows that our detection technique performs better under noise. Furthermore, the 4-bit model of our collision detection method is proven optimal both in theory and in practice. Corresponding practical attack experiments were also performed successfully on a hardware implementation of AES-128 on an FPGA board. Results show that our strategy needs less computation time but more traces than the LDPC method, and its online time is about 90% less than CECA and 96% less than BCA at a 90% success rate.
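For reference, the Euclidean-distance baseline that DDVD is compared against can be sketched in a few lines; the threshold is a tuning parameter, and this is the textbook test, not the paper's voting scheme.

```python
import numpy as np

def detect_collision(traces_a, traces_b, threshold):
    """Decide whether two S-box computations collide by comparing the
    averaged power-trace segments with the Euclidean distance."""
    mean_a = traces_a.mean(axis=0)   # average over acquisitions to suppress noise
    mean_b = traces_b.mean(axis=0)
    return np.linalg.norm(mean_a - mean_b) < threshold
```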
43

Priyanka, Mishra, and Ahuja Rakesh. "Highly Robust and Imperceptible Digital Watermark Technique by Exploiting Various Noise Attacks." International Journal of Innovative Science and Research Technology 7, no. 3 (2022): 987–93. https://doi.org/10.5281/zenodo.6433971.

Abstract:
Due to the rapid circulation of multimedia content over networks, protecting images from piracy has become a pressing issue, so copyright mechanisms that preserve ownership are required as a high priority, ensuring protected images can be claimed only by their actual owner. Digital image watermarking is a technology invented for securing images from illicit use. There are many watermarking techniques in the spatial as well as the frequency domain, such as the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), which are widely known to be robust and imperceptible. In this paper, the literature review analyzes different attacks on images, such as Gaussian blur, salt-and-pepper, Gaussian noise, geometric, and joint attacks. These attacks are implemented on DWT and DCT watermarked images using Python code, and the performance before and after each attack is measured by the peak signal-to-noise ratio (PSNR) and the normalized correlation (NC) to assess the robustness of the images.
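NC, the robustness metric used alongside PSNR, compares the original watermark with the one extracted after an attack; a standard NumPy formulation (not the paper's code), where a value near 1.0 indicates the watermark survived:

```python
import numpy as np

def normalized_correlation(w_orig, w_extracted):
    """NC between the original and extracted watermark arrays."""
    a = w_orig.astype(np.float64).ravel()
    b = w_extracted.astype(np.float64).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```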
44

Zimba, Aaron, and Mumbi Chishimba. "Exploitation of DNS Tunneling for Optimization of Data Exfiltration in Malware-free APT Intrusions." Zambia ICT Journal 1, no. 1 (2017): 51–56. http://dx.doi.org/10.33260/zictjournal.v1i1.26.

Abstract:
One of the main goals of targeted attacks is data exfiltration. Attackers penetrate systems using various attack vectors, but the hurdle comes in exfiltrating the data; APT attackers may even reside in a host for long periods while seeking the best option to exfiltrate it. Most data exfiltration techniques are prone to detection by intrusion detection systems. Therefore, data exfiltration methodologies that generate little noise, if any at all, are attractive to attackers and can go undetected for long periods owing to the low level of generated noise in the form of network traffic and system calls. In this paper, we present malware-free intrusion, an attack methodology which does not explicitly use malware to exfiltrate data. Our attack structure exploits system services and resources including RDP, PowerShell, the Windows accessibility backdoor, and DNS tunneling. Results show that it is possible to exfiltrate data from vulnerable hosts using malware-free intrusion as an infection vector and DNS tunneling as a data exfiltration technique. We test the attack on both Windows and Linux systems over different networks. Mitigation techniques are suggested based on traffic analysis captured from the established secure DNS tunnels on the network.
45

Kim, Minji, Youngho Cho, Hweerang Park, and Gang Qu. "ASIGM: An Innovative Adversarial Stego Image Generation Method for Fooling Convolutional Neural Network-Based Image Steganalysis Models." Electronics 14, no. 4 (2025): 764. https://doi.org/10.3390/electronics14040764.

Abstract:
To defeat AI-based steganalysis systems, various techniques using adversarial example attack methods have been reported. In these techniques, adversarial stego images are generated using adversarial attack algorithms and steganography embedding algorithms sequentially and independently. However, this approach can be inefficient because both algorithms independently insert perturbations into a cover image, and the steganography embedding algorithm could significantly lower the undetectability or indistinguishability of adversarial attacks. To address this issue, we propose an innovative adversarial stego image generation method (ASIGM) that fully integrates the two separate algorithms by using the Jacobian-based Saliency Map Attack (JSMA). JSMA, one of the representative l0 norm-based adversarial example attack methods, is used to compute a set of pixels in the cover image that increases the probability of being classified as the non-stego class by the steganalysis model. The reason for this calculation is that if a secret message is inserted into the limited set of pixels in such a way, noise is only required for message embedding, and even misclassification of the target steganalysis model can be achieved without additional noise insertion. The experimental results demonstrate that our proposed ASIGM outperforms two representative steganography methods (WOW and ADS-WOW).
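The saliency step the abstract describes, finding pixels whose modification pushes the steganalysis model toward the non-stego class, might look roughly like this in PyTorch. The model, class index, and k are hypothetical, and the full ASIGM embedding pipeline is not shown.

```python
import torch

def non_stego_pixel_set(model, cover, non_stego_class=0, k=256):
    """Pick the k pixels whose gradients most increase the 'non-stego' logit;
    message bits would then be embedded only at those positions."""
    x = cover.clone().detach().requires_grad_(True)   # cover image (C, H, W)
    logit = model(x.unsqueeze(0))[0, non_stego_class]
    logit.backward()
    saliency = x.grad.sum(dim=0)                      # (H, W); positive helps the target
    return torch.topk(saliency.flatten(), k).indices  # flat indices of chosen pixels
```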
46

Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.

Abstract:
With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real world applications has become an important research topic. Backdoor attacks are a form of adversarial attacks on deep networks where the attacker provides poisoned data to the victim to train the model with, and then activates the attack by showing a specific small trigger pattern at the test time. Most state-of-the-art backdoor attacks either provide mislabeled poisoning data that is possible to identify by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack where poisoned data look natural with correct labels and also more importantly, the attacker hides the trigger in the poisoned data and keeps the trigger secret until the test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images although the model performs well on clean data. We also show that our proposed attack cannot be easily defended using a state-of-the-art defense algorithm for backdoor attacks.
47

Huang, Hai, Jinming Wu, Xinling Tang, Shilei Zhao, Zhiwei Liu, and Bin Yu. "Deep learning-based improved side-channel attacks using data denoising and feature fusion." PLOS ONE 20, no. 4 (2025): e0315340. https://doi.org/10.1371/journal.pone.0315340.

Abstract:
Deep learning, as a high-performance data analysis method, has demonstrated superior efficiency and accuracy in side-channel attacks compared to traditional methods. However, many existing models enhance accuracy by stacking network layers, leading to increased algorithmic and computational complexity, overfitting, low training efficiency, and limited feature extraction capabilities. Moreover, deep learning methods rely on data correlation, and the presence of noise tends to reduce this correlation, increasing the difficulty of attacks. To address these challenges, this paper proposes an InceptionNet-based network structure for side-channel attacks. This network uses fewer training parameters, achieves faster convergence, and demonstrates improved attack efficiency through parallel processing of the input data. Additionally, a LU-Net-based network structure is proposed for denoising side-channel datasets. This network captures the characteristics of input signals through an encoder, reconstructs denoised signals using a decoder, and uses LSTM layers and skip connections to preserve the temporal coherence and spatial details of the signals, thereby achieving denoising. Comparative experiments were conducted on the ASCAD dataset and the DPA Contest v4 dataset. The results indicate that the proposed deep learning attack model effectively enhances side-channel attack performance: on the ASCAD dataset, key recovery requires only 30 traces, and on the DPA Contest v4 dataset only a single trace is needed. Furthermore, the proposed denoising model significantly reduces the impact of noise on side-channel attack performance, thereby improving efficiency.
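An encoder/LSTM/decoder denoiser with a skip connection, as the abstract outlines, might be sketched like this in PyTorch; layer sizes are illustrative and the actual LU-Net is deeper. It would be trained with an MSE loss between the denoised output and clean reference traces.

```python
import torch
import torch.nn as nn

class TraceDenoiser(nn.Module):
    """Toy 1-D power-trace denoiser: conv encoder, LSTM bottleneck, conv decoder."""
    def __init__(self, hidden=64):
        super().__init__()
        self.enc = nn.Conv1d(1, hidden, kernel_size=11, padding=5)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.dec = nn.Conv1d(hidden, 1, kernel_size=11, padding=5)

    def forward(self, x):                        # x: (batch, 1, samples)
        h = torch.relu(self.enc(x))
        h, _ = self.lstm(h.transpose(1, 2))      # LSTM keeps temporal coherence
        return self.dec(h.transpose(1, 2)) + x   # skip connection preserves detail
```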
48

Gao, Wei, Yunqing Liu, Yi Zeng, Quanyang Liu, and Qi Li. "SAR Image Ship Target Detection Adversarial Attack and Defence Generalization Research." Sensors 23, no. 4 (2023): 2266. http://dx.doi.org/10.3390/s23042266.

Abstract:
The synthetic aperture radar (SAR) image ship detection system needs to adapt to an increasingly complicated real environment, and the requirements for the stability of the detection system continue to increase. Adversarial attacks deliberately add subtle interference to input samples and cause models to output errors with high confidence. This creates potential risks: input data containing adversarial samples can easily be used by malicious actors to attack a system. For a safe and stable model, attack algorithms need to be studied. The goal of traditional attack algorithms is to destroy models, and when defending against attack samples, a system does not consider the generalization ability of the model. Therefore, this paper introduces an attack algorithm that can improve model generalization, based on the properties of Gaussian noise, which is widespread in actual SAR systems. The attack data generated by this method strongly affect SAR ship detection models and can greatly reduce the accuracy of ship recognition models. When defending against attacks, filtering the attack data can effectively improve the model's defence capabilities. Defence training greatly improves the anti-attack capacity, and the generalization capacity of the model improves accordingly.
49

Kachko, Olena, Yurii Gorbenko, Serhii Kandii, and Yevhenii Kaptol. "Improving protection of falcon electronic signature software implementations against attacks based on floating point noise." Eastern-European Journal of Enterprise Technologies 4, no. 9 (130) (2024): 6–17. http://dx.doi.org/10.15587/1729-4061.2024.310521.

Abstract:
The object of this study is digital signatures. The Falcon digital signature scheme is one of the finalists in the NIST post-quantum cryptography competition. Its distinctive feature is the use of floating-point arithmetic, which enables a key recovery attack using two non-matching signatures formed under special conditions. The work addresses improving Falcon to prevent such attacks, as well as using fixed-point instead of floating-point calculations in the Falcon scheme. The main results are proposed methods for improving Falcon's security against attacks that exploit floating-point calculations. These methods differ from others in using fixed-point calculations with specific, experimentally determined scales in one case, and in modifying the procedures during which the conditions for an implementation-level attack arise in the other. The analysis clarifies the probability of a successful secret-key recovery attack on the reference implementation of Falcon. The specific places in the code that make the attack possible have been localized, and code modifications that make the attack impossible have been suggested. In addition, the scale necessary for fixed-point calculations has been determined, at which floating-point calculations can be eliminated entirely. The results can be used to qualitatively improve the security of existing digital signatures, making it possible to design more reliable and secure information systems, and can be implemented in existing systems to ensure their resistance to modern threats.
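The fixed-point idea replaces platform-dependent floating-point rounding with exact integer arithmetic at a fixed scale. A minimal Python sketch of the representation; the scale of 2^32 is a placeholder, since the paper determines the required scale experimentally:

```python
SCALE_BITS = 32            # hypothetical; the necessary scale is found empirically
SCALE = 1 << SCALE_BITS

def to_fixed(x: float) -> int:
    """Represent a real number as an integer multiple of 2**-SCALE_BITS."""
    return round(x * SCALE)

def fx_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values, rescaling the double-width product."""
    return (a * b) >> SCALE_BITS

# Identical integer operations on every platform leave no floating-point
# rounding divergence for an attacker to exploit.
```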
50

Muñoz, Ernesto Cadena, Gustavo Chica Pedraza, Rafael Cubillos-Sánchez, Alexander Aponte-Moreno, and Mónica Espinosa Buitrago. "PUE Attack Detection by Using DNN and Entropy in Cooperative Mobile Cognitive Radio Networks." Future Internet 15, no. 6 (2023): 202. http://dx.doi.org/10.3390/fi15060202.

Abstract:
The primary user emulation (PUE) attack is one of the strongest attacks in mobile cognitive radio networks (MCRN) because the primary users (PU) and secondary users (SU) are unable to communicate if a malicious user (MU) is present. Some techniques in the literature are used to detect the attack; however, they do not explore cooperative detection of PUE attacks using deep neural networks (DNN) in a single MCRN with experimental results on software-defined radio (SDR). In this paper, we design and implement a PUE attack in an MCRN, including a countermeasure based on the entropy of the signals, a DNN, and cooperative spectrum sensing (CSS) to detect the attacks. A blacklist is included in the fusion center (FC) to record the data of the MU. The scenarios are simulated and implemented on the SDR testbed. Results show that this solution increases the probability of detection (PD) by 20% for lower signal-to-noise ratio (SNR) values, allowing detection of the PUE attack, recording of the attacker's data for future reference, and sharing of the data with all the SUs.
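The entropy feature at the heart of the countermeasure can be computed from a histogram of received-signal amplitudes; a generic NumPy sketch (the bin count is arbitrary, and this is not the authors' exact feature extraction):

```python
import numpy as np

def shannon_entropy(samples, bins=64):
    """Shannon entropy (bits) of a received-signal amplitude histogram;
    legitimate PU signals and emulated ones tend to differ in entropy."""
    hist, _ = np.histogram(samples, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())
```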