To see the other types of publications on this topic, follow the link: Sparse deep neural networks.

Journal articles on the topic 'Sparse deep neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Sparse deep neural networks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Scardapane, Simone, Danilo Comminiello, Amir Hussain, and Aurelio Uncini. "Group sparse regularization for deep neural networks." Neurocomputing 241 (June 2017): 81–89. http://dx.doi.org/10.1016/j.neucom.2017.02.029.

2

Bi, Jia, and Steve R. Gunn. "Sparse Deep Neural Network Optimization for Embedded Intelligence." International Journal on Artificial Intelligence Tools 29, no. 03n04 (2020): 2060002. http://dx.doi.org/10.1142/s0218213020600027.

Abstract:
Deep neural networks have become increasingly popular because of their ability to solve very complex pattern recognition problems. However, deep neural networks often need massive computational and memory resources, which is the main reason they are difficult to run efficiently and entirely on embedded platforms. This work addresses the problem by reducing the computational and memory requirements of deep neural networks, proposing a variance-reduced (VR) optimization method with regularization techniques to compress the memory requirements of models within a fast training process. It is shown theoreticall
3

Wu, Kailun, Yiwen Guo, and Changshui Zhang. "Compressing Deep Neural Networks With Sparse Matrix Factorization." IEEE Transactions on Neural Networks and Learning Systems 31, no. 10 (2020): 3828–38. http://dx.doi.org/10.1109/tnnls.2019.2946636.

4

Zang, Ke, Wenqi Wu, and Wei Luo. "Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks." Sensors 21, no. 19 (2021): 6410. http://dx.doi.org/10.3390/s21196410.

Abstract:
Deep learning models, especially recurrent neural networks (RNNs), have been successfully applied to automatic modulation classification (AMC) problems recently. However, deep neural networks are usually overparameterized, i.e., most of the connections between neurons are redundant. The large model size hinders the deployment of deep neural networks in applications such as Internet-of-Things (IoT) networks. Therefore, reducing parameters without compromising the network performance via sparse learning is often desirable since it can alleviate the computational and storage burdens of deep lear
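To make the "overparameterized" point above concrete, the following minimal NumPy sketch (not taken from the cited paper; the layer shape and 90% sparsity level are made up for illustration) estimates how much storage a layer could save if most small-magnitude weights were driven to zero and only non-zero values plus indices were kept.

```python
import numpy as np

# Hypothetical dense layer weights; the shape is chosen purely for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=(512, 256)).astype(np.float32)

# Mimic sparse learning by zeroing the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(w), 0.90)
w_sparse = np.where(np.abs(w) >= threshold, w, 0.0)

# Compare dense storage with a rough sparse cost of one value + one index per non-zero.
nnz = int(np.count_nonzero(w_sparse))
dense_bytes = w.size * w.itemsize
sparse_bytes = nnz * (4 + 4)
print(f"sparsity: {np.mean(w_sparse == 0.0):.2%}")
print(f"dense:  {dense_bytes / 1024:.0f} KiB, sparse: {sparse_bytes / 1024:.0f} KiB")
```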
5

Tartaglione, Enzo, Andrea Bragagnolo, Attilio Fiandrotti, and Marco Grangetto. "LOss-Based SensiTivity rEgulaRization: Towards deep sparse neural networks." Neural Networks 146 (February 2022): 230–37. http://dx.doi.org/10.1016/j.neunet.2021.11.029.

6

Ma, Rongrong, Jianyu Miao, Lingfeng Niu, and Peng Zhang. "Transformed ℓ1 regularization for learning sparse deep neural networks." Neural Networks 119 (November 2019): 286–98. http://dx.doi.org/10.1016/j.neunet.2019.08.015.

7

Zhao, Jin, and Licheng Jiao. "Fast Sparse Deep Neural Networks: Theory and Performance Analysis." IEEE Access 7 (2019): 74040–55. http://dx.doi.org/10.1109/access.2019.2920688.

8

Han, Yoonsang, Inseo Kim, Jinsung Kim, and Gordon Euhyun Moon. "Tensor Core-Adapted Sparse Matrix Multiplication for Accelerating Sparse Deep Neural Networks." Electronics 13, no. 20 (2024): 3981. http://dx.doi.org/10.3390/electronics13203981.

Abstract:
Sparse matrix–matrix multiplication (SpMM) is essential for deep learning models and scientific computing. Recently, Tensor Cores (TCs) on GPUs, originally designed for dense matrix multiplication with mixed precision, have gained prominence. However, utilizing TCs for SpMM is challenging due to irregular memory access patterns and a varying number of non-zero elements in a sparse matrix. To improve data locality, previous studies have proposed reordering sparse matrices before multiplication, but this adds computational overhead. In this paper, we propose Tensor Core-Adapted SpMM (TCA-SpMM),
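For readers unfamiliar with SpMM, the short SciPy sketch below shows the operation itself, multiplying a CSR-format sparse matrix by a dense matrix. It only illustrates the computation, not the Tensor Core mapping that TCA-SpMM proposes, and the matrix sizes and density are arbitrary.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# A CSR sparse matrix with ~5% non-zeros and a dense matrix (illustrative sizes).
A = sparse_random(1024, 1024, density=0.05, format="csr", dtype=np.float32)
B = np.random.rand(1024, 64).astype(np.float32)

# SpMM: sparse @ dense -> dense; CSR stores the matrix as (data, indices, indptr).
C = A @ B
print(C.shape, "from", A.nnz, "non-zeros out of", A.shape[0] * A.shape[1])
```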
9

Gallicchio, Claudio, and Alessio Micheli. "Fast and Deep Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3898–905. http://dx.doi.org/10.1609/aaai.v34i04.5803.

Abstract:
We address the efficiency issue for the construction of a deep graph neural network (GNN). The approach exploits the idea of representing each input graph as a fixed point of a dynamical system (implemented through a recurrent neural network), and leverages a deep architectural organization of the recurrent units. Efficiency is gained by many aspects, including the use of small and very sparse networks, where the weights of the recurrent units are left untrained under the stability condition introduced in this work. This can be viewed as a way to study the intrinsic power of the architecture o
10

Jasmin, Praful Bharadiya. "Convolutional Neural Networks for Image Classification." International Journal of Innovative Science and Research Technology 8, no. 5 (2023): 673–77. https://doi.org/10.5281/zenodo.8020781.

Abstract:
Deep learning has recently been applied to scene labelling, object tracking, pose estimation, text detection and recognition, visual saliency detection, and image categorization. Deep learning typically uses models like Auto Encoder, Sparse Coding, Restricted Boltzmann Machine, Deep Belief Networks, and Convolutional Neural Networks. Convolutional neural networks have exhibited good performance in picture categorization when compared to other types of models. A straightforward Convolutional neural network for image categorization was built in this paper. The image classification was finished b
11

Ohn, Ilsang, and Yongdai Kim. "Nonconvex Sparse Regularization for Deep Neural Networks and Its Optimality." Neural Computation 34, no. 2 (2022): 476–517. http://dx.doi.org/10.1162/neco_a_01457.

Abstract:
Recent theoretical studies proved that deep neural network (DNN) estimators obtained by minimizing empirical risk with a certain sparsity constraint can attain optimal convergence rates for regression and classification problems. However, the sparsity constraint requires knowing certain properties of the true model, which are not available in practice. Moreover, computation is difficult due to the discrete nature of the sparsity constraint. In this letter, we propose a novel penalized estimation method for sparse DNNs that resolves the problems existing in the sparsity constraint. We
12

Kong, Fanqiang, Zhijie Lv, Kun Wang, Xu Fang, Yuhan Zheng, and Shengjie Yu. "A Variable-Iterative Fully Convolutional Neural Network for Sparse Unmixing." Photogrammetric Engineering & Remote Sensing 90, no. 11 (2024): 699–706. http://dx.doi.org/10.14358/pers.24-00038r2.

Abstract:
Neural networks have greatly promoted the development of hyperspectral unmixing (HU). Most data-driven deep networks extract features of hyperspectral images (HSIs) by stacking convolutional layers to achieve endmember extraction and abundance estimation. Some model-driven networks have strong interpretability but fail to mine the deep feature. We propose a variable-iterative fully convolutional neural network (VIFCNN) for sparse unmixing, combining the characteristics of these two networks. Under the model-driven iterative framework guided by sparse unmixing by variable splitting and augmente
13

Wan, Xinyue, Bofeng Zhang, Guobing Zou, and Furong Chang. "Sparse Data Recommendation by Fusing Continuous Imputation Denoising Autoencoder and Neural Matrix Factorization." Applied Sciences 9, no. 1 (2018): 54. http://dx.doi.org/10.3390/app9010054.

Abstract:
In recent years, although deep neural networks have yielded immense success in solving various recognition and classification problems, the exploration of deep neural networks in recommender systems has received relatively less attention. Meanwhile, the inherent sparsity of data is still a challenging problem for deep neural networks. In this paper, firstly, we propose a new CIDAE (Continuous Imputation Denoising Autoencoder) model based on the Denoising Autoencoder to alleviate the problem of data sparsity. CIDAE performs regular continuous imputation on the missing parts of the original data
14

Lee, Sangkyun, and Jeonghyun Lee. "Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems." Applied Sciences 9, no. 8 (2019): 1669. http://dx.doi.org/10.3390/app9081669.

Abstract:
Deep neural networks (DNNs) have been quite successful in solving many complex learning problems. However, DNNs tend to have a large number of learning parameters, leading to a large memory and computation requirement. In this paper, we propose a model compression framework for efficient training and inference of deep neural networks on embedded systems. Our framework provides data structures and kernels for OpenCL-based parallel forward and backward computation in a compressed form. In particular, our method learns sparse representations of parameters using ℓ 1 -based sparse coding while trai
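The ℓ1-based sparse coding of parameters mentioned in this abstract is typically realized with the soft-thresholding (proximal) operator; a generic NumPy version is sketched below. This is standard background only and does not reproduce the paper's OpenCL data structures or training procedure.

```python
import numpy as np

def soft_threshold(w: np.ndarray, lam: float) -> np.ndarray:
    """Proximal operator of lam * ||w||_1: shrinks weights and zeroes small ones."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([0.8, -0.05, 0.3, -0.9, 0.02])
print(soft_threshold(w, lam=0.1))  # small entries become exactly zero
```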
15

Petschenig, Horst, and Robert Legenstein. "Quantized rewiring: hardware-aware training of sparse deep neural networks." Neuromorphic Computing and Engineering 3, no. 2 (2023): 024006. http://dx.doi.org/10.1088/2634-4386/accd8f.

Abstract:
Mixed-signal and fully digital neuromorphic systems have been of significant interest for deploying spiking neural networks in an energy-efficient manner. However, many of these systems impose constraints in terms of fan-in, memory, or synaptic weight precision that have to be considered during network design and training. In this paper, we present quantized rewiring (Q-rewiring), an algorithm that can train both spiking and non-spiking neural networks while meeting hardware constraints during the entire training process. To demonstrate our approach, we train both feedforward and recu
16

Kaur, Mandeep, and Pradip Kumar Yadava. "A Review on Classification of Images with Convolutional Neural Networks." International Journal for Research in Applied Science and Engineering Technology 11, no. 7 (2023): 658–63. http://dx.doi.org/10.22214/ijraset.2023.54704.

Abstract:
Deep learning has recently been applied to scene labelling, object tracking, pose estimation, text detection and recognition, visual saliency detection, and image categorization. Deep learning typically uses models like Auto Encoder, Sparse Coding, Restricted Boltzmann Machine, Deep Belief Networks, and Convolutional Neural Networks. Convolutional neural networks have exhibited good performance in picture categorization when compared to other types of models. A straightforward Convolutional neural network for image categorization was built in this paper. The image classification was
17

Gangopadhyay, Briti, Pallab Dasgupta, and Soumyajit Dey. "Safety Aware Neural Pruning for Deep Reinforcement Learning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (2023): 16212–13. http://dx.doi.org/10.1609/aaai.v37i13.26966.

Abstract:
Neural network pruning is a technique of network compression by removing weights of lower importance from an optimized neural network. Often, pruned networks are compared in terms of accuracy, which is realized in terms of rewards for Deep Reinforcement Learning (DRL) networks. However, networks that estimate control actions for safety-critical tasks, must also adhere to safety requirements along with obtaining rewards. We propose a methodology to iteratively refine the weights of a pruned neural network such that we get a sparse high-performance network without significant side effects on saf
18

Belay, Kaleab. "Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 13126–27. http://dx.doi.org/10.1609/aaai.v36i11.21699.

Abstract:
Deep Neural Networks have memory and computational demands that often render them difficult to use in low-resource environments. Also, highly dense networks are over-parameterized and thus prone to overfitting. To address these problems, we introduce a novel algorithm that prunes (sparsifies) weights from the network by taking into account their magnitudes and gradients taken against a validation dataset. Unlike existing pruning methods, our method does not require the network model to be retrained once initial training is completed. On the CIFAR-10 dataset, our method reduced the number of pa
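Because the abstract is truncated, the exact scoring rule is not visible here; one plausible reading is that each weight is scored by combining its magnitude with the magnitude of its gradient on a validation set, for example |w|·|∂L/∂w|, and the lowest-scoring fraction is pruned. The NumPy sketch below illustrates that generic idea, not the paper's precise algorithm.

```python
import numpy as np

def keep_mask(weights: np.ndarray, grads: np.ndarray, prune_frac: float) -> np.ndarray:
    """Boolean mask keeping weights whose combined magnitude-and-gradient
    score is above the prune_frac quantile (illustrative criterion only)."""
    score = np.abs(weights) * np.abs(grads)
    return score > np.quantile(score, prune_frac)

rng = np.random.default_rng(1)
w = rng.normal(size=1000)
g = rng.normal(size=1000)             # stand-in for gradients on a validation set
mask = keep_mask(w, g, prune_frac=0.8)
print("fraction kept:", mask.mean())  # roughly 0.2
```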
19

Gong, Maoguo, Jia Liu, Hao Li, Qing Cai, and Linzhi Su. "A Multiobjective Sparse Feature Learning Model for Deep Neural Networks." IEEE Transactions on Neural Networks and Learning Systems 26, no. 12 (2015): 3263–77. http://dx.doi.org/10.1109/tnnls.2015.2469673.

20

Boo, Yoonho, and Wonyong Sung. "Compression of Deep Neural Networks with Structured Sparse Ternary Coding." Journal of Signal Processing Systems 91, no. 9 (2018): 1009–19. http://dx.doi.org/10.1007/s11265-018-1418-z.

21

Karim, Ahmad M., Mehmet S. Güzel, Mehmet R. Tolun, Hilal Kaya, and Fatih V. Çelebi. "A New Generalized Deep Learning Framework Combining Sparse Autoencoder and Taguchi Method for Novel Data Classification and Processing." Mathematical Problems in Engineering 2018 (June 7, 2018): 1–13. http://dx.doi.org/10.1155/2018/3145947.

Abstract:
Deep autoencoder neural networks have been widely used in several image classification and recognition problems, including hand-writing recognition, medical imaging, and face recognition. The overall performance of deep autoencoder neural networks mainly depends on the number of parameters used, structure of neural networks, and the compatibility of the transfer functions. However, an inappropriate structure design can cause a reduction in the performance of deep autoencoder neural networks. A novel framework, which primarily integrates the Taguchi Method to a deep autoencoder based system wit
22

Mousavi, Hamid, Mohammad Loni, Mina Alibeigi, and Masoud Daneshtalab. "DASS: Differentiable Architecture Search for Sparse Neural Networks." ACM Transactions on Embedded Computing Systems 22, no. 5s (2023): 1–21. http://dx.doi.org/10.1145/3609385.

Abstract:
The deployment of Deep Neural Networks (DNNs) on edge devices is hindered by the substantial gap between performance requirements and available computational power. While recent research has made significant strides in developing pruning methods to build a sparse network for reducing the computing overhead of DNNs, there remains considerable accuracy loss, especially at high pruning ratios. We find that the architectures designed for dense networks by differentiable architecture search methods are ineffective when pruning mechanisms are applied to them. The main reason is that the current meth
23

Liu, Runjie, Qionggui Zhang, Yuankang Zhang, Rui Zhang, and Tao Meng. "Deep Learning-Based Transmitter Localization in Sparse Wireless Sensor Networks." Sensors 24, no. 16 (2024): 5335. http://dx.doi.org/10.3390/s24165335.

Abstract:
In the field of wireless communication, transmitter localization technology is crucial for achieving accurate source tracking. However, the extant methodologies for localization face numerous challenges in wireless sensor networks (WSNs), particularly due to the constraints posed by the sparse distribution of sensors across large areas. We present DSLoc, a deep learning-based approach for transmitter localization in sparse WSNs. Our method is based on an improved high-resolution network model in neural networks. To address localization in sparse wireless sensor networks, we design efficient fe
24

Avgerinos, Christos, Nicholas Vretos, and Petros Daras. "Less Is More: Adaptive Trainable Gradient Dropout for Deep Neural Networks." Sensors 23, no. 3 (2023): 1325. http://dx.doi.org/10.3390/s23031325.

Abstract:
The undeniable computational power of artificial neural networks has granted the scientific community the ability to exploit the available data in ways previously inconceivable. However, deep neural networks require an overwhelming quantity of data in order to interpret the underlying connections between them, and therefore, be able to complete the specific task that they have been assigned to. Feeding a deep neural network with vast amounts of data usually ensures efficiency, but may, however, harm the network’s ability to generalize. To tackle this, numerous regularization techniques have be
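As background for the "gradient dropout" regularizer discussed above, ordinary inverted dropout can be written in a few lines; the sketch below shows only this standard baseline, not the adaptive, trainable variant the paper proposes.

```python
import numpy as np

def dropout(x: np.ndarray, p: float, training: bool = True) -> np.ndarray:
    """Inverted dropout: zero each unit with probability p and rescale the rest."""
    if not training or p == 0.0:
        return x
    keep = (np.random.rand(*x.shape) >= p).astype(x.dtype)
    return x * keep / (1.0 - p)

h = np.ones((4, 8), dtype=np.float32)
print(dropout(h, p=0.5).mean())  # stays close to 1.0 in expectation
```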
25

Hao, Yutong, Yunpeng Liu, Jinmiao Zhao, and Chuang Yu. "Dual-Domain Prior-Driven Deep Network for Infrared Small-Target Detection." Remote Sensing 15, no. 15 (2023): 3827. http://dx.doi.org/10.3390/rs15153827.

Abstract:
In recent years, data-driven deep networks have demonstrated remarkable detection performance for infrared small targets. However, continuously increasing the depth of neural networks to enhance performance has proven impractical. Consequently, the integration of prior physical knowledge related to infrared small targets within deep neural networks has become crucial. It aims to improve the models’ awareness of inherent physical characteristics. In this paper, we propose a novel dual-domain prior-driven deep network (DPDNet) for infrared small-target detection. Our method integrates the advant
26

Qiao, Chen, Yan Shi, Yu-Xian Diao, Vince D. Calhoun, and Yu-Ping Wang. "Log-sum enhanced sparse deep neural network." Neurocomputing 407 (September 2020): 206–20. http://dx.doi.org/10.1016/j.neucom.2020.04.118.

27

Ao, Ren, Zhang Tao, Wang Yuhao, et al. "DARB: A Density-Adaptive Regular-Block Pruning for Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 5495–502. http://dx.doi.org/10.1609/aaai.v34i04.6000.

Abstract:
The rapidly growing parameter volume of deep neural networks (DNNs) hinders the artificial intelligence applications on resource constrained devices, such as mobile and wearable devices. Neural network pruning, as one of the mainstream model compression techniques, is under extensive study to reduce the model size and thus the amount of computation. And thereby, the state-of-the-art DNNs are able to be deployed on those devices with high runtime energy efficiency. In contrast to irregular pruning that incurs high index storage and decoding overhead, structured pruning techniques have been prop
28

Wan, Lulu, Tao Chen, Antonio Plaza, and Haojie Cai. "Hyperspectral Unmixing Based on Spectral and Sparse Deep Convolutional Neural Networks." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14 (2021): 11669–82. http://dx.doi.org/10.1109/jstars.2021.3126755.

29

Khattak, Muhammad Irfan, Nasir Saleem, Jiechao Gao, Elena Verdu, and Javier Parra Fuente. "Regularized sparse features for noisy speech enhancement using deep neural networks." Computers and Electrical Engineering 100 (May 2022): 107887. http://dx.doi.org/10.1016/j.compeleceng.2022.107887.

30

Xie, Zhihua, Yi Li, Jieyi Niu, Ling Shi, Zhipeng Wang, and Guoyu Lu. "Hyperspectral face recognition based on sparse spectral attention deep neural networks." Optics Express 28, no. 24 (2020): 36286. http://dx.doi.org/10.1364/oe.404793.

31

Liu, Wei, Yue Yang, and Longsheng Wei. "Weather Recognition of Street Scene Based on Sparse Deep Neural Networks." Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 3 (2017): 403–8. http://dx.doi.org/10.20965/jaciii.2017.p0403.

Abstract:
Recognizing different weather conditions is a core component of many different applications of outdoor video analysis and computer vision. Street analysis performance, including detecting street objects, detecting road lines, recognizing street signs, etc., varies greatly with weather, so modeling based on weather recognition is the key resolution in this field. Features derived from intrinsic properties of different weather conditions contribute to successful classification. We first propose using deep learning features from convolutional neural networks (CNN) for fine recognition. In order
32

Zhao, Yao, Qingsong Liu, He Tian, Bingo Wing-Kuen Ling, and Zhe Zhang. "DeepRED Based Sparse SAR Imaging." Remote Sensing 16, no. 2 (2024): 212. http://dx.doi.org/10.3390/rs16020212.

Abstract:
The integration of deep neural networks into sparse synthetic aperture radar (SAR) imaging is explored to enhance SAR imaging performance and reduce the system’s sampling rate. However, the scarcity of training samples and mismatches between the training data and the SAR system pose significant challenges to the method’s further development. In this paper, we propose a novel SAR imaging approach based on deep image prior powered by RED (DeepRED), enabling unsupervised SAR imaging without the need for additional training data. Initially, DeepRED is introduced as the regularization technique wit
33

El-Yabroudi, Mohammad Z., Ikhlas Abdel-Qader, Bradley J. Bazuin, Osama Abudayyeh, and Rakan C. Chabaan. "Guided Depth Completion with Instance Segmentation Fusion in Autonomous Driving Applications." Sensors 22, no. 24 (2022): 9578. http://dx.doi.org/10.3390/s22249578.

Abstract:
Pixel-level depth information is crucial to many applications, such as autonomous driving, robotics navigation, 3D scene reconstruction, and augmented reality. However, depth information, which is usually acquired by sensors such as LiDAR, is sparse. Depth completion is a process that predicts missing pixels’ depth information from a set of sparse depth measurements. Most of the ongoing research applies deep neural networks on the entire sparse depth map and camera scene without utilizing any information about the available objects, which results in more complex and resource-demanding networks
34

Liu, Xiao, Wenbin Li, Jing Huo, Lili Yao, and Yang Gao. "Layerwise Sparse Coding for Pruned Deep Neural Networks with Extreme Compression Ratio." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 4900–4907. http://dx.doi.org/10.1609/aaai.v34i04.5927.

Abstract:
Deep neural network compression is important and increasingly developed especially in resource-constrained environments, such as autonomous drones and wearable devices. Basically, we can easily and largely reduce the number of weights of a trained deep model by adopting a widely used model compression technique, e.g., pruning. In this way, two kinds of data are usually preserved for this compressed model, i.e., non-zero weights and meta-data, where meta-data is employed to help encode and decode these non-zero weights. Although we can obtain an ideally small number of non-zero weights through
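The "non-zero weights plus meta-data" storage described in this abstract is exactly what formats such as CSR provide; the small SciPy example below shows which arrays play the role of weights and meta-data. It is generic background, not the layerwise sparse-coding scheme of the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix

# A pruned weight matrix: most entries are exactly zero after compression.
w = np.array([[0.0, 0.3, 0.0, 0.0],
              [0.0, 0.0, 0.0, -0.7],
              [0.5, 0.0, 0.0, 0.0]], dtype=np.float32)

w_csr = csr_matrix(w)
print("values :", w_csr.data)     # the non-zero weights
print("indices:", w_csr.indices)  # per-value column indices (meta-data)
print("indptr :", w_csr.indptr)   # row pointers (meta-data)
```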
35

Kohjima, Masahiro. "Shuffled Deep Regression." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (2024): 13238–45. http://dx.doi.org/10.1609/aaai.v38i12.29224.

Abstract:
Shuffled regression is the problem of learning regression models from shuffled data that consists of a set of input features and a set of target outputs where the correspondence between the input and output is unknown. This study proposes a new deep learning method for shuffled regression called Shuffled Deep Regression (SDR). We derive the sparse and stochastic variant of the Expectation-Maximization algorithm for SDR that iteratively updates discrete latent variables and the parameters of neural networks. The effectiveness of the proposal is confirmed by benchmark data experiments.
36

Östling, Robert. "Part of Speech Tagging: Shallow or Deep Learning?" Northern European Journal of Language Technology 5 (June 19, 2018): 1–15. http://dx.doi.org/10.3384/nejlt.2000-1533.1851.

Abstract:
Deep neural networks have advanced the state of the art in numerous fields, but they generally suffer from low computational efficiency and the level of improvement compared to more efficient machine learning models is not always significant. We perform a thorough PoS tagging evaluation on the Universal Dependencies treebanks, pitting a state-of-the-art neural network approach against UDPipe and our sparse structured perceptron-based tagger, efselab. In terms of computational efficiency, efselab is three orders of magnitude faster than the neural network model, while being more accurate than e
37

Phan, Huy, Miao Yin, Yang Sui, Bo Yuan, and Saman Zonouz. "CSTAR: Towards Compact and Structured Deep Neural Networks with Adversarial Robustness." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (2023): 2065–73. http://dx.doi.org/10.1609/aaai.v37i2.25299.

Abstract:
Model compression and model defense for deep neural networks (DNNs) have been extensively and individually studied. Considering the co-importance of model compactness and robustness in practical applications, several prior works have explored to improve the adversarial robustness of the sparse neural networks. However, the structured sparse models obtained by the existing works suffer severe performance degradation for both benign and robust accuracy, thereby causing a challenging dilemma between robustness and structuredness of compact DNNs. To address this problem, in this paper, we propose
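Structured sparsity, in contrast to irregular element-wise sparsity, removes whole groups such as filters or channels. The toy NumPy sketch below zeroes entire output channels by their ℓ2 norm; it is a generic illustration of structured pruning, not the CSTAR method.

```python
import numpy as np

# Toy convolution weights: (out_channels, in_channels, k, k); sizes are illustrative.
rng = np.random.default_rng(2)
w = rng.normal(size=(16, 8, 3, 3)).astype(np.float32)

# Score each output channel by its L2 norm and zero out the weakest half.
norms = np.linalg.norm(w.reshape(w.shape[0], -1), axis=1)
keep = norms >= np.median(norms)
w[~keep] = 0.0  # entire filters removed -> structured, hardware-friendly sparsity

print("channels kept:", int(keep.sum()), "of", w.shape[0])
```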
38

Yao, Zhongtian, Kejie Huang, Haibin Shen, and Zhaoyan Ming. "Deep Neural Network Acceleration With Sparse Prediction Layers." IEEE Access 8 (2020): 6839–48. http://dx.doi.org/10.1109/access.2020.2963941.

39

Rueckauer, Bodo, Connor Bybee, Ralf Goettsche, Yashwardhan Singh, Joyesh Mishra, and Andreas Wild. "NxTF: An API and Compiler for Deep Spiking Neural Networks on Intel Loihi." ACM Journal on Emerging Technologies in Computing Systems 18, no. 3 (2022): 1–22. http://dx.doi.org/10.1145/3501770.

Abstract:
Spiking Neural Networks (SNNs) are a promising paradigm for efficient event-driven processing of spatio-temporally sparse data streams. SNNs have inspired the design of, and can take advantage of, the emerging class of neuromorphic processors like Intel Loihi. These novel hardware architectures expose a variety of constraints that affect firmware, compiler, and algorithm development alike. To enable rapid and flexible development of SNN algorithms on Loihi, we developed NxTF: a programming interface derived from Keras and a compiler optimized for mapping deep convolutional
40

Chen, Yuanyuan, and Zhang Yi. "Adaptive sparse dropout: Learning the certainty and uncertainty in deep neural networks." Neurocomputing 450 (August 2021): 354–61. http://dx.doi.org/10.1016/j.neucom.2021.04.047.

41

Chen, Jiayu, Xiang Li, Vince D. Calhoun, et al. "Sparse deep neural networks on imaging genetics for schizophrenia case–control classification." Human Brain Mapping 42, no. 8 (2021): 2556–68. http://dx.doi.org/10.1002/hbm.25387.

42

Jurdana, Vedran. "Deep Neural Networks for Estimating Regularization Parameter in Sparse Time–Frequency Reconstruction." Technologies 12, no. 12 (2024): 251. https://doi.org/10.3390/technologies12120251.

Abstract:
Time–frequency distributions (TFDs) are crucial for analyzing non-stationary signals. Compressive sensing (CS) in the ambiguity domain offers an approach for TFD reconstruction with high performance, but selecting the optimal regularization parameter for various signals remains challenging. Traditional methods for parameter selection, including manual and experimental approaches, as well as existing optimization procedures, can be imprecise and time-consuming. This study introduces a novel approach using deep neural networks (DNNs) to predict regularization parameters based on Wigner–Ville dis
43

Liu, Jingjing, Lingjin Huang, Manlong Feng, Aiying Guo, Luqiao Yin, and Jianhua Zhang. "IESSP: Information Extraction-Based Sparse Stripe Pruning Method for Deep Neural Networks." Sensors 25, no. 7 (2025): 2261. https://doi.org/10.3390/s25072261.

Abstract:
Network pruning is a deep learning model compression technique aimed at reducing model storage requirements and decreasing computational resource consumption. However, mainstream pruning techniques often encounter challenges such as limited precision in feature selection and a diminished feature extraction capability. To address these issues, we propose an information extraction-based sparse stripe pruning (IESSP) method. This method introduces an information extraction module (IEM), which enhances stripe selection through a mask-based mechanism, promoting inter-layer interactions and directin
44

Andrade, Jeffery, George Alvarez, and Talia Konkle. "Dissecting sparse circuits to high-level visual categories in deep neural networks." Journal of Vision 25, no. 9 (2025): 2305. https://doi.org/10.1167/jov.25.9.2305.

45

Vani, and Piyush Kumar Pareek. "Deep Multiple Instance Learning Approach for Classification in Clinical Decision Support Systems." American Journal of Business and Operations Research 10, no. 2 (2023): 52–60. http://dx.doi.org/10.54216/ajbor.100206.

Abstract:
To get around the drawbacks of conventional classification algorithms that required manual feature extraction and the high computational cost of neural networks, this paper introduces a deep convolutional neural network with multiple instance learning approaches, namely dynamic max pooling and sparse representation. For the categorization of tuberculosis lung illness, this model combines deep convolutional neural networks and multiple instance learning. The design was composed of four phases: pre-processing, instance production, feature extraction, and classification. To perform feature extrac
46

Gupta, Rajat, and Rakesh Jindal. "Impact of Too Many Neural Network Layers on Overfitting." International Journal of Computer Science and Mobile Computing 14, no. 5 (2025): 1–14. https://doi.org/10.47760/ijcsmc.2025.v14i05.001.

Abstract:
Deep neural networks have revolutionized artificial intelligence by enabling models to learn intricate data representations. However, when these networks become too deep, they risk overfitting—memorizing training data rather than learning patterns that generalize well to new inputs. Excessive complexity can lead models to capture irrelevant noise, and issues such as vanishing/exploding gradients, high computational costs, and the curse of dimensionality further complicate training deep architectures. This paper explores how neural network layers function in learning and the challenges that ari
47

Abdullah Jaber Al Hamadani, Rihab, Mahdi Mosleh, Ali Hashim Abbas Al-Sallami, and Rasool Sadeghi. "Improvement of Network Traffic Prediction in Beyond 5G Network using Sparse Decomposition and BiLSTM Neural Network." Qubahan Academic Journal 5, no. 2 (2025): 156–76. https://doi.org/10.48161/qaj.v5n2a1690.

Abstract:
Companies providing telecommunication services, especially in Beyond 5G networks, are increasingly interested in traffic forecasting to improve the services provided to their users. However, forecasting network traffic is challenging due to traffic data's dynamic and non-stationary nature. This study proposes an effective deep learning-based traffic prediction technique using BiLSTM (Bidirectional Long Short-Term Memory). The proposed method begins with preprocessing using K-SVD (K-means Singular Value Decomposition) to reduce dimensionality and enhance data representation. Next, sparse featur
48

Lin, Chun-Hui, Cheng-Jian Lin, Yu-Chi Li, and Shyh-Hau Wang. "Using Generative Adversarial Networks and Parameter Optimization of Convolutional Neural Networks for Lung Tumor Classification." Applied Sciences 11, no. 2 (2021): 480. http://dx.doi.org/10.3390/app11020480.

Abstract:
Cancer is the leading cause of death worldwide. Lung cancer, especially, caused the most deaths in 2018 according to the World Health Organization. Early diagnosis and treatment can considerably reduce mortality. To provide an efficient diagnosis, deep learning is overtaking conventional machine learning techniques and is increasingly being used in computer-aided design systems. However, a sparse medical data set and network parameter tuning process cause network training difficulty and cost longer experimental time. In the present study, the generative adversarial network was proposed to gener
49

Lui, Hugo F. S., and William R. Wolf. "Construction of reduced-order models for fluid flows using deep feedforward neural networks." Journal of Fluid Mechanics 872 (June 14, 2019): 963–94. http://dx.doi.org/10.1017/jfm.2019.358.

Abstract:
We present a numerical methodology for construction of reduced-order models (ROMs) of fluid flows through the combination of flow modal decomposition and regression analysis. Spectral proper orthogonal decomposition is applied to reduce the dimensionality of the model and, at the same time, filter the proper orthogonal decomposition temporal modes. The regression step is performed by a deep feedforward neural network (DNN), and the current framework is implemented in a context similar to the sparse identification of nonlinear dynamics algorithm. A discussion on the optimization of the DNN hype
50

Isaac, Caleb, and Kourosh Zareinia. "Effect of excessive neural network layers on overfitting." World Journal of Advanced Research and Reviews 16, no. 2 (2022): 1246–57. https://doi.org/10.30574/wjarr.2022.16.2.1247.

Abstract:
Artificial intelligence has been transformed through deep neural networks into models that can learn complex representations from our data. But an excessive number of layers in the neural networks may create overfitting; that is, the model can memorize training data instead of generalizing to newer inputs. Networks that are too complex tend to 'overfit' on noise in addition to meaningful patterns. Vanishing and exploding gradients, increased computational cost and the curse of dimensionality are the factors that make this issue worse and deeper networks harder to train effectively. In this paper