Academic literature on the topic 'Neural Network Pruning'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural Network Pruning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Neural Network Pruning"

1

Jorgensen, Thomas D., Barry P. Haynes, and Charlotte C. F. Norlund. "Pruning Artificial Neural Networks Using Neural Complexity Measures." International Journal of Neural Systems 18, no. 05 (2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the…
2

Ganguli, Tushar, and Edwin K. P. Chong. "Activation-Based Pruning of Neural Networks." Algorithms 17, no. 1 (2024): 48. http://dx.doi.org/10.3390/a17010048.

Abstract:
We present a novel technique for pruning called activation-based pruning to effectively prune fully connected feedforward neural networks for multi-object classification. Our technique is based on the number of times each neuron is activated during model training. We compare the performance of activation-based pruning with a popular pruning method: magnitude-based pruning. Further analysis demonstrated that activation-based pruning can be considered a dimensionality reduction technique, as it leads to a sparse low-rank matrix approximation for each hidden layer of the neural network. We also…
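
To make the contrast in the abstract above concrete, here is a minimal NumPy sketch (not the authors' implementation) comparing magnitude-based pruning with an activation-count heuristic; the weight layout and the `activation_counts` bookkeeping are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Classic magnitude-based pruning: zero out the smallest |w|."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    threshold = np.sort(np.abs(W), axis=None)[k - 1]
    return np.where(np.abs(W) <= threshold, 0.0, W)

def activation_prune(W, activation_counts, sparsity):
    """Activation-based pruning sketch: remove the least-active neurons.

    activation_counts[j] counts how often hidden neuron j fired during
    training; zeroing column j removes that neuron's incoming weights,
    which yields the sparse low-rank structure the abstract mentions.
    """
    n_prune = int(len(activation_counts) * sparsity)
    victims = np.argsort(activation_counts)[:n_prune]
    W_pruned = W.copy()
    W_pruned[:, victims] = 0.0
    return W_pruned
```
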
3

Koene, Randal A., and Yoshio Takane. "Discriminant Component Pruning: Regularization and Interpretation of Multilayered Backpropagation Networks." Neural Computation 11, no. 3 (1999): 783–802. http://dx.doi.org/10.1162/089976699300016665.

Abstract:
Neural networks are often employed as tools in classification tasks. The use of large networks increases the likelihood of the task's being learned, although it may also lead to increased complexity. Pruning is an effective way of reducing the complexity of large networks. We present discriminant components pruning (DCP), a method of pruning matrices of summed contributions between layers of a neural network. Attempting to interpret the underlying functions learned by the network can be aided by pruning the network. Generalization performance should be maintained at its optimal level following…
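
One hedged way to read "pruning matrices of summed contributions between layers" is as a reduced-rank approximation of the product of consecutive weight matrices. The sketch below rests on that linear-contribution assumption and is an illustration of the idea, not the paper's exact DCP procedure.

```python
import numpy as np

def reduced_rank_contributions(W1, W2, rank):
    """Keep only the leading `rank` discriminant directions of W2 @ W1.

    W1: (hidden, inputs), W2: (outputs, hidden). Their product is the
    matrix of summed linear contributions from inputs to outputs; the
    truncated SVD prunes its redundant directions.
    """
    C = W2 @ W1
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
```
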
4

Ling, Xing. "Summary of Deep Neural Network Pruning Algorithms." Applied and Computational Engineering 8, no. 1 (2023): 352–61. http://dx.doi.org/10.54254/2755-2721/8/20230182.

Abstract:
As deep learning has rapidly progressed in the 21st century, artificial neural networks have been continuously enhanced with deeper structures and larger parameter sets to tackle increasingly complex problems. However, this development also brings about the drawbacks of high computational and storage costs, which limit the application of neural networks in some practical scenarios. As a result, in recent years, more researchers have suggested and implemented network pruning techniques to decrease neural networks' computational and storage expenses while retaining the same level of accuracy…
5

Gong, Ziyi, Huifu Zhang, Hao Yang, Fangjun Liu, and Fan Luo. "A Review of Neural Network Lightweighting Techniques." Innovation & Technology Advances 1, no. 2 (2024): 1–16. http://dx.doi.org/10.61187/ita.v1i2.36.

Abstract:
The application of portable devices based on deep learning has become increasingly widespread, which has made the deployment of complex neural networks on embedded devices a hot research topic. Neural network lightweighting is one of the key technologies for applying neural networks to embedded devices. This paper elaborates and analyzes neural network lightweighting techniques from two aspects: model pruning and network structure design. For model pruning, a comparison of methods from different periods is conducted, highlighting their advantages and limitations. Regarding network structure…
6

Guo, Changyi, and Ping Li. "Hybrid Pruning Method Based on Convolutional Neural Network Sensitivity and Statistical Threshold." Journal of Physics: Conference Series 2171, no. 1 (2022): 012055. http://dx.doi.org/10.1088/1742-6596/2171/1/012055.

Abstract:
The hybrid pruning algorithm can not only ensure the precision of the network but also achieve a good balance between pruning ratio and computation. However, traditional pruning algorithms use either coarse-grained or fine-grained pruning, which face a trade-off between pruning rate and computation amount. To this end, this paper presents a hybrid pruning method based on sensitivity and a statistical threshold. Firstly, coarse-grained pruning is carried out on the network, and a fast sensitivity test is conducted on the convolutional layers of the network to determine the channels…
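
As a rough sketch of the two stages the abstract names, the code below pairs a per-layer sensitivity probe (coarse-grained) with a mean-plus-k-sigma magnitude threshold (fine-grained). The exact statistical criterion and the `evaluate_pruned` callable are assumptions, not the paper's method.

```python
import numpy as np

def layer_sensitivity(evaluate_pruned, baseline_acc, ratios=(0.1, 0.3, 0.5)):
    """Coarse-grained probe: accuracy drop vs. channel-pruning ratio.

    evaluate_pruned(r) is a user-supplied callable returning validation
    accuracy with a fraction r of the layer's channels temporarily removed.
    """
    return {r: baseline_acc - evaluate_pruned(r) for r in ratios}

def statistical_threshold_prune(W, k=1.0):
    """Fine-grained step: zero weights below mean(|w|) + k * std(|w|).

    The mean-plus-k-sigma form is an assumed instance of a 'statistical
    threshold'; k tunes how aggressive the pruning is.
    """
    magnitude = np.abs(W)
    cutoff = magnitude.mean() + k * magnitude.std()
    return np.where(magnitude < cutoff, 0.0, W)
```
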
7

Zou, Yunhuan. "Research on Pruning Methods for MobileNet Convolutional Neural Network." Highlights in Science, Engineering and Technology 81 (January 26, 2024): 232–36. http://dx.doi.org/10.54097/a742e326.

Abstract:
This paper comprehensively reviews pruning methods for MobileNet convolutional neural networks. MobileNet is a lightweight convolutional neural network suitable for resource-constrained environments such as mobile devices. Various pruning methods can be applied to reduce the model's storage space and computational complexity, including channel pruning, kernel pruning, and weight pruning. Channel pruning removes unimportant channels to reduce redundant parameters and computations in the model, while kernel pruning reduces redundant calculations by pruning convolutional kernels. Weight pruning…
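
Of the three families listed in the abstract, channel pruning is the easiest to show compactly. Below is a hedged sketch using the common L1-norm filter criterion; it is a generic illustration, not code from the methods this review covers.

```python
import numpy as np

def channel_prune_l1(conv_w, prune_ratio):
    """Channel pruning sketch: drop output channels with smallest L1 norm.

    conv_w: (out_channels, in_channels, kh, kw). Returns the indices of
    the kept channels and the pruned weight tensor; downstream layers
    must be sliced consistently.
    """
    norms = np.abs(conv_w).sum(axis=(1, 2, 3))   # one importance score per channel
    n_keep = max(1, int(round(len(norms) * (1.0 - prune_ratio))))
    keep = np.sort(np.argsort(norms)[-n_keep:])  # keep the strongest channels
    return keep, conv_w[keep]
```
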
8

Liang, Ling, Lei Deng, Yueling Zeng, et al. "Crossbar-Aware Neural Network Pruning." IEEE Access 6 (2018): 58324–37. http://dx.doi.org/10.1109/access.2018.2874823.

9

Tsai, Feng-Sheng, Yi-Li Shih, Chin-Tzong Pang, and Sheng-Yi Hsu. "Formulation of Pruning Maps with Rhythmic Neural Firing." Mathematics 7, no. 12 (2019): 1247. http://dx.doi.org/10.3390/math7121247.

Abstract:
Rhythmic neural firing is thought to underlie the operation of neural function. This triggers the construction of dynamical network models to investigate how the rhythms interact with each other. Recently, an approach concerning neural path pruning has been proposed in a dynamical network system, in which critical neuronal connections are identified and adjusted according to the pruning maps, enabling neurons to produce rhythmic, oscillatory activity in simulation. Here, we construct a sort of homomorphic functions based on different rhythms of neural firing in network dynamics. Armed with the…
10

Wang, Miao, Xu Yang, Yunchong Qian, et al. "Adaptive Neural Network Structure Optimization Algorithm Based on Dynamic Nodes." Current Issues in Molecular Biology 44, no. 2 (2022): 817–32. http://dx.doi.org/10.3390/cimb44020056.

Abstract:
Large-scale artificial neural networks have many redundant structures, making the network fall into the issue of local optimization and extended training time. Moreover, existing neural network topology optimization algorithms have the disadvantage of many calculations and complex network structure modeling. We propose a Dynamic Node-based neural network Structure optimization algorithm (DNS) to handle these issues. DNS consists of two steps: the generation step and the pruning step. In the generation step, the network generates hidden layers layer by layer until accuracy reaches the threshold…

Dissertations / Theses on the topic "Neural Network Pruning"

1

Scalco, Alberto. "Feature Selection Using Neural Network Pruning." Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/14382.

Abstract:
Feature selection is a well-known technique for data preprocessing, with the purpose of removing redundant and irrelevant information, with the benefits, among others, of improved generalization and a reduced curse of dimensionality. This paper investigates an approach based on a trained neural network model, where features are selected by iteratively removing a node in the input layer. This pruning process comprises a node selection criterion and a subsequent weight correction: after a node elimination, the remaining weights are adjusted so that the overall network behaviour does not…
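
One plausible reading of the "weight correction" step described above is a least-squares adjustment that keeps the first layer's pre-activations close to their original values after an input node is removed. The sketch below illustrates that reading; the criterion is an assumption, not the thesis' exact rule.

```python
import numpy as np

def remove_input_with_correction(W, X, drop):
    """Remove input feature `drop`, then correct the remaining weights.

    W: (n_hidden, n_features) first-layer weights; X: (n_samples,
    n_features) data. The corrected weights minimize the Frobenius gap
    between new and original pre-activations, so the network's behaviour
    on X is approximately preserved.
    """
    target = X @ W.T                                # original pre-activations
    keep = [j for j in range(X.shape[1]) if j != drop]
    W_new, *_ = np.linalg.lstsq(X[:, keep], target, rcond=None)
    return W_new.T, keep
```
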
2

Labarge, Isaac E. "Neural Network Pruning for ECG Arrhythmia Classification." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2136.

Abstract:
Convolutional Neural Networks (CNNs) are a widely accepted means of solving complex classification and detection problems in imaging and speech. However, problem complexity often leads to considerable increases in computation and parameter storage costs. Many successful attempts have been made in effectively reducing these overheads by pruning and compressing large CNNs with only a slight decline in model accuracy. In this study, two pruning methods are implemented and compared on the CIFAR-10 database and an ECG arrhythmia classification task. Each pruning method employs a pruning phase…
3

Brantley, Kiante. "BCAP: An Artificial Neural Network Pruning Technique to Reduce Overfitting." Thesis, University of Maryland, Baltimore County, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10140605.

Abstract:
Determining the optimal size of a neural network is complicated. Neural networks, with many free parameters, can be used to solve very complex problems. However, these neural networks are susceptible to overfitting. BCAP (Brantley-Clark Artificial Neural Network Pruning Technique) addresses overfitting by combining duplicate neurons in a neural network hidden layer, thereby forcing the network to learn more distinct features. We compare hidden units using the cosine similarity, and combine those that are similar with each other within a threshold ε. By doing so the co-adaptation of the…
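
A minimal sketch of the merging idea BCAP describes, assuming plain NumPy weight matrices: hidden units whose incoming-weight vectors are nearly parallel (cosine similarity within a threshold ε of 1) are treated as duplicates, and the duplicate's outgoing weights are folded into the survivor. This illustrates the idea, not the thesis' implementation.

```python
import numpy as np

def merge_similar_neurons(W_in, W_out, eps=0.05):
    """Merge near-duplicate hidden units (cosine similarity > 1 - eps).

    W_in: (n_hidden, n_in) incoming weights; W_out: (n_out, n_hidden)
    outgoing weights. W_out is modified in place as duplicates are folded.
    """
    keep = list(range(W_in.shape[0]))
    i = 0
    while i < len(keep):
        wi = W_in[keep[i]]
        j = i + 1
        while j < len(keep):
            wj = W_in[keep[j]]
            cos = wi @ wj / (np.linalg.norm(wi) * np.linalg.norm(wj) + 1e-12)
            if cos > 1.0 - eps:
                W_out[:, keep[i]] += W_out[:, keep[j]]  # fold duplicate's output in
                keep.pop(j)                             # delete the duplicate unit
            else:
                j += 1
        i += 1
    return W_in[keep], W_out[:, keep]
```
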
4

Hubens, Nathan. "Towards lighter and faster deep neural networks with parameter pruning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS025.

Abstract:
Since their resurgence in 2012, deep neural networks have become ubiquitous in most disciplines of artificial intelligence, such as image recognition, speech processing, and natural language processing. However, over the last few years, neural networks have grown exponentially deep, involving more and more parameters. Today, it is not unusual to encounter architectures involving several billion parameters, whereas they most often contained thousands less than ten years ago…
5

Santacroce, Michael. "Neural Classification of Malware-As-Video with Considerations for In-Hardware Inferencing." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1554216974556897.

6

Dupont, Robin. "Deep Neural Network Compression for Visual Recognition." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS565.

Abstract:
Thanks to the miniaturization of electronics, embedded devices have become ubiquitous since the 2010s, performing various tasks all around us. As their use grows, so does the demand for devices that process data and make complex decisions efficiently. Deep neural networks are powerful tools for this purpose, but they are often too heavy for embedded devices. It is therefore imperative to compress these networks without compromising their performance. This thesis introduces two novel methods centred on pruning…
7

Prono, Luciano. "Methods and Applications for Low-power Deep Neural Networks on Edge Devices." Doctoral thesis, Politecnico di Torino, 2023. https://hdl.handle.net/11583/2976593.

8

Zullich, Marco. "Un'analisi delle Tecniche di Potatura in Reti Neurali Profonde: Studi Sperimentali ed Applicazioni." Doctoral thesis, Università degli Studi di Trieste, 2023. https://hdl.handle.net/11368/3041099.

Abstract:
Pruning, in the context of Machine Learning, denotes the act of removing parameters from parametric models such as linear models, decision trees, and Artificial Neural Networks (ANNs). Pruning a model can be motivated by numerous needs, first among them the reduction of its size and memory footprint, possibly without harming the model's final accuracy. The scientific community's interest in pruning ANNs has grown substantially over the last decade, owing to the equally substantial growth in the size of these models…
9

Yvinec, Edouard. "Efficient Neural Networks : Post Training Pruning and Quantization." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS581.

Abstract:
Deep neural networks have become the most widely used models, whether in computer vision or natural language processing. Since the leap triggered by the use of modern computers in 2012, the size of these models has only increased, both in memory footprint and in computational cost. This phenomenon has greatly limited the industrial deployment of these models. The case of generative AI in particular, and more specifically of language models such as GPT, has taken this problem to a whole new dimension. Indeed, these…
10

Brigandì, Camilla. "Utilizzo della omologia persistente nelle reti neurali." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Abstract:
The aim of this thesis is to introduce some applications of algebraic topology, and in particular of persistent homology theory, to neural networks. To this end, the first chapter of the dissertation introduces the concepts of artificial neuron and artificial neural network. Particular attention is paid to the training of a network, also explaining problems and characteristics related to it, such as the overfitting problem and the ability to generalize. The same chapter also presents the concept of similarity between two networks and the…

Books on the topic "Neural Network Pruning"

1

Hong, X. A Givens rotation based fast backward elimination algorithm for RBF neural network pruning. University of Sheffield, Dept. of Automatic Control and Systems Engineering, 1996.

2

Jorgensen, Charles C., and Ames Research Center, eds. Toward a more robust pruning procedure for MLP networks. National Aeronautics and Space Administration, Ames Research Center, 1998.

3

Multiple Comparison Pruning of Neural Networks. Storming Media, 1999.


Book chapters on the topic "Neural Network Pruning"

1

Chen, Jinting, Zhaocheng Zhu, Cheng Li, and Yuming Zhao. "Self-Adaptive Network Pruning." In Neural Information Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36708-4_15.

2

Gridin, Ivan. "Model Pruning." In Automated Deep Learning Using Neural Network Intelligence. Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8149-9_6.

3

Pei, Songwen, Jie Luo, and Sheng Liang. "DRP: Discrete Rank Pruning for Neural Network." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21395-3_16.

4

Widmann, Thomas, Florian Merkle, Martin Nocker, and Pascal Schöttle. "Pruning for Power: Optimizing Energy Efficiency in IoT with Neural Network Pruning." In Engineering Applications of Neural Networks. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-34204-2_22.

5

Gong, Saijun, Lin Chen, and Zhicheng Dong. "Neural Network Pruning via Genetic Wavelet Channel Search." In Neural Information Processing. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92270-2_30.

6

Li, Wenrui, and Jo Plested. "Pruning Convolutional Neural Network with Distinctiveness Approach." In Communications in Computer and Information Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36802-9_48.

7

Wu, Jia-Liang, Haopu Shang, Wenjing Hong, and Chao Qian. "Robust Neural Network Pruning by Cooperative Coevolution." In Lecture Notes in Computer Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14714-2_32.

8

Yang, Yang, and Baoliang Lu. "Structure Pruning Strategies for Min-Max Modular Network." In Advances in Neural Networks — ISNN 2005. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427391_103.

9

Zhao, Feifei, Tielin Zhang, Yi Zeng, and Bo Xu. "Towards a Brain-Inspired Developmental Neural Network by Adaptive Synaptic Pruning." In Neural Information Processing. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70093-9_19.

10

Pei, Songwen, Yusheng Wu, and Meikang Qiu. "Neural Network Compression and Acceleration by Federated Pruning." In Algorithms and Architectures for Parallel Processing. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60239-0_12.


Conference papers on the topic "Neural Network Pruning"

1

Shang, Haopu, Jia-Liang Wu, Wenjing Hong, and Chao Qian. "Neural Network Pruning by Cooperative Coevolution." In Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/667.

Abstract:
Neural network pruning is a popular model compression method which can significantly reduce the computing cost with negligible loss of accuracy. Recently, filters are often pruned directly by designing proper criteria or using auxiliary modules to measure their importance, which, however, requires expertise and trial-and-error. Due to the advantage of automation, pruning by evolutionary algorithms (EAs) has attracted much attention, but the performance is limited for deep neural networks as the search space can be quite large. In this paper, we propose a new filter pruning algorithm, CCEP, by cooperative coevolution…
2

Wang, Huan, Can Qin, Yue Bai, Yulun Zhang, and Yun Fu. "Recent Advances on Neural Network Pruning at Initialization." In Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/786.

Abstract:
Neural network pruning typically removes connections or neurons from a pretrained converged model, while a new pruning paradigm, pruning at initialization (PaI), attempts to prune a randomly initialized network. This paper offers the first survey concentrated on this emerging pruning fashion. We first introduce a generic formulation of neural network pruning, followed by the major classic pruning topics. Then, as the main body of this paper, a thorough and structured literature review of PaI methods is presented, consisting of two major tracks (sparse training and sparse selection). Finally, we…
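
The "generic formulation of neural network pruning" that the survey opens with is usually written as a constrained optimization over a binary mask; a common rendering (our notation, not necessarily the paper's) is:

```latex
\min_{\mathbf{w},\,\mathbf{m}} \; \mathcal{L}\big(\mathbf{m} \odot \mathbf{w};\, \mathcal{D}\big)
\quad \text{s.t.} \quad \mathbf{m} \in \{0,1\}^{|\mathbf{w}|},\ \|\mathbf{m}\|_0 \le \kappa
```

Here w are the network weights, m the pruning mask, ⊙ the element-wise product, D the training set, and κ the parameter budget; pruning at initialization commits to m before w is trained, whereas classic pruning chooses m after convergence.
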
3

Zhao, Chenglong, Bingbing Ni, Jian Zhang, Qiwei Zhao, Wenjun Zhang, and Qi Tian. "Variational Convolutional Neural Network Pruning." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00289.

4

Cai, Xingyu, Jinfeng Yi, Fan Zhang, and Sanguthevar Rajasekaran. "Adversarial Structured Neural Network Pruning." In CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. ACM, 2019. http://dx.doi.org/10.1145/3357384.3358150.

5

Lin, Chih-Chia, Chia-Yin Liu, Chih-Hsuan Yen, Tei-Wei Kuo, and Pi-Cheng Hsiu. "Intermittent-Aware Neural Network Pruning." In 2023 60th ACM/IEEE Design Automation Conference (DAC). IEEE, 2023. http://dx.doi.org/10.1109/dac56929.2023.10247825.

6

Shahhosseini, Sina, Ahmad Albaqsami, Masoomeh Jasemi, and Nader Bagherzadeh. "Partition Pruning: Parallelization-Aware Pruning for Dense Neural Networks." In 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP). IEEE, 2020. http://dx.doi.org/10.1109/pdp50117.2020.00053.

7

Jeong, Taehee, Ehsam Ghasemi, Jorn Tuyls, Elliott Delaye, and Ashish Sirasao. "Neural network pruning and hardware acceleration." In 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC). IEEE, 2020. http://dx.doi.org/10.1109/ucc48980.2020.00069.

8

Xu, Sheng, Anran Huang, Lei Chen, and Baochang Zhang. "Convolutional Neural Network Pruning: A Survey." In 2020 39th Chinese Control Conference (CCC). IEEE, 2020. http://dx.doi.org/10.23919/ccc50068.2020.9189610.

9

Molchanov, Pavlo, Arun Mallya, Stephen Tyree, Iuri Frosio, and Jan Kautz. "Importance Estimation for Neural Network Pruning." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.01152.

10

Setiono, R., and A. Gaweda. "Neural network pruning for function approximation." In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium. IEEE, 2000. http://dx.doi.org/10.1109/ijcnn.2000.859435.


Reports on the topic "Neural Network Pruning"

1

Guan, Hui, Xipeng Shen, Seung-Hwan Lim, and Robert M. Patton. Composability-Centered Convolutional Neural Network Pruning. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1427608.
