Academic literature on the topic 'Deep neural networks architecture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep neural networks architecture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Deep neural networks architecture"

1

Laveglia, Vincenzo, and Edmondo Trentin. "Downward-Growing Neural Networks." Entropy 25, no. 5 (2023): 733. http://dx.doi.org/10.3390/e25050733.

Full text
Abstract:
A major issue in the application of deep learning is the definition of a proper architecture for the learning machine at hand, in such a way that the model is neither excessively large (which results in overfitting the training data) nor too small (which limits the learning and modeling capabilities of the automatic learner). Facing this issue boosted the development of algorithms for automatically growing and pruning the architectures as part of the learning process. The paper introduces a novel approach to growing the architecture of deep neural networks, called downward-growing neural networks …
APA, Harvard, Vancouver, ISO, and other styles
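The growing-and-pruning idea summarized in the entry above can be made concrete with a generic sketch: widen a hidden layer whenever the validation loss plateaus. This is an illustrative stand-in under assumed data, sizes, and thresholds, not the paper's downward-growing algorithm.

```python
# Generic architecture-growing sketch (not the paper's downward-growing
# algorithm): widen the hidden layer whenever validation loss plateaus.
# Data, sizes, thresholds, and the growth trigger are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
Xtr, ytr, Xval, yval = X[:160], y[:160], X[160:], y[160:]

W1 = rng.normal(scale=0.5, size=(4, 2))       # start deliberately small
W2 = rng.normal(scale=0.5, size=(2, 1))

def forward(X, W1, W2):
    H = np.tanh(X @ W1)                       # hidden activations
    P = 1.0 / (1.0 + np.exp(-(H @ W2)))       # sigmoid output
    return H, P

best = np.inf
for epoch in range(300):
    H, P = forward(Xtr, W1, W2)
    G = (P - ytr) / len(Xtr)                  # dBCE/dlogits for sigmoid output
    W1 -= 0.5 * Xtr.T @ ((G @ W2.T) * (1.0 - H ** 2))
    W2 -= 0.5 * H.T @ G
    val = float(np.mean((forward(Xval, W1, W2)[1] - yval) ** 2))
    if val < best - 1e-4:
        best = val                            # still improving: keep training
    elif epoch % 50 == 49:                    # plateau: grow the architecture
        W1 = np.hstack([W1, rng.normal(scale=0.1, size=(4, 1))])
        W2 = np.vstack([W2, rng.normal(scale=0.1, size=(1, 1))])
        print(f"epoch {epoch}: hidden layer grown to {W1.shape[1]} units")
```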
2

Jung, Suk-Hwan, and Yong-Joo Chung. "Sound event detection using deep neural networks." TELKOMNIKA Telecommunication, Computing, Electronics and Control 18, no. 5 (2020): 2587–96. https://doi.org/10.12928/TELKOMNIKA.v18i5.14246.

Full text
Abstract:
We applied various architectures of deep neural networks for sound event detection and compared their performance using two different datasets. Feed forward neural network (FNN), convolutional neural network (CNN), recurrent neural network (RNN) and convolutional recurrent neural network (CRNN) were implemented using hyper-parameters optimized for each architecture and dataset. The results show that the performance of deep neural networks varied significantly depending on the learning rate, which can be optimized by conducting a series of experiments on the validation data over predetermined ranges …
APA, Harvard, Vancouver, ISO, and other styles
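The abstract above reports that detection performance hinged on the learning rate, tuned by running experiments over predetermined values on validation data. Below is a minimal, hypothetical sketch of such a sweep; the model is a logistic-regression stand-in rather than the paper's FNN/CNN/RNN/CRNN detectors.

```python
# Hypothetical learning-rate sweep on a validation split; the model is a
# logistic-regression stand-in, not the paper's FNN/CNN/RNN/CRNN detectors.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
Xtr, ytr, Xval, yval = X[:240], y[:240], X[240:], y[240:]

def train_and_score(lr, steps=200):
    w = np.zeros(10)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xtr @ w)))      # sigmoid predictions
        w -= lr * Xtr.T @ (p - ytr) / len(ytr)    # cross-entropy gradient step
    p_val = 1.0 / (1.0 + np.exp(-(Xval @ w)))
    return float(((p_val > 0.5) == yval).mean())  # validation accuracy

# Sweep predetermined learning rates and keep the best on validation data.
results = {lr: train_and_score(lr) for lr in (1e-3, 1e-2, 1e-1, 1.0)}
best_lr = max(results, key=results.get)
print(results, "-> chosen learning rate:", best_lr)
```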
3

Shapovalova, Svitlana, and Yurii Moskalenko. "Methods for Increasing the Classification Accuracy Based on Modifications of the Basic Architecture of Convolutional Neural Networks." ScienceRise 6 (December 30, 2020): 10–16. https://doi.org/10.21303/2313-8416.2020.001550.

Full text
Abstract:
Object of research: basic architectures of deep learning neural networks. Investigated problem: insufficient accuracy of solving the classification problem based on the basic architectures of deep learning neural networks. An increase in accuracy requires a significant complication of the architecture, which, in turn, leads to an increase in the required computing resources, as well as the consumption of video memory and the cost of learning/output time. Therefore, the problem arises of determining such methods for modifying basic architectures that …
APA, Harvard, Vancouver, ISO, and other styles
4

Christy, Ntambwe Kabamba, Mpuekela N. Lucie, Ntumba B. Simon, and Mbuyi M. Eugene. "Convolutional Neural Networks and Pattern Recognition: Application to Image Classification." International Journal of Computer Science Issues 16, no. 6 (2019): 10–18. https://doi.org/10.5281/zenodo.3987070.

Full text
Abstract:
This research study focuses on pattern recognition using convolutional neural networks. A deep neural network was chosen as the best option for the training process because it produced a high percentage of accuracy. We designed different architectures of convolutional neural networks in order to find the one with high accuracy of image classification and optimum bias. We used the CIFAR-10 dataset, which contains 60,000 images, to train our model on these architectures. The best architecture was able to classify images with 95.55% accuracy and an error of 0.32% using the cross-validation method. We note that …
APA, Harvard, Vancouver, ISO, and other styles
5

Gallicchio, Claudio, and Alessio Micheli. "Fast and Deep Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3898–905. http://dx.doi.org/10.1609/aaai.v34i04.5803.

Full text
Abstract:
We address the efficiency issue for the construction of a deep graph neural network (GNN). The approach exploits the idea of representing each input graph as a fixed point of a dynamical system (implemented through a recurrent neural network), and leverages a deep architectural organization of the recurrent units. Efficiency is gained by many aspects, including the use of small and very sparse networks, where the weights of the recurrent units are left untrained under the stability condition introduced in this work. This can be viewed as a way to study the intrinsic power of the architecture …
APA, Harvard, Vancouver, ISO, and other styles
6

Guo, Xinwei, Yong Wu, Jingjing Miao, and Yang Chen. "LiteGaze: Neural architecture search for efficient gaze estimation." PLOS ONE 18, no. 5 (2023): e0284814. http://dx.doi.org/10.1371/journal.pone.0284814.

Full text
Abstract:
Gaze estimation plays a critical role in human-centered vision applications such as human–computer interaction and virtual reality. Although significant progress has been made in automatic gaze estimation by deep convolutional neural networks, it is still difficult to directly deploy deep learning based gaze estimation models across different edge devices, due to the high computational cost and various resource constraints. This work proposes LiteGaze, a deep learning framework to learn architectures for efficient gaze estimation via neural architecture search (NAS). Inspired by the once-for-all …
APA, Harvard, Vancouver, ISO, and other styles
7

Паршин, А. И., М. Н. Аралов, В. Ф. Барабанов, and Н. И. Гребенникова. "Random Multi-Modal Deep Learning in the Problem of Image Recognition." ВЕСТНИК ВОРОНЕЖСКОГО ГОСУДАРСТВЕННОГО ТЕХНИЧЕСКОГО УНИВЕРСИТЕТА, no. 4 (October 20, 2021): 21–26. http://dx.doi.org/10.36622/vstu.2021.17.4.003.

Full text
Abstract:
Image recognition is one of the most difficult tasks in machine learning, demanding from the researcher both deep knowledge and substantial time and computing resources. When nonlinear and complex data are involved, various deep neural network architectures are applied, yet the choice of a neural network remains a difficult question. The principal architectures in widespread use are convolutional neural networks (CNN), recurrent neural networks (RNN), and deep neural networks (DNN). On the basis of recurrent neural networks (RNN), long short-term memory (LSTM) networks were developed …
APA, Harvard, Vancouver, ISO, and other styles
8

Ghimire, Deepak, Dayoung Kil, and Seong-heum Kim. "A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration." Electronics 11, no. 6 (2022): 945. http://dx.doi.org/10.3390/electronics11060945.

Full text
Abstract:
Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction layers that fully utilize a large amount of data. However, they often require substantial computation and memory resources while replacing traditional hand-engineered features in existing systems. In this review, to improve the efficiency of deep learning research, we focus on three aspects: quantized/binarized models, optimized architectures, and …
APA, Harvard, Vancouver, ISO, and other styles
9

Zheng, Wenqi, Yangyi Zhao, Yunfan Chen, Jinhong Park, and Hyunchul Shin. "Hardware Architecture Exploration for Deep Neural Networks." Arabian Journal for Science and Engineering 46, no. 10 (2021): 9703–12. http://dx.doi.org/10.1007/s13369-021-05455-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gottapu, Ram Deepak, and Cihan H. Dagli. "Efficient Architecture Search for Deep Neural Networks." Procedia Computer Science 168 (2020): 19–25. http://dx.doi.org/10.1016/j.procs.2020.02.246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Deep neural networks architecture"

1

Heuillet, Alexandre. "Exploring deep neural network differentiable architecture design." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG069.

Full text
Abstract:
Artificial intelligence (AI) has gained popularity in recent years, mainly owing to its successful applications in various fields such as textual data analysis, computer vision, and audio processing. The resurgence of deep learning techniques has played a central role in this success. The groundbreaking paper by Krizhevsky et al., AlexNet, narrowed the gap between human and machine performance in image classification tasks. Subsequent papers such as Xception and ResNet further strengthened deep learning …
APA, Harvard, Vancouver, ISO, and other styles
2

Jeanneret Sanmiguel, Guillaume. "Towards explainable and interpretable deep neural networks." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC229.

Full text
Abstract:
Deep neural architectures have demonstrated remarkable results in various computer vision tasks. However, their extraordinary performance comes at the expense of interpretability. As a consequence, the field of explainable AI has emerged to truly understand what these models learn and to uncover their sources of error. This thesis explores explainable algorithms in order to reveal the biases and variables used by these black-box models in the context of image classification. We therefore divide this thesis into four parts …
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Yanxi. "Efficient Neural Architecture Search with an Active Performance Predictor." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24092.

Full text
Abstract:
This thesis searches for the optimal neural architecture by minimizing a proxy of the validation loss. Existing neural architecture search (NAS) methods are used to discover the optimal neural architecture that best fits the validation examples given the up-to-date network weights. However, back-propagation with a number of validation examples can be time consuming, especially when it needs to be repeated many times in NAS. Though these intermediate validation results are invaluable, they would be wasted if we could not use them to predict the future from the past. In this thesis, we propose to approximate …
APA, Harvard, Vancouver, ISO, and other styles
4

Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.

Full text
Abstract:
Deep Learning algorithms have been remarkably successful in applications such as Automatic Speech Recognition and Machine Translation. Thus, these kinds of applications are ubiquitous in our lives and are found in a plethora of devices. These algorithms are composed of Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which have a large number of parameters and require a large amount of computations. Hence, the evaluation of DNNs is challenging due to their large memory and power requirements. RNNs are employed to solve sequence-to-sequence …
APA, Harvard, Vancouver, ISO, and other styles
5

Xiao, Yao. "Vehicle Detection in Deep Learning." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91375.

Full text
Abstract:
Computer vision techniques are becoming increasingly popular. For example, face recognition is used to help police find criminals, vehicle detection is used to prevent drivers from serious traffic accidents, and written word recognition is used to convert written words into printed words. With the rapid development of vehicle detection given the use of deep learning techniques, there are still concerns about the performance of state-of-the-art vehicle detection techniques. For example, state-of-the-art vehicle detectors are restricted by the large variation of scales. People working on vehicle …
APA, Harvard, Vancouver, ISO, and other styles
6

Fayyazifar, Najmeh. "Deep learning and neural architecture search for cardiac arrhythmias classification." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2022. https://ro.ecu.edu.au/theses/2553.

Full text
Abstract:
Cardiovascular disease (CVD) is the primary cause of mortality worldwide. Among people with CVD, cardiac arrhythmias (changes in the natural rhythm of the heart), are a leading cause of death. The clinical routine for arrhythmia diagnosis includes acquiring an electrocardiogram (ECG) and manually reviewing the ECG trace to identify the arrhythmias. However, due to the varying expertise level of clinicians, accurate diagnosis of arrhythmias with similar visual characteristics (that naturally exists in some different types of arrhythmias) can be challenging for some front-line clinicians. In addition …
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Yu-Hsin. "Architecture design for highly flexible and energy-efficient deep neural network accelerators." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117838.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 141–147). Deep neural networks (DNNs) are the backbone of modern artificial intelligence (AI). However, due to their high computational complexity and diverse shapes and sizes, dedicated accelerators that can achieve high …
APA, Harvard, Vancouver, ISO, and other styles
8

Vukotic, Vedran. "Deep Neural Architectures for Automatic Representation Learning from Multimedia Multimodal Data." Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0015/document.

Full text
Abstract:
The thesis concerns the development of deep neural architectures for analyzing textual or visual content, or a combination of the two. In general terms, the work exploits the ability of neural networks to learn abstract representations. The main contributions of the thesis are as follows: 1) Recurrent networks for spoken language understanding: different network architectures are compared on this task with respect to their ability to model the observations as well as the dependencies among the labels to be predicted. 2) Prediction of images and …
APA, Harvard, Vancouver, ISO, and other styles
9

Marti, Marco Ros. "Deep Convolutional Neural Network for Effective Image Analysis: Design and Implementation of a Deep Pixel-Wise Segmentation Architecture." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-227851.

Full text
Abstract:
This master thesis presents the process of designing and implementing a CNN-based architecture for image recognition included in a larger project in the field of fashion recommendation with deep learning. Concretely, the presented network aims to perform localization and segmentation tasks. Therefore, an accurate analysis of the most well-known localization and segmentation networks in the state of the art has been performed. Afterwards, a multi-task network performing RoI pixel-wise segmentation has been created. This proposal solves the detected weaknesses of the pre-existing networks in the …
APA, Harvard, Vancouver, ISO, and other styles
10

Bhattarai, Smrity. "Digital Architecture for real-time face detection for deep video packet inspection systems." University of Akron / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=akron1492787219112947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Deep neural networks architecture"

1

Alsuhli, Ghada, Vasilis Sakellariou, Hani Saleh, Mahmoud Al-Qutayri, Baker Mohammad, and Thanos Stouraitis. Number Systems for Deep Neural Network Architectures. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-38133-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Aggarwal, Charu C. Neural Networks and Deep Learning. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Aggarwal, Charu C. Neural Networks and Deep Learning. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29642-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Moolayil, Jojo. Learn Keras for Deep Neural Networks. Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4240-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Holden, Arun V., and V. I. Kriukov, eds. Neural Networks: Theory and Architecture. Manchester University Press, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Caterini, Anthony L., and Dong Eui Chang. Deep Neural Networks in a Mathematical Framework. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Razaghi, Hooshmand Shokri. Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis. [publisher not identified], 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fingscheidt, Tim, Hanno Gottschalk, and Sebastian Houben, eds. Deep Neural Networks and Data for Automated Driving. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Modrzyk, Nicolas. Real-Time IoT Imaging with Deep Neural Networks. Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5722-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Iba, Hitoshi. Evolutionary Approach to Machine Learning and Deep Neural Networks. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0200-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Deep neural networks architecture"

1

Wang, Liang, and Jianxin Zhao. "Deep Neural Networks." In Architecture of Advanced Numerical Analysis Systems. Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8853-5_5.

Full text
Abstract:
There are many articles teaching people how to build intelligent applications using frameworks such as TensorFlow and PyTorch. However, outside of highly specialized research papers, very few articles give a comprehensive understanding of how to develop such frameworks. In this chapter, rather than just “casting spells,” we focus on explaining how to make the magic work in the first place. We will dissect the deep neural network module in Owl, then demonstrate how to assemble different building blocks into a working framework. Owl’s neural network module is a full-featured DNN framework. You can define a neural network in a very compact and elegant way thanks to OCaml’s expressiveness. The DNN applications built on Owl can achieve state-of-the-art performance.
APA, Harvard, Vancouver, ISO, and other styles
2

Calin, Ovidiu. "Neural Networks." In Deep Learning Architectures. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sun, Yanan, Gary G. Yen, and Mengjie Zhang. "Deep Neural Networks." In Evolutionary Deep Neural Architecture Search: Fundamentals, Methods, and Recent Advances. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16868-0_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Calin, Ovidiu. "Recurrent Neural Networks." In Deep Learning Architectures. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wüthrich, Mario V., and Michael Merz. "Deep Learning." In Springer Actuarial. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_7.

Full text
Abstract:
The core of this book is deep learning methods and neural networks. This chapter considers deep feed-forward neural (FN) networks. We introduce the generic architecture of deep FN networks, and we discuss universality theorems of FN networks. We present network fitting, back-propagation, embedding layers for categorical variables and insurance-specific issues such as the balance property in network fitting, as well as network ensembling to reduce model uncertainty. This chapter is complemented by many examples on non-life insurance pricing, but also on mortality modeling, as well as tools that help to explain deep FN network regression results.
APA, Harvard, Vancouver, ISO, and other styles
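As a hedged illustration of two ingredients named in the chapter abstract above, the sketch below runs a feed-forward pass with an embedding layer for a categorical variable. All dimensions, data, and the exponential output link (chosen because the book's examples concern insurance frequencies) are assumptions for the example, not the book's code.

```python
# Minimal feed-forward pass with an embedding layer for one categorical
# variable, as discussed in the chapter abstract. Shapes are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_categories, emb_dim, num_dim, hidden = 5, 3, 4, 8

E  = rng.normal(scale=0.1, size=(n_categories, emb_dim))  # embedding table
W1 = rng.normal(scale=0.1, size=(emb_dim + num_dim, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, 1))

def predict(cat_idx, x_num):
    """cat_idx: (batch,) integer codes; x_num: (batch, num_dim) floats."""
    x = np.concatenate([E[cat_idx], x_num], axis=1)  # look up, then concat
    h = np.tanh(x @ W1)                              # one hidden FN layer
    return np.exp(h @ W2)                            # positive output, e.g.
                                                     # an expected frequency

batch_cat = np.array([0, 3, 1])
batch_num = rng.normal(size=(3, num_dim))
print(predict(batch_cat, batch_num).ravel())
```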
6

Weston, Kevin, Vahid Janfaza, Abhishek Taur, et al. "Post-Silicon Customization Using Deep Neural Networks." In Architecture of Computing Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-42785-5_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shanthini, A., Gunasekaran Manogaran, and G. Vadivu. "Deep Convolutional Neural Network Architecture." In Series in BioEngineering. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3877-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Maheswari, S. "Web Service User Diagnostics with Deep Learning Architectures." In Recurrent Neural Networks. CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pavlitskaya, Svetlana, Christian Hubschneider, and Michael Weber. "Evaluating Mixture-of-Experts Architectures for Network Aggregation." In Deep Neural Networks and Data for Automated Driving. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4_11.

Full text
Abstract:
The mixture-of-experts (MoE) architecture is an approach to aggregating several expert components via an additional gating module, which learns to predict the most suitable distribution of the experts’ outputs for each input. An MoE thus not only relies on redundancy for increased robustness; we also demonstrate how this architecture can provide additional interpretability while retaining performance similar to a standalone network. As an example, we train expert networks to perform semantic segmentation of traffic scenes and combine them into an MoE with an additional gating network. Our experiments with two different expert model architectures reveal that the MoE is able to reach, and for certain data subsets even surpass, the baseline performance, and that it also outperforms a simple aggregation via ensembling. A further advantage of an MoE is its increased interpretability: a comparison of the pixel-wise predictions of the whole MoE model and of the participating experts helps to identify regions of high uncertainty in an input.
APA, Harvard, Vancouver, ISO, and other styles
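A minimal sketch of the mixture-of-experts aggregation described in the entry above: a gating module predicts a softmax distribution over experts and weights their outputs per input. The linear experts and gate here are hypothetical stand-ins for the chapter's segmentation networks.

```python
# Sketch of a mixture-of-experts forward pass: a gating module predicts a
# distribution over experts, and the experts' outputs are aggregated
# accordingly. Experts and gate are hypothetical linear maps.
import numpy as np

rng = np.random.default_rng(3)
d_in, d_out, n_experts = 6, 2, 3
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
W_gate = rng.normal(size=(d_in, n_experts))

def moe_forward(x):                                    # x: (batch, d_in)
    logits = x @ W_gate
    gate = np.exp(logits - logits.max(axis=1, keepdims=True))
    gate /= gate.sum(axis=1, keepdims=True)            # softmax over experts
    outs = np.stack([x @ W for W in experts], axis=1)  # (batch, n_exp, d_out)
    return (gate[..., None] * outs).sum(axis=1), gate  # weighted aggregation

y, gate = moe_forward(rng.normal(size=(4, d_in)))
print(y.shape, gate.round(2))  # per-input expert weights aid interpretability
```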
10

Koh, Immanuel. "Associative Synthesis with Deep Neural Networks for Architectural Design." In Formal Methods in Architecture. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2217-8_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Deep neural networks architecture"

1

Nossier, Soha A., and Mhd Saeed Sharif. "Gender-Specific Speech Enhancement Architecture for Improving Deep Neural Networks Learning." In 2024 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT). IEEE, 2024. https://doi.org/10.1109/3ict64318.2024.10824570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Patil, Kavita, Rohit Patil, Vedanti Koyande, Amaya Singh Thakur, and Kshitij Kadam. "Analyzing Chatbot Architectures Utilising Deep Neural Networks." In 2024 IEEE 6th International Conference on Cybernetics, Cognition and Machine Learning Applications (ICCCMLA). IEEE, 2024. https://doi.org/10.1109/icccmla63077.2024.10871275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhu, Bin, Xiaofeng Wang, Wenzhuo Han, et al. "SecureVeil: A Modular Architecture with Deep Cosine Transformation and Secure Key Fusion for Face Template Protection." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Zheng, Xuan Rao, Shaojie Liu, Bo Zhao, and Derong Liu. "ENAO: Evolutionary Neural Architecture Optimization in the Approximate Continuous Latent Space of a Deep Generative Model." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yao, Yuan, Xiaoyue Chen, Hannah Atmer, and Stefanos Kaxiras. "TangramFP: Energy-Efficient, Bit-Parallel, Multiply-Accumulate for Deep Neural Networks." In 2024 IEEE 36th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD). IEEE, 2024. http://dx.doi.org/10.1109/sbac-pad63648.2024.00009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Logeshwaran, J., Durgesh Srivastava, Manoj Pal, S. Dhanasekaran, Anuradha S. Nigade, and Keshav Kaushik. "Optimal Network Architecture and Inference Dependencies for Efficient Training of Deep Neural Networks in Bioinformatics." In 2024 Eighth International Conference on Parallel, Distributed and Grid Computing (PDGC). IEEE, 2024. https://doi.org/10.1109/pdgc64653.2024.10984312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hu, Jie, Liujuan Cao, Tong Tong, et al. "Architecture Disentanglement for Deep Neural Networks." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

La Malfa, Emanuele, Gabriele La Malfa, Giuseppe Nicosia, and Vito Latora. "Deep Neural Networks via Complex Network Theory: A Perspective." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/482.

Full text
Abstract:
Deep Neural Networks (DNNs) can be represented as graphs whose links and vertices iteratively process data and solve tasks sub-optimally. Complex Network Theory (CNT), merging statistical physics with graph theory, provides a method for interpreting neural networks by analysing their weights and neuron structures. However, classic works adapt CNT metrics that only permit a topological analysis as they do not account for the effect of the input data. In addition, CNT metrics have been applied to a limited range of architectures, mainly including Fully Connected neural networks. In this work, we …
APA, Harvard, Vancouver, ISO, and other styles
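One simple instance of the graph view sketched in the abstract above, under assumed toy weights: treat each weight matrix as a bipartite graph between consecutive layers and compute a classic CNT quantity, node strength (the sum of absolute incident weights), per neuron. This is an illustration of the general idea, not the paper's input-aware metrics.

```python
# Hedged sketch of a Complex Network Theory view of a DNN: each weight
# matrix is a bipartite graph between layers; node strength is the sum of
# absolute incident weights per neuron. Toy weights, illustrative only.
import numpy as np

rng = np.random.default_rng(4)
weights = [rng.normal(size=(8, 16)), rng.normal(size=(16, 4))]  # toy MLP

for i, W in enumerate(weights):
    out_strength = np.abs(W).sum(axis=1)  # strength of layer-i neurons
    in_strength = np.abs(W).sum(axis=0)   # strength of layer-(i+1) neurons
    print(f"layer {i}: mean out-strength {out_strength.mean():.2f}, "
          f"mean in-strength {in_strength.mean():.2f}")
```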
9

Lopes, Eduardo José Costa, and Reinaldo Augusto da Costa Bianchi. "Short-term prediction for Ethereum with Deep Neural Networks." In Brazilian Workshop on Artificial Intelligence in Finance. Sociedade Brasileira de Computação, 2022. http://dx.doi.org/10.5753/bwaif.2022.222629.

Full text
Abstract:
The main contribution of this research is to investigate whether an Artificial Neural Network is an option to predict the Ethereum cryptocurrency close price in a time-constrained scenario. The ANN training time and time-lagged data availability are considered as constraints on finding the fastest and the most accurate regression model, using ARIMA results as a baseline. As part of the study, hourly aggregated data is processed to generate a step-ahead forecast, and then processing time is compared for each architecture. Previous work related to cryptocurrency forecasting usually focuses the analysis …
APA, Harvard, Vancouver, ISO, and other styles
10

Elsayed, Nelly, Zag ElSayed, and Anthony S. Maida. "LiteLSTM Architecture for Deep Recurrent Neural Networks." In 2022 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2022. http://dx.doi.org/10.1109/iscas48785.2022.9937585.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Deep neural networks architecture"

1

Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.

Full text
Abstract:
We present Any-Precision Deep Neural Networks (Any-Precision DNNs), which are trained with a new method that empowers learned DNNs to be flexible in any numerical precision during inference. The same model at runtime can be flexibly and directly set to different bit-widths, by truncating the least significant bits, to support a dynamic speed and accuracy trade-off. When all layers are set to low bits, we show that the model achieved accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-world …
APA, Harvard, Vancouver, ISO, and other styles
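The bit-truncation mechanism described in the entry above can be illustrated with a small fixed-point emulation: quantize weights once at a high bit-width, then derive lower-precision codes by dropping least significant bits. Scales, bit-widths, and the uniform quantizer here are assumptions for illustration, not the paper's exact scheme.

```python
# Sketch of the bit-truncation idea: weights stored at a high precision can
# be run at lower bit-widths by dropping least significant bits. Uniform
# signed fixed-point emulation in numpy; all details are hypothetical.
import numpy as np

def quantize(w, bits, w_max=1.0):
    """Uniformly quantize w to signed integer codes at the given bit-width."""
    levels = 2 ** (bits - 1) - 1
    return np.clip(np.round(w / w_max * levels), -levels, levels).astype(np.int32)

def truncate(q, from_bits, to_bits):
    """Drop least significant bits to convert from_bits codes to to_bits."""
    return q >> (from_bits - to_bits)   # arithmetic shift keeps the sign

rng = np.random.default_rng(5)
w = rng.uniform(-1, 1, size=5)
q8 = quantize(w, 8)                     # model stored once at 8 bits
q4 = truncate(q8, 8, 4)                 # same weights, run at 4 bits
print(w.round(3), q8, q4, sep="\n")
```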
2

Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Elias Ioup, et al. KANICE: Kolmogorov-Arnold Networks with Interactive Convolutional Elements. Engineer Research and Development Center (U.S.), 2025. https://doi.org/10.21079/11681/49791.

Full text
Abstract:
We introduce KANICE, a novel neural architecture that combines Convolutional Neural Networks (CNNs) with Kolmogorov-Arnold Network (KAN) principles. KANICE integrates Interactive Convolutional Blocks (ICBs) and KAN linear layers into a CNN framework. This leverages KANs’ universal approximation capabilities and ICBs’ adaptive feature learning. KANICE captures complex, non-linear data relationships while enabling dynamic, context-dependent feature extraction based on the Kolmogorov-Arnold representation theorem. We evaluated KANICE on four datasets: MNIST, Fashion-MNIST, EMNIST, and SVHN, comparing …
APA, Harvard, Vancouver, ISO, and other styles
3

Pasupuleti, Murali Krishna. Neural Computation and Learning Theory: Expressivity, Dynamics, and Biologically Inspired AI. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv425.

Full text
Abstract:
Neural computation and learning theory provide the foundational principles for understanding how artificial and biological neural networks encode, process, and learn from data. This research explores expressivity, computational dynamics, and biologically inspired AI, focusing on theoretical expressivity limits, infinite-width neural networks, recurrent and spiking neural networks, attractor models, and synaptic plasticity. The study investigates mathematical models of function approximation, kernel methods, dynamical systems, and stability properties to assess the generalization capabilities …
APA, Harvard, Vancouver, ISO, and other styles
4

Tayeb, Shahab. Taming the Data in the Internet of Vehicles. Mineta Transportation Institute, 2022. http://dx.doi.org/10.31979/mti.2022.2014.

Full text
Abstract:
As an emerging field, the Internet of Vehicles (IoV) has a myriad of security vulnerabilities that must be addressed to protect system integrity. To stay ahead of novel attacks, cybersecurity professionals are developing new software and systems using machine learning techniques. Neural network architectures improve such systems, including Intrusion Detection Systems (IDSs), by implementing anomaly detection, which differentiates benign data packets from malicious ones. For an IDS to best predict anomalies, the model is trained on data that is typically pre-processed through normalization and …
APA, Harvard, Vancouver, ISO, and other styles
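As a small illustration of the pre-processing step mentioned in the abstract above, the sketch below min-max normalizes packet features before they would reach an intrusion detection model; the feature columns and values are hypothetical.

```python
# Illustrative pre-processing step: min-max normalization of packet
# features ahead of an intrusion detection model. Hypothetical features.
import numpy as np

packets = np.array([[60,   0.2,   3],   # [bytes, inter-arrival s, flags]
                    [1500, 0.001, 1],
                    [800,  0.05,  7]], dtype=float)

lo, hi = packets.min(axis=0), packets.max(axis=0)
normalized = (packets - lo) / (hi - lo)  # each feature scaled to [0, 1]
print(normalized.round(3))
```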
5

Pasupuleti, Murali Krishna. Quantum-Enhanced Machine Learning: Harnessing Quantum Computing for Next-Generation AI Systems. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv125.

Full text
Abstract:
Quantum-enhanced machine learning (QML) represents a paradigm shift in artificial intelligence by integrating quantum computing principles to solve complex computational problems more efficiently than classical methods. By leveraging quantum superposition, entanglement, and parallelism, QML has the potential to accelerate deep learning training, optimize combinatorial problems, and enhance feature selection in high-dimensional spaces. This research explores foundational quantum computing concepts relevant to AI, including quantum circuits, variational quantum algorithms, and quantum …
APA, Harvard, Vancouver, ISO, and other styles
6

Pettit, Chris, and D. Wilson. A physics-informed neural network for sound propagation in the atmospheric boundary layer. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/41034.

Full text
Abstract:
We describe what we believe is the first effort to develop a physics-informed neural network (PINN) to predict sound propagation through the atmospheric boundary layer. PINN is a recent innovation in the application of deep learning to simulate physics. The motivation is to combine the strengths of data-driven models and physics models, thereby producing a regularized surrogate model using less data than a purely data-driven model. In a PINN, the data-driven loss function is augmented with penalty terms for deviations from the underlying physics, e.g., a governing equation or a boundary condition …
APA, Harvard, Vancouver, ISO, and other styles
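A hedged sketch of the loss structure described in the entry above: a data-misfit term augmented with a penalty for violating a governing equation. For brevity this uses the toy ODE u'(t) = -u(t) and a finite-difference derivative, rather than the report's atmospheric sound-propagation physics and automatic differentiation.

```python
# Hedged sketch of a physics-informed loss: data misfit plus a penalty on
# the residual of a governing equation (toy ODE u'(t) = -u(t), not the
# report's physics). Derivative via central finite differences.
import numpy as np

rng = np.random.default_rng(6)
w = rng.normal(scale=0.1, size=(1, 16))   # tiny surrogate network u_theta(t)
v = rng.normal(scale=0.1, size=(16,))

def u(t, w, v):
    return np.tanh(t[:, None] @ w) @ v

def pinn_loss(w, v, t_data, u_data, t_col, lam=1.0, eps=1e-4):
    data = np.mean((u(t_data, w, v) - u_data) ** 2)        # fit observations
    du = (u(t_col + eps, w, v) - u(t_col - eps, w, v)) / (2 * eps)
    physics = np.mean((du + u(t_col, w, v)) ** 2)          # residual of u'=-u
    return data + lam * physics                            # regularized loss

t_data = np.array([0.0, 0.5, 1.0])
u_data = np.exp(-t_data)                                   # sparse "data"
t_col = np.linspace(0.0, 2.0, 50)                          # collocation points
print(pinn_loss(w, v, t_data, u_data, t_col))
```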
7

Panta, Manisha, Md Tamjidul Hoque, Kendall Niles, Joe Tom, Mahdi Abdelguerfi, and Maik Flanagin. Deep learning approach for accurate segmentation of sand boils in levee systems. Engineer Research and Development Center (U.S.), 2024. http://dx.doi.org/10.21079/11681/49460.

Full text
Abstract:
Sand boils can contribute to the liquefaction of a portion of the levee, leading to levee failure. Accurately detecting and segmenting sand boils is crucial for effectively monitoring and maintaining levee systems. This paper presents SandBoilNet, a fully convolutional neural network with skip connections designed for accurate pixel-level classification or semantic segmentation of sand boils from images in levee systems. In this study, we explore the use of transfer learning for fast training and detecting sand boils through semantic segmentation. By utilizing a pretrained CNN model with ResNet …
APA, Harvard, Vancouver, ISO, and other styles
8

Koh, Christopher Fu-Chai, and Sergey Igorevich Magedov. Bond Order Prediction Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1557202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shevitski, Brian, Yijing Watkins, Nicole Man, and Michael Girard. Digital Signal Processing Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), 2023. http://dx.doi.org/10.2172/1984848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Landon, Nicholas. A survey of repair strategies for deep neural networks. Iowa State University, 2022. http://dx.doi.org/10.31274/cc-20240624-93.

Full text
APA, Harvard, Vancouver, ISO, and other styles