Academic literature on the topic "Neural network accelerator"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Neural network accelerator".

Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Neural network accelerator"

1

Eliahu, Adi, Ronny Ronen, Pierre-Emmanuel Gaillardon, and Shahar Kvatinsky. "multiPULPly." ACM Journal on Emerging Technologies in Computing Systems 17, no. 2 (2021): 1–27. http://dx.doi.org/10.1145/3432815.

Abstract
Computationally intensive neural network applications often need to run on resource-limited, low-power devices. Numerous hardware accelerators have been developed to speed up neural network applications and reduce power consumption; however, most focus on data centers and full-fledged systems. Acceleration in ultra-low-power systems has been only partially addressed. In this article, we present multiPULPly, an accelerator that integrates memristive technologies within standard low-power CMOS technology to accelerate multiplication in neural network inference on ultra-low-power…
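For orientation only (a minimal sketch under our own assumptions, not code from the paper): accelerators like multiPULPly exploit analog in-memory computing, where a weight matrix is stored as memristor conductances and a matrix-vector product is read out as accumulated column currents. All names below are hypothetical.

```python
import numpy as np

# Idealized memristive crossbar computing y = W.T @ x: weights are mapped to
# conductances, input voltages drive the rows, and each column current
# implements a multiply-accumulate in the analog domain (noise-free model).

def crossbar_matvec(weights: np.ndarray, x: np.ndarray, g_max: float = 1e-4) -> np.ndarray:
    w_scale = float(np.abs(weights).max())
    g = weights / w_scale * g_max        # ideal conductance programming
    currents = g.T @ x                   # Kirchhoff current summation per column
    return currents * (w_scale / g_max)  # rescale currents back to numbers

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))          # rows = inputs, columns = outputs
x = rng.standard_normal(4)
print(np.allclose(crossbar_matvec(W, x), W.T @ x))  # True in this ideal model
```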
2

Hong, JiUn, Saad Arslan, TaeGeon Lee, and HyungWon Kim. "Design of Power-Efficient Training Accelerator for Convolution Neural Networks." Electronics 10, no. 7 (2021): 787. http://dx.doi.org/10.3390/electronics10070787.

Abstract
To realize deep learning techniques, a type of deep neural network (DNN) called the convolutional neural network (CNN) is among the most widely used models for image recognition applications. However, there is growing demand for lightweight, low-power neural network accelerators, not only for inference but also for the training process. In this paper, we propose a training accelerator that provides low power and a compact chip size, targeted at mobile and edge computing applications. It achieves real-time processing of both inference and training using concurrent floating-point…
3

Cho, Jaechan, Yongchul Jung, Seongjoo Lee, and Yunho Jung. "Reconfigurable Binary Neural Network Accelerator with Adaptive Parallelism Scheme." Electronics 10, no. 3 (2021): 230. http://dx.doi.org/10.3390/electronics10030230.

Abstract
Binary neural networks (BNNs) have attracted significant interest for the implementation of deep neural networks (DNNs) on resource-constrained edge devices, and various BNN accelerator architectures have been proposed to achieve higher efficiency. BNN accelerators can be divided into two categories: streaming and layer accelerators. Although streaming accelerators designed for a specific BNN network topology provide high throughput, they are infeasible for various sensor applications in edge AI because of their complexity and inflexibility. In contrast, layer accelerators with reasonable resource…
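For context (a generic sketch, not the paper's design): the arithmetic that BNN accelerators reduce to hardware is the XNOR-popcount dot product over {-1, +1} values, as below.

```python
import numpy as np

# XNOR-popcount dot product used by binary neural network hardware: with
# weights/activations in {-1, +1} encoded as bits {0, 1}, a dot product
# becomes bitwise XNOR followed by a population count.

def binary_dot(a_bits: np.ndarray, w_bits: np.ndarray) -> int:
    n = a_bits.size
    matches = int(np.count_nonzero(~(a_bits ^ w_bits) & 1))  # XNOR + popcount
    return 2 * matches - n                                   # back to +/-1 math

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 64, dtype=np.uint8)  # bit 0 encodes -1, bit 1 encodes +1
w = rng.integers(0, 2, 64, dtype=np.uint8)
reference = int(((2 * a.astype(int) - 1) * (2 * w.astype(int) - 1)).sum())
assert binary_dot(a, w) == reference
```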
4

Noskova, E. S., I. E. Zakharov, Y. N. Shkandybin, and S. G. Rykovanov. "Towards energy-efficient neural network calculations." Computer Optics 46, no. 1 (2022): 160–66. http://dx.doi.org/10.18287/2412-6179-co-914.

Abstract
Nowadays, the problem of creating high-performance, energy-efficient hardware for artificial intelligence tasks is acute. The most popular solution is to use deep learning accelerators, such as GPUs and Tensor Processing Units, to run neural networks. Recently, NVIDIA announced the NVDLA project, which allows one to design neural network accelerators based on open-source code. This work describes the full cycle of creating a prototype NVDLA accelerator, as well as testing the resulting solution by running the resnet-50 neural network on it. Finally, an assessment…
5

Fan, Yuxiao. "Design and research of high-performance convolutional neural network accelerator based on Chipyard." Journal of Physics: Conference Series 2858, no. 1 (2024): 012001. http://dx.doi.org/10.1088/1742-6596/2858/1/012001.

Abstract
Neural network accelerators perform well in the research and verification of neural network models. In this paper, a convolutional neural network accelerator system composed of a RISC-V processor core and a Gemmini array accelerator is designed in the Chisel language within the Chipyard framework, and the acceleration effect of different Gemmini array configurations on different input matrices is investigated. The results show that the accelerator system can achieve a speedup of thousands of times over a single processor for large matrix calculations.
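As a loose illustration of the kernel such systems accelerate (our own sketch; DIM and the code below are assumptions, not the paper's Chisel sources): a systolic array like Gemmini processes large matrix multiplies as a grid of fixed-size tiles, one tile update per array pass.

```python
import numpy as np

DIM = 4  # hypothetical systolic-array dimension

def tiled_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Tiled matmul; each DIM x DIM tile update maps onto the PE array."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(0, n, DIM):
        for j in range(0, m, DIM):
            for p in range(0, k, DIM):
                # One tile-level MAC; in hardware this is a single array pass.
                C[i:i+DIM, j:j+DIM] += A[i:i+DIM, p:p+DIM] @ B[p:p+DIM, j:j+DIM]
    return C

rng = np.random.default_rng(2)
A, B = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
assert np.allclose(tiled_matmul(A, B), A @ B)
```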
6

Xu, Jia, Han Pu, and Dong Wang. "Sparse Convolution FPGA Accelerator Based on Multi-Bank Hash Selection." Micromachines 16, no. 1 (2024): 22. https://doi.org/10.3390/mi16010022.

Abstract
Reconfigurable-processor-based acceleration of deep convolutional neural network (DCNN) algorithms has emerged as a widely adopted technique, with sparse neural network acceleration receiving particular attention as an active research area. However, many computing devices that claim high computational power still struggle to execute neural network algorithms with optimal efficiency, low latency, and minimal power consumption. Consequently, there remains significant potential for improving the efficiency, latency, and power consumption of neural network accelerators across diverse…
7

Ferianc, Martin, Hongxiang Fan, Divyansh Manocha, et al. "Improving Performance Estimation for Design Space Exploration for Convolutional Neural Network Accelerators." Electronics 10, no. 4 (2021): 520. http://dx.doi.org/10.3390/electronics10040520.

Abstract
Contemporary advances in neural networks (NNs) have demonstrated their potential in applications such as image classification, object detection, and natural language processing. In particular, reconfigurable accelerators have been widely used to accelerate NNs due to their reconfigurability and efficiency in specific application instances. To determine the accelerator's configuration, it is necessary to conduct design space exploration to optimize performance. However, design space exploration is time-consuming because of the slow performance evaluation…
8

Sunny, Febin P., Asif Mirza, Mahdi Nikdast, and Sudeep Pasricha. "ROBIN: A Robust Optical Binary Neural Network Accelerator." ACM Transactions on Embedded Computing Systems 20, no. 5s (2021): 1–24. http://dx.doi.org/10.1145/3476988.

Abstract
Domain-specific neural network accelerators have garnered attention because of their improved energy efficiency and inference performance compared to CPUs and GPUs. Such accelerators are thus well suited for resource-constrained embedded systems. However, mapping sophisticated neural network models onto these accelerators still entails significant energy and memory consumption, along with high inference time overhead. Binarized neural networks (BNNs), which use single-bit weights, represent an efficient way to implement and deploy neural network models on accelerators. In this paper, we present…
9

Tang, Wenkai, and Peiyong Zhang. "GPGCN: A General-Purpose Graph Convolution Neural Network Accelerator Based on RISC-V ISA Extension." Electronics 11, no. 22 (2022): 3833. http://dx.doi.org/10.3390/electronics11223833.

Abstract
In the past two years, various graph convolutional network (GCN) accelerators have emerged, each with its own characteristics, but their common disadvantage is that the hardware architecture is not programmable and is optimized for a specific network and dataset. They may not support acceleration for different GCNs and may not achieve optimal hardware resource utilization for datasets of different sizes. Given these shortcomings, and following the development trend of traditional neural network accelerators, this paper proposes and implements GPGCN: a general-purpose…
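For orientation (a generic GCN layer sketch, not GPGCN's implementation): the computation such accelerators target is H' = ReLU(A_hat H W), where the sparse normalized adjacency A_hat drives the aggregation stage and the dense weight matrix W drives the feature transformation stage.

```python
import numpy as np

# Generic GCN layer: H_next = ReLU(A_hat @ H @ W). Accelerators differ mainly
# in how they schedule the sparse aggregation (A_hat @ H) versus the dense
# feature transformation (H @ W).

def gcn_layer(adj: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    np.fill_diagonal(adj, 0.0)
    A = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
    return np.maximum(A_hat @ H @ W, 0.0)     # aggregate, transform, ReLU

rng = np.random.default_rng(3)
adj = (rng.random((5, 5)) < 0.4).astype(float)
adj = np.maximum(adj, adj.T)                  # undirected toy graph
H, W = rng.standard_normal((5, 8)), rng.standard_normal((8, 4))
print(gcn_layer(adj, H, W).shape)             # (5, 4)
```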
10

Xia, Chengpeng, Yawen Chen, Haibo Zhang, Hao Zhang, Fei Dai, and Jigang Wu. "Efficient neural network accelerators with optical computing and communication." Computer Science and Information Systems, no. 00 (2022): 66. http://dx.doi.org/10.2298/csis220131066x.

Abstract
Conventional electronic artificial neural network (ANN) accelerators focus on architecture design and numerical computation optimization to improve training efficiency. However, these approaches have recently encountered bottlenecks in energy efficiency and computing performance, which has led to increased interest in photonic accelerators. Photonic architectures, with their low energy consumption, high transmission speed, and high bandwidth, are considered an important direction for next-generation computing architectures. In this paper, to provide a better understanding of optical tec…
More sources

Theses on the topic "Neural network accelerator"

1

Tianxu, Yue. "Convolutional Neural Network FPGA-accelerator on Intel DE10-Standard FPGA." Thesis, Linköpings universitet, Elektroniska Kretsar och System, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178174.

Abstract
Convolutional neural networks (CNNs) have been used extensively in many areas, such as face and speech recognition, image searching and classification, and automated driving. Hence, CNN accelerators have become a trending research area. Graphics processing units (GPUs) are widely applied in CNN accelerators; however, field-programmable gate arrays (FPGAs) offer higher energy and resource efficiency than GPUs, and high-level synthesis tools based on the Open Computing Language (OpenCL) can shorten the verification and implementation period for FPGAs. In this project, PipeCNN[1] is…
2

Oudrhiri, Ali. "Performance of a Neural Network Accelerator Architecture and its Optimization Using a Pipeline-Based Approach." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS658.pdf.

Abstract
In recent years, neural networks have gained popularity due to their versatility and effectiveness in solving a wide variety of complex tasks. However, as neural networks continue to find applications in an ever-growing range of domains, their substantial computational requirements become a pressing challenge. This computational demand is particularly problematic when deploying neural networks on resource-constrained embedded devices, especially in the context of edge computing…
3

Maltoni, Pietro. "Progetto di un acceleratore hardware per layer di convoluzioni depthwise in applicazioni di Deep Neural Network." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24205/.

Abstract
Progressive technological development and the constant monitoring, control, and analysis of our surroundings have led to ever more capable IoT devices, which is why people have begun to speak of Edge Computing. These devices contain the resources to process sensor data directly on the device. This technology lends itself well to CNNs, the neural networks used for image analysis and recognition. Separable convolutions represent a new frontier because they massively reduce the number of operations to be performed on tensors of…
4

Xu, Hongjie. "Energy-Efficient On-Chip Cache Architectures and Deep Neural Network Accelerators Considering the Cost of Data Movement." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263786.

Abstract
No abstract is given in the metadata; the record carries only degree information: Doctor of Informatics, Kyoto University, Graduate School of Informatics, Department of Communications and Computer Engineering (degree no. 甲第23325号); examiners Prof. Hidetoshi Onodera, Prof. Eiji Oki, and Prof. Takashi Sato; affiliated with the Kyoto University doctoral program "先端光・電子デバイス創成学" (Advanced Photonic and Electronic Devices).
5

Pradels, Léo. "Efficient CNN inference acceleration on FPGAs : a pattern pruning-driven approach." Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS087.

Abstract
CNN-based deep learning models deliver state-of-the-art performance in image and video processing tasks, particularly image enhancement and classification. However, these models are computationally heavy and have a large memory footprint, which makes them unsuitable for real-time constraints on embedded FPGAs. It is therefore essential to compress these CNNs and to design accelerator architectures for inference that integrate compression in a hardware/software co-design approach. Although software optimizations…
6

Riera, Villanueva Marc. "Low-power accelerators for cognitive computing." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669828.

Abstract
Deep neural networks (DNNs) have achieved tremendous success in cognitive applications and are especially efficient in classification and decision-making problems such as speech recognition and machine translation. Mobile and embedded devices increasingly rely on DNNs to understand the world. Smartphones, smartwatches, and cars perform discriminative tasks, such as face or object recognition, on a daily basis. Despite the increasing popularity of DNNs, running them on mobile and embedded systems comes with several main challenges: delivering high accuracy and performance with a small memory and…
7

Khan, Muhammad Jazib. "Programmable Address Generation Unit for Deep Neural Network Accelerators." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-271884.

Abstract
Convolutional neural networks are becoming more and more popular due to their applications in revolutionary technologies such as autonomous driving, biomedical imaging, and natural language processing. With this increase in adoption, the complexity of the underlying algorithms is also increasing. This trend has implications for the computation platforms as well, i.e., GPU-, FPGA-, or ASIC-based accelerators, especially for the address generation unit (AGU), which is responsible for memory access. Existing accelerators typically have parametrizable-datapath AGUs, which have minimal adaptability…
8

Jalasutram, Rommel. "Acceleration of spiking neural networks on multicore architectures." Thesis, Clemson University, 2009. http://etd.lib.clemson.edu/documents/1252424720/.

9

Han, Bing. "ACCELERATION OF SPIKING NEURAL NETWORK ON GENERAL PURPOSE GRAPHICS PROCESSORS." University of Dayton / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1271368713.

10

Chen, Yu-Hsin. "Architecture design for highly flexible and energy-efficient deep neural network accelerators." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117838.

Abstract
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Deep neural networks (DNNs) are the backbone of modern artificial intelligence (AI). However, due to their high computational complexity and diverse shapes and sizes, dedicated accelerators that can achieve high…
More sources

Books on the topic "Neural network accelerator"

1

Whitehead, P. A. Design considerations for a hardware accelerator for Kohonen unsupervised learning in artificial neural networks. UMIST, 1997.

2

Jones, Steven P. Neural network models of simple mechanical systems illustrating the feasibility of accelerated life testing. National Aeronautics and Space Administration, 1996.

3

Daglis, I. A., ed. Effects of space weather on technology infrastructure. Kluwer Academic Publishers, 2004.

4

Kong, Joonho, and Mahmood Azhar Qureshi. Accelerators for Convolutional Neural Networks. John Wiley & Sons, 2023.
5

Munir, Arslan. Accelerators for Convolutional Neural Networks. John Wiley & Sons, 2023.
6

Accelerated training for large feedforward neural networks. National Aeronautics and Space Administration, Ames Research Center, 1998.
7

Raff, Lionel, Ranga Komanduri, Martin Hagan, and Satish Bukkapatnam. Neural Networks in Chemical Reaction Dynamics. Oxford University Press, 2012. http://dx.doi.org/10.1093/oso/9780199765652.001.0001.

Abstract
This monograph presents recent advances in neural network (NN) approaches and applications to chemical reaction dynamics. Topics covered include: (i) the development of ab initio potential-energy surfaces (PES) for complex multichannel systems using modified novelty sampling and feedforward NNs; (ii) methods for sampling the configuration space of critical importance, such as trajectory and novelty sampling methods and gradient fitting methods; (iii) parametrization of interatomic potential functions using a genetic algorithm accelerated with a NN; (iv) parametrization of analytic interatomic…
8

AI Ladder: Accelerate Your Journey to AI. O'Reilly Media, Incorporated, 2020.

Book chapters on the topic "Neural network accelerator"

1

Huang, Hantao, and Hao Yu. "Distributed-Solver for Networked Neural Network." In Compact and Fast Machine Learning Accelerator for IoT Devices. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3323-1_5.

2

Nakajima, Toshiya. "Architecture of the Neural Network Simulation Accelerator NEUROSIM/L." In International Neural Network Conference. Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_61.

3

Reagen, Brandon, Robert Adolf, Paul Whatmough, Gu-Yeon Wei, and David Brooks. "Neural Network Accelerator Optimization: A Case Study." In Deep Learning for Computer Architects. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-01756-8_4.

4

Huang, Hantao, and Hao Yu. "Tensor-Solver for Deep Neural Network." In Compact and Fast Machine Learning Accelerator for IoT Devices. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3323-1_4.

5

Ae, Tadashi, and Reiji Aibara. "A Neural Network for 3-D VLSI Accelerator." In The Kluwer International Series in Engineering and Computer Science. Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-1619-0_16.

6

Huang, Hantao, and Hao Yu. "Least-Squares-Solver for Shallow Neural Network." In Compact and Fast Machine Learning Accelerator for IoT Devices. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3323-1_3.

7

Hu, Lili. "Frameworks for Efficient Convolutional Neural Network Accelerator on FPGA." In Advances in Intelligent Systems and Computing. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8944-2_75.

8

Ravikumar, B., B. Chandrababu Naik, Muhsin Jaber Jweeg, et al. "FPGA Realization of Neural Network Accelerator for Image Classification." In Studies in Systems, Decision and Control. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-84628-1_43.

9

Cheung, Kit, Simon R. Schultz, and Wayne Luk. "A Large-Scale Spiking Neural Network Accelerator for FPGA Systems." In Artificial Neural Networks and Machine Learning – ICANN 2012. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_15.

10

Wu, Jin, Xiangyang Shi, Wenting Pang, and Yu Wang. "Research on FPGA Accelerator Optimization Based on Graph Neural Network." In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-20738-9_61.


Conference proceedings on the topic "Neural network accelerator"

1

Fatima, Eeman, Muhammad Fahad, Hiba Abrar, Haroon-ur-Rashid, and Haroon Waris. "FPGA Based Artificial Neural Network Accelerator." In 2024 26th International Multitopic Conference (INMIC). IEEE, 2024. https://doi.org/10.1109/inmic64792.2024.11004346.

2

Wang, Mengxuan, and Chang Wu. "Layer Pipelined Neural Network Accelerator Design on 2.5D FPGAs." In 2024 IEEE 17th International Conference on Solid-State & Integrated Circuit Technology (ICSICT). IEEE, 2024. https://doi.org/10.1109/icsict62049.2024.10831139.

3

Zhao, Denghui, Jianrui He, Xuyu Jing, and Xibiao Hou. "DNN performance optimization based on Gemmini neural network hardware accelerator." In Fourth International Conference on Advanced Algorithms and Neural Networks (AANN 2024), edited by Qinghua Lu and Weishan Zhang. SPIE, 2024. http://dx.doi.org/10.1117/12.3049564.

4

Wen, Fangxin, Zhongzu Zhou, Jiang Zhao, et al. "Fault Identification Method Based on BP Neural Network in Accelerator Distribution Network." In 2025 2nd International Conference on Smart Grid and Artificial Intelligence (SGAI). IEEE, 2025. https://doi.org/10.1109/sgai64825.2025.11009449.

5

Shiflett, Kyle, Dylan Wright, Avinash Karanth, and Ahmed Louri. "PIXEL: Photonic Neural Network Accelerator." In 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020. http://dx.doi.org/10.1109/hpca47549.2020.00046.

6

Xu, David, A. Barış Özgüler, Giuseppe Di Guglielmo, et al. "Neural network accelerator for quantum control." US DOE, 2023. http://dx.doi.org/10.2172/1959815.

7

Yang, Zunming, Zhanzhuang He, Jing Yang, and Zhong Ma. "An LSTM Acceleration Method Based on Embedded Neural Network Accelerator." In ACAI'21: 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence. ACM, 2021. http://dx.doi.org/10.1145/3508546.3508649.

8

Yi, Qian. "FPGA Implementation of Neural Network Accelerator." In 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). IEEE, 2018. http://dx.doi.org/10.1109/imcec.2018.8469659.

9

Vogt, Michael C. "Neural network-based sensor signal accelerator." In Intelligent Systems and Smart Manufacturing, edited by Peter E. Orban and George K. Knopf. SPIE, 2001. http://dx.doi.org/10.1117/12.417242.

10

Wang, Hong, Xiao Zhang, Dehui Kong, et al. "Convolutional Neural Network Accelerator on FPGA." In 2019 IEEE International Conference on Integrated Circuits, Technologies and Applications (ICTA). IEEE, 2019. http://dx.doi.org/10.1109/icta48799.2019.9012821.


Reports on the topic "Neural network accelerator"

1

Aimone, James, Christopher Bennett, Suma Cardwell, Ryan Dellana, and Tianyao Xiao. Mosaic The Best of Both Worlds: Analog devices with Digital Spiking Communication to build a Hybrid Neural Network Accelerator. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1673175.

2

Meni, Mackenzie, Ryan White, Michael Mayo, and Kevin Pilkiewicz. Entropy-based guidance of deep neural networks for accelerated convergence and improved performance. Engineer Research and Development Center (U.S.), 2025. https://doi.org/10.21079/11681/49805.

Abstract
Neural networks have dramatically increased our capacity to learn from large, high-dimensional datasets across innumerable disciplines. However, their decisions are not easily interpretable, their computational costs are high, and building and training them are not straightforward processes. To add structure to these efforts, we derive new mathematical results to efficiently measure the changes in entropy as fully-connected and convolutional neural networks process data. By measuring the change in entropy as networks process data effectively, patterns critical to a well-performing network can…
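As a rough sketch of the kind of per-layer measurement the report describes (our own minimal construction, not the authors' method): estimate the Shannon entropy of each layer's activations from a histogram and track how it changes as data flows through the network.

```python
import numpy as np

# Histogram-based Shannon entropy of layer activations, tracked through a
# toy fully-connected ReLU network.

def activation_entropy(acts: np.ndarray, bins: int = 32) -> float:
    hist, _ = np.histogram(acts, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 64))    # a batch of inputs
layers = [rng.standard_normal((64, 64)) * 0.1 for _ in range(3)]

for idx, W in enumerate(layers):
    x = np.maximum(x @ W, 0.0)        # ReLU layer
    print(f"layer {idx}: entropy = {activation_entropy(x):.3f} bits")
```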
3

Morgan, Nelson, Jerome Feldman, and John Wawrzynek. Accelerator Systems for Neural Networks, Speech, and Related Applications. Defense Technical Information Center, 1995. http://dx.doi.org/10.21236/ada298954.

4

Garg, Raveesh, Eric Qin, Francisco Martinez, et al. Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1821960.

5

Pasupuleti, Murali Krishna. Quantum-Enhanced Machine Learning: Harnessing Quantum Computing for Next-Generation AI Systems. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv125.

Abstract
Quantum-enhanced machine learning (QML) represents a paradigm shift in artificial intelligence, integrating quantum computing principles to solve complex computational problems more efficiently than classical methods. By leveraging quantum superposition, entanglement, and parallelism, QML has the potential to accelerate deep learning training, optimize combinatorial problems, and enhance feature selection in high-dimensional spaces. This research explores foundational quantum computing concepts relevant to AI, including quantum circuits, variational quantum algorithms, and quantum k…
6

Pasupuleti, Murali Krishna. Quantum Semiconductors for Scalable and Fault-Tolerant Computing. National Education Services, 2025. https://doi.org/10.62311/nesx/rr825.

Abstract
Quantum semiconductors are revolutionizing computing by enabling scalable, fault-tolerant quantum processors that overcome the limitations of classical computing. As quantum technologies advance, superconducting qubits, silicon spin qubits, topological qubits, and hybrid quantum-classical architectures are emerging as key solutions for achieving high-fidelity quantum operations and long-term coherence. This research explores the materials, device engineering, and fabrication challenges associated with quantum semiconductors, focusing on quantum error correction and cryogenic control systems…
7

Wideman, Jr., Robert F., Nicholas B. Anthony, Avigdor Cahaner, Alan Shlosberg, Michel Bellaiche, and William B. Roush. Integrated Approach to Evaluating Inherited Predictors of Resistance to Pulmonary Hypertension Syndrome (Ascites) in Fast Growing Broiler Chickens. United States Department of Agriculture, 2000. http://dx.doi.org/10.32747/2000.7575287.bard.

Abstract
Background: PHS (pulmonary hypertension syndrome, ascites syndrome) is a serious cause of loss in the broiler industry, and is a prime example of an undesirable side effect of successful genetic development that may be deleteriously manifested by factors in the environment of growing broilers. Basically, continuous and pinpointed selection for rapid growth in broilers has led to higher oxygen demand and, consequently, to more frequent manifestation of an inherent cardiopulmonary inability to sufficiently oxygenate the arterial blood. The multifaceted causes and modifiers of PHS make…
8

Deep Learning Damage Identification Method for Steel-Frame Bracing Structures Using Time–Frequency Analysis and Convolutional Neural Networks. The Hong Kong Institute of Steel Construction, 2023. http://dx.doi.org/10.18057/ijasc.2023.19.4.8.

Abstract
Lattice bracing, commonly used in steel construction systems, is vulnerable to damage and failure when subjected to horizontal seismic pressure. Manual examination is the conventional method for identifying damage; however, this approach is time-consuming and typically unable to detect damage at an early stage. Determining the exact location of damage has been problematic for researchers. Nevertheless, it is possible to detect the failure of lateral supports in various parts of a structure using time–frequency analysis and deep learning methods such as convolutional neural networks. Then,…
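As a small illustration of the time-frequency front end such pipelines rely on (a generic sketch under our own assumptions, not the paper's code): a 1-D vibration signal is converted into a spectrogram image that a CNN classifier can then consume.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                                    # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Toy signal: a 50 Hz component plus a short burst mimicking a local anomaly.
signal = np.sin(2 * np.pi * 50 * t)
signal[800:900] += 2.0 * np.sin(2 * np.pi * 200 * t[800:900])

f, frames, Sxx = spectrogram(signal, fs=fs, nperseg=128, noverlap=64)
log_spec = np.log1p(Sxx)                       # compress dynamic range
print(log_spec.shape)                          # (freq_bins, time_frames) "image"
```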