A selection of scholarly literature on the topic "Sparse deep neural networks"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Sparse deep neural networks".
Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its online abstract, if these are available in the metadata.
Journal articles on the topic "Sparse deep neural networks"
Scardapane, Simone, Danilo Comminiello, Amir Hussain, and Aurelio Uncini. "Group sparse regularization for deep neural networks." Neurocomputing 241 (June 2017): 81–89. http://dx.doi.org/10.1016/j.neucom.2017.02.029.
Zang, Ke, Wenqi Wu, and Wei Luo. "Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks." Sensors 21, no. 19 (September 25, 2021): 6410. http://dx.doi.org/10.3390/s21196410.
Wu, Kailun, Yiwen Guo, and Changshui Zhang. "Compressing Deep Neural Networks With Sparse Matrix Factorization." IEEE Transactions on Neural Networks and Learning Systems 31, no. 10 (October 2020): 3828–38. http://dx.doi.org/10.1109/tnnls.2019.2946636.
Gangopadhyay, Briti, Pallab Dasgupta, and Soumyajit Dey. "Safety Aware Neural Pruning for Deep Reinforcement Learning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16212–13. http://dx.doi.org/10.1609/aaai.v37i13.26966.
Petschenig, Horst, and Robert Legenstein. "Quantized rewiring: hardware-aware training of sparse deep neural networks." Neuromorphic Computing and Engineering 3, no. 2 (May 26, 2023): 024006. http://dx.doi.org/10.1088/2634-4386/accd8f.
Belay, Kaleab. "Gradient and Mangitude Based Pruning for Sparse Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13126–27. http://dx.doi.org/10.1609/aaai.v36i11.21699.
Kaur, Mandeep, and Pradip Kumar Yadava. "A Review on Classification of Images with Convolutional Neural Networks." International Journal for Research in Applied Science and Engineering Technology 11, no. 7 (July 31, 2023): 658–63. http://dx.doi.org/10.22214/ijraset.2023.54704.
Bi, Jia, and Steve R. Gunn. "Sparse Deep Neural Network Optimization for Embedded Intelligence." International Journal on Artificial Intelligence Tools 29, no. 03n04 (June 2020): 2060002. http://dx.doi.org/10.1142/s0218213020600027.
Gallicchio, Claudio, and Alessio Micheli. "Fast and Deep Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3898–905. http://dx.doi.org/10.1609/aaai.v34i04.5803.
Tartaglione, Enzo, Andrea Bragagnolo, Attilio Fiandrotti, and Marco Grangetto. "LOss-Based SensiTivity rEgulaRization: Towards deep sparse neural networks." Neural Networks 146 (February 2022): 230–37. http://dx.doi.org/10.1016/j.neunet.2021.11.029.
Повний текст джерелаДисертації з теми "Sparse deep neural networks"
Tavanaei, Amirhossein. "Spiking Neural Networks and Sparse Deep Learning." Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10807940.
This dissertation proposes new methods for training multi-layer and deep spiking neural networks (SNNs), specifically spiking convolutional neural networks (CNNs). Training a multi-layer spiking network is difficult because output spikes are not differentiable, so the backpropagation method commonly used for non-spiking networks cannot be applied directly. Our methods use novel versions of the brain-like, local learning rule named spike-timing-dependent plasticity (STDP) that incorporates supervised and unsupervised components. Our method starts with conventional learning methods and converts them to spatio-temporally local rules suited for SNNs.
The training uses two components: unsupervised feature extraction and supervised classification. The first component consists of new STDP rules for spike-based representation learning that train convolutional filters and initial representations. The second introduces new STDP-based supervised learning rules for spike-pattern classification via an approximation to gradient descent that combines STDP and anti-STDP rules. Specifically, the STDP-based supervised learning model approximates gradient descent using temporally local STDP rules. Stacking these components yields a novel sparse, spiking deep learning model. Our spiking deep learning model is a variation of spiking CNNs of integrate-and-fire (IF) neurons, with performance comparable to state-of-the-art deep SNNs. The experimental results show the success of the proposed model for image classification. Our network architecture is the only spiking CNN to provide bio-inspired STDP rules in a hierarchy of feature extraction and classification within an entirely spike-based framework.
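To make the learning rule concrete: below is a minimal sketch of a pair-based STDP update with an anti-STDP variant, assuming a standard exponential timing window. The constants and the function itself are illustrative assumptions; the dissertation's exact rules and supervision scheme differ.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, anti=False):
    """Pair-based STDP: dt = t_post - t_pre in ms; returns the updated weight."""
    if dt >= 0:                        # pre fires before post -> potentiation
        dw = a_plus * np.exp(-dt / tau)
    else:                              # post fires before pre -> depression
        dw = -a_minus * np.exp(dt / tau)
    if anti:                           # anti-STDP flips the sign, which a
        dw = -dw                       # teacher can use to punish wrong outputs
    return float(np.clip(w + dw, 0.0, 1.0))  # keep the weight bounded

w = 0.5
w = stdp_update(w, dt=5.0)             # unsupervised potentiation
w = stdp_update(w, dt=5.0, anti=True)  # supervised correction of a wrong spike
```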
Le, Quoc Tung. "Algorithmic and theoretical aspects of sparse deep neural networks." Electronic Thesis or Diss., Lyon, École normale supérieure, 2023. http://www.theses.fr/2023ENSL0105.
Sparse deep neural networks offer a compelling practical opportunity to reduce the cost of training, inference, and storage, all of which are growing exponentially in the state of the art of deep learning. In this thesis, we introduce an approach to studying sparse deep neural networks through the lens of a related problem: sparse matrix factorization, i.e., approximating a (dense) matrix by a product of (multiple) sparse factors. In particular, we identify and investigate in detail some theoretical and algorithmic aspects of a variant of sparse matrix factorization named fixed support matrix factorization (FSMF), in which the sets of non-zero entries of the sparse factors are known. Several fundamental questions about sparse deep neural networks, such as the existence of optimal solutions to the training problem or topological properties of its function space, can be addressed using the results on FSMF. In addition, by applying these results, we also study the butterfly parametrization, an approach that replaces (large) weight matrices with products of extremely sparse and structured ones in sparse deep neural networks.
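To make the FSMF setting concrete, here is a minimal sketch that fits two sparse factors with fixed supports to a dense matrix by projected gradient descent. The dimensions, supports, and step size are illustrative assumptions; the thesis analyzes the problem itself (existence of optima, landscape properties) rather than prescribing this particular solver.

```python
import numpy as np

# Fixed support matrix factorization sketch: approximate a dense matrix A by
# X @ Y, where the zero patterns of X and Y are fixed in advance by binary
# masks. Each gradient step is projected back onto the fixed supports.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16))
mask_x = rng.random((16, 8)) < 0.25   # supports chosen at random here;
mask_y = rng.random((8, 16)) < 0.25   # butterfly supports would be structured

X = rng.standard_normal(mask_x.shape) * mask_x
Y = rng.standard_normal(mask_y.shape) * mask_y

lr = 0.01
for _ in range(2000):
    R = X @ Y - A                     # residual of the current approximation
    X -= lr * (R @ Y.T) * mask_x      # gradient step projected onto support
    Y -= lr * (X.T @ R) * mask_y
print("relative error:", np.linalg.norm(X @ Y - A) / np.linalg.norm(A))
```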
Hoori, Ammar O. "Multi-Column Neural Networks and Sparse Coding: Novel Techniques in Machine Learning." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5743.
Vekhande, Swapnil Sudhir. "Deep Learning Neural Network-based Sinogram Interpolation for Sparse-View CT Reconstruction." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90182.
Master of Science
Computed tomography (CT) is a widely used imaging technique thanks to its remarkable ability to visualize internal organs, bones, soft tissues, and blood vessels. It involves exposing the subject to X-ray radiation, which carries a cancer risk; at the same time, the radiation dose is critical for image quality and subsequent diagnosis. Image reconstruction from only a small number of projections is therefore an open research problem. Deep learning techniques have already revolutionized various computer vision applications. Here, we use a method that fills in the missing data of highly sparse CT sinograms. The results show that the deep learning-based method outperforms standard linear interpolation-based methods while improving image quality.
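As a point of reference, the linear-interpolation baseline mentioned above can be sketched as follows: missing projection angles of a sparse-view sinogram are filled by interpolating each detector column along the angular axis. The function name and array shapes are illustrative; the thesis replaces this step with a learned deep-network interpolator.

```python
import numpy as np

def interpolate_sinogram(sparse_sino, measured_angles, all_angles):
    """sparse_sino: (num_measured, num_detectors); returns the dense sinogram."""
    num_det = sparse_sino.shape[1]
    dense = np.empty((len(all_angles), num_det))
    for d in range(num_det):          # interpolate each detector column
        dense[:, d] = np.interp(all_angles, measured_angles, sparse_sino[:, d])
    return dense

angles = np.linspace(0, np.pi, 180, endpoint=False)
measured = angles[::4]                 # keep every 4th view (sparse-view CT)
sino = np.random.rand(len(measured), 64)
full = interpolate_sinogram(sino, measured, angles)
```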
Carvalho, Micael. "Deep representation spaces." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS292.
In recent years, deep learning techniques have swept the state of the art in many machine learning applications, becoming the new standard approach. The architectures arising from these techniques have been used for transfer learning, which extends the power of deep models to tasks that lack enough data to train them from scratch. This thesis studies the representation spaces created by deep architectures. First, we study their inherent properties, with particular interest in the dimensionality, redundancy, and precision of their features. Our findings reveal a strong degree of robustness, pointing the way to simple and powerful compression schemes. Then, we focus on refining these representations. We adopt a cross-modal multi-task problem and design a loss function capable of taking advantage of data coming from multiple modalities, while also taking into account different tasks associated with the same dataset. To correctly balance these losses, we also develop a new sampling scheme that only takes into account examples contributing to the learning phase, i.e., those having a positive loss. Finally, we test our approach on a large-scale dataset of cooking recipes and associated pictures. Our method achieves a 5-fold improvement over the state of the art, and we show that the multi-task aspect of our approach promotes a semantically meaningful organization of the representation space, allowing it to perform subtasks never seen during training, such as ingredient exclusion and selection. The results we present in this thesis open many possibilities, including feature compression for remote applications, robust multi-modal and multi-task learning, and feature-space refinement. For the cooking application in particular, many of our findings are directly applicable in a real-world context, especially for detecting allergens, finding alternative recipes under dietary restrictions, and planning menus.
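The "positive losses only" sampling idea admits a short sketch: with a margin-based, triplet-style loss, examples that already satisfy the margin contribute zero loss, so averaging over the violating examples alone keeps the gradient signal from being diluted. This is a generic illustration under that assumption, not the thesis's exact cross-modal loss; the tensor names are hypothetical.

```python
import torch

def positive_only_triplet_loss(anchor, pos, neg, margin=0.3):
    d_pos = (anchor - pos).pow(2).sum(dim=1)     # squared anchor-positive dist
    d_neg = (anchor - neg).pow(2).sum(dim=1)     # squared anchor-negative dist
    losses = torch.clamp(d_pos - d_neg + margin, min=0.0)
    active = losses > 0                          # examples still violating
    if active.any():
        return losses[active].mean()             # average over active only
    return losses.sum() * 0.0                    # keeps the autograd graph

a = torch.randn(32, 128, requires_grad=True)     # anchor embeddings
p, n = torch.randn(32, 128), torch.randn(32, 128)
loss = positive_only_triplet_loss(a, p, n)
loss.backward()  # gradients flow only through the active examples
```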
Pawlowski, Filip Igor. "High-performance dense tensor and sparse matrix kernels for machine learning." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN081.
In this thesis, we develop high-performance algorithms for certain computations involving dense tensors and sparse matrices. We address kernel operations that are useful for machine learning tasks, such as inference with deep neural networks (DNNs). We develop data structures and techniques to reduce memory use and to improve the data locality, and hence the cache reuse, of the kernel operations. We design both sequential and shared-memory parallel algorithms. In the first part of the thesis, we focus on dense tensor kernels: tensor–vector multiplication (TVM), tensor–matrix multiplication (TMM), and tensor–tensor multiplication (TTM). Among these, TVM is the most bandwidth-bound and constitutes a building block for many algorithms. We focus on this operation and develop a data structure together with sequential and parallel algorithms for it. We propose a novel data structure that stores the tensor as blocks ordered along the space-filling curve known as the Morton curve (or Z-curve). The key idea is to divide the tensor into blocks small enough to fit in cache and to store them in Morton order, while keeping a simple multi-dimensional order on the individual elements within them. Thus, high-performance BLAS routines can be used as microkernels for each block. We evaluate our techniques in a set of experiments. The results not only demonstrate up to 18% higher performance than state-of-the-art variants, but also show that the proposed approach induces 71% less sample standard deviation for the TVM across the d possible modes. We also show that our data structure naturally extends to other tensor kernels, yielding up to 38% higher performance for the higher-order power method. Finally, we investigate shared-memory parallel TVM algorithms that use the proposed data structure. Several alternative parallel algorithms are characterized theoretically and implemented using OpenMP for experimental comparison. Our results on systems with up to 8 sockets show near-peak performance for the proposed algorithm on 2-, 3-, 4-, and 5-dimensional tensors. In the second part of the thesis, we explore sparse computations in neural networks, focusing on the high-performance sparse deep inference problem. Sparse DNN inference is the task of using a sparse DNN to classify a batch of data elements forming, in our case, a sparse feature matrix. The performance of sparse inference hinges on efficient parallelization of the sparse matrix–sparse matrix multiplication (SpGEMM) repeated for each layer of the inference function. We first characterize efficient sequential SpGEMM algorithms for our use case. We then introduce model-parallel inference, which uses a two-dimensional partitioning of the weight matrices obtained with hypergraph partitioning software. The model-parallel variant uses barriers to synchronize at layers. Finally, we introduce tiling model-parallel and tiling hybrid algorithms, which increase cache reuse between layers and use a weak synchronization module to hide load imbalance and synchronization costs. We evaluate our techniques on the large network data from the IEEE HPEC 2019 Graph Challenge on shared-memory systems and report up to 2x speed-up over the baseline.
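Per layer, the sparse deep inference task described in the second part reduces to an SpGEMM followed by a bias and a ReLU. The sketch below uses SciPy sparse matrices; the shapes, densities, and bias value are illustrative rather than the Graph Challenge settings, and the thesis's contribution concerns how this loop is partitioned and parallelized.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)
features = sparse_random(1024, 256, density=0.05, format="csr", random_state=rng)
layers = [sparse_random(256, 256, density=0.02, format="csr", random_state=rng)
          for _ in range(4)]
bias = -0.1

for W in layers:
    z = features @ W                  # SpGEMM: the dominant cost per layer
    z.data += bias                    # bias applied to stored entries only;
    z.data = np.maximum(z.data, 0.0)  # with a negative bias, absent entries
    z.eliminate_zeros()               # stay zero after the ReLU anyway
    features = z

print("nonzeros after inference:", features.nnz)
```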
Thom, Markus. "Sparse neural networks." Ulm: Universität Ulm, Fakultät für Ingenieurwissenschaften und Informatik, 2015. http://d-nb.info/1067496319/34.
Liu, Qian. "Deep spiking neural networks." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.
Squadrani, Lorenzo. "Deep neural networks and thermodynamics." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Mancevo del Castillo Ayala, Diego. "Compressing Deep Convolutional Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217316.
Books on the topic "Sparse deep neural networks"
Renzetti, N. A., and Jet Propulsion Laboratory (U.S.), eds. The Deep Space Network as an instrument for radio science research: Power system stability applications of artificial neural networks. Pasadena, Calif.: National Aeronautics and Space Administration, Jet Propulsion Laboratory, California Institute of Technology, 1993.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29642-0.
Moolayil, Jojo. Learn Keras for Deep Neural Networks. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4240-7.
Caterini, Anthony L., and Dong Eui Chang. Deep Neural Networks in a Mathematical Framework. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1.
Razaghi, Hooshmand Shokri. Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis. [New York, N.Y.?]: [publisher not identified], 2020.
Fingscheidt, Tim, Hanno Gottschalk, and Sebastian Houben, eds. Deep Neural Networks and Data for Automated Driving. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4.
Modrzyk, Nicolas. Real-Time IoT Imaging with Deep Neural Networks. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5722-7.
Iba, Hitoshi. Evolutionary Approach to Machine Learning and Deep Neural Networks. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0200-8.
Lu, Le, Yefeng Zheng, Gustavo Carneiro, and Lin Yang, eds. Deep Learning and Convolutional Neural Networks for Medical Image Computing. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1.
Book chapters on the topic "Sparse deep neural networks"
Moons, Bert, Daniel Bankman, and Marian Verhelst. "ENVISION: Energy-Scalable Sparse Convolutional Neural Network Processing." In Embedded Deep Learning, 115–51. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99223-5_5.
Wang, Xin, Zhiqiang Hou, Wangsheng Yu, and Zefenfen Jin. "Online Fast Deep Learning Tracker Based on Deep Sparse Neural Networks." In Lecture Notes in Computer Science, 186–98. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71607-7_17.
Huang, Zehao, and Naiyan Wang. "Data-Driven Sparse Structure Selection for Deep Neural Networks." In Computer Vision – ECCV 2018, 317–34. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01270-0_19.
Fakhfakh, Mohamed, Bassem Bouaziz, Lotfi Chaari, and Faiez Gargouri. "Efficient Bayesian Learning of Sparse Deep Artificial Neural Networks." In Lecture Notes in Computer Science, 78–88. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01333-1_7.
Dey, Sourya, Yinan Shao, Keith M. Chugg, and Peter A. Beerel. "Accelerating Training of Deep Neural Networks via Sparse Edge Processing." In Artificial Neural Networks and Machine Learning – ICANN 2017, 273–80. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68600-4_32.
Huang, Junzhou, and Zheng Xu. "Cell Detection with Deep Learning Accelerated by Sparse Kernel." In Deep Learning and Convolutional Neural Networks for Medical Image Computing, 137–57. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1_9.
Matsumoto, Wataru, Manabu Hagiwara, Petros T. Boufounos, Kunihiko Fukushima, Toshisada Mariyama, and Zhao Xiongxin. "A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices." In Neural Information Processing, 397–404. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46681-1_48.
Xu, Ting, Bo Zhang, Baoju Zhang, Taekon Kim, and Yi Wang. "Sparse Deep Neural Network Based Directional Modulation Design." In Lecture Notes in Electrical Engineering, 503–11. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-7545-7_51.
Dai, Qionghai, and Yue Gao. "Neural Networks on Hypergraph." In Artificial Intelligence: Foundations, Theory, and Algorithms, 121–43. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_7.
Marinò, Giosuè Cataldo, Gregorio Ghidoli, Marco Frasca, and Dario Malchiodi. "Reproducing the Sparse Huffman Address Map Compression for Deep Neural Networks." In Reproducible Research in Pattern Recognition, 161–66. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76423-4_12.
Conference papers on the topic "Sparse deep neural networks"
Keyvanrad, Mohammad Ali, and Mohammad Mehdi Homayounpour. "Normal sparse Deep Belief Network." In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280688.
Huang, Sitao, Carl Pearson, Rakesh Nagi, Jinjun Xiong, Deming Chen, and Wen-mei Hwu. "Accelerating Sparse Deep Neural Networks on FPGAs." In 2019 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2019. http://dx.doi.org/10.1109/hpec.2019.8916419.
Obmann, Daniel, Johannes Schwab, and Markus Haltmeier. "Sparse synthesis regularization with deep neural networks." In 2019 13th International conference on Sampling Theory and Applications (SampTA). IEEE, 2019. http://dx.doi.org/10.1109/sampta45681.2019.9030953.
Wen, Weijing, Fan Yang, Yangfeng Su, Dian Zhou, and Xuan Zeng. "Learning Sparse Patterns in Deep Neural Networks." In 2019 IEEE 13th International Conference on ASIC (ASICON). IEEE, 2019. http://dx.doi.org/10.1109/asicon47005.2019.8983429.
Bi, Jia, and Steve R. Gunn. "Sparse Deep Neural Networks for Embedded Intelligence." In 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2018. http://dx.doi.org/10.1109/ictai.2018.00016.
Jing, How, and Yu Tsao. "Sparse maximum entropy deep belief nets." In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6706749.
Xu, Lie, Chiu-Sing Choy, and Yi-Wen Li. "Deep sparse rectifier neural networks for speech denoising." In 2016 IEEE International Workshop on Acoustic Signal Enhancement (IWAENC). IEEE, 2016. http://dx.doi.org/10.1109/iwaenc.2016.7602891.
Toth, Laszlo. "Phone recognition with deep sparse rectifier neural networks." In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6639016.
Pironkov, Gueorgui, Stephane Dupont, and Thierry Dutoit. "Investigating sparse deep neural networks for speech recognition." In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE, 2015. http://dx.doi.org/10.1109/asru.2015.7404784.
Mitsuno, Kakeru, Junichi Miyao, and Takio Kurita. "Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207531.
Organizational reports on the topic "Sparse deep neural networks"
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.
Koh, Christopher Fu-Chai, and Sergey Igorevich Magedov. Bond Order Prediction Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557202.
Shevitski, Brian, Yijing Watkins, Nicole Man, and Michael Girard. Digital Signal Processing Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), April 2023. http://dx.doi.org/10.2172/1984848.
Landon, Nicholas. A survey of repair strategies for deep neural networks. Ames (Iowa): Iowa State University, August 2022. http://dx.doi.org/10.31274/cc-20240624-93.
Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.
Armstrong, Derek Elswick, and Joseph Gabriel Gorka. Using Deep Neural Networks to Extract Fireball Parameters from Infrared Spectral Data. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1623398.
Thulasidasan, Sunil, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah E. Michalak. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2019. http://dx.doi.org/10.2172/1525811.
Ellis, John, Attila Cangi, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1677521.
Ellis, Austin, Lenz Fielder, Gabriel Popoola, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-Temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2021. http://dx.doi.org/10.2172/1817970.
Chronopoulos, Ilias, Katerina Chrysikou, George Kapetanios, James Mitchell, and Aristeidis Raftapostolos. Deep Neural Network Estimation in Panel Data Models. Federal Reserve Bank of Cleveland, July 2023. http://dx.doi.org/10.26509/frbc-wp-202315.