A selection of scholarly literature on the topic "Recurrent neural networks BLSTM"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Recurrent neural networks BLSTM".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, if these are available in the metadata.

Journal articles on the topic "Recurrent neural networks BLSTM"

1

Guo, Yanbu, Bingyi Wang, Weihua Li, and Bei Yang. "Protein secondary structure prediction improved by recurrent neural networks integrated with two-dimensional convolutional neural networks." Journal of Bioinformatics and Computational Biology 16, no. 05 (2018): 1850021. http://dx.doi.org/10.1142/s021972001850021x.

Abstract:
Protein secondary structure prediction (PSSP) is an important research field in bioinformatics. The representation of protein sequence features could be treated as a matrix, which includes the amino-acid residue (time-step) dimension and the feature vector dimension. Common approaches to predict secondary structures only focus on the amino-acid residue dimension. However, the feature vector dimension may also contain useful information for PSSP. To integrate the information on both dimensions of the matrix, we propose a hybrid deep learning framework, two-dimensional convolutional bidirectiona
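Many of the works in this list share one building block: a bidirectional LSTM (BLSTM) that reads the input sequence both forwards and backwards and pairs the two hidden states at each position, so every output sees both past and future context. As a rough illustration only (scalar states and a made-up shared weight table `W`, not any paper's actual model), the mechanics can be sketched in Python:

```python
import math

def lstm_step(x, h, c, W):
    """One LSTM step with scalar state, for illustration.
    W maps gate name -> (input weight, recurrent weight, bias)."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    i = sigmoid(W["i"][0] * x + W["i"][1] * h + W["i"][2])    # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h + W["f"][2])    # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h + W["o"][2])    # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h + W["g"][2])  # candidate cell
    c = f * c + i * g
    return o * math.tanh(c), c

def blstm(xs, W):
    """Run the cell in both directions; pair the hidden states per position."""
    def run(seq):
        h = c = 0.0
        out = []
        for x in seq:
            h, c = lstm_step(x, h, c, W)
            out.append(h)
        return out
    fwd = run(xs)
    bwd = list(reversed(run(list(reversed(xs)))))
    return list(zip(fwd, bwd))
```

Real models use vector states and separate forward/backward weights; deep-learning frameworks expose this directly (e.g. a `bidirectional` option on their LSTM layers).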
2

Zhong, Cheng, Zhonglian Jiang, Xiumin Chu, and Lei Liu. "Inland Ship Trajectory Restoration by Recurrent Neural Network." Journal of Navigation 72, no. 06 (2019): 1359–77. http://dx.doi.org/10.1017/s0373463319000316.

Abstract:
The quality of Automatic Identification System (AIS) data is of fundamental importance for maritime situational awareness and navigation risk assessment. To improve operational efficiency, a deep learning method based on Bi-directional Long Short-Term Memory Recurrent Neural Networks (BLSTM-RNNs) is proposed and applied in AIS trajectory data restoration. Case studies have been conducted in two distinct reaches of the Yangtze River and the capability of the proposed method has been evaluated. Comparisons have been made between the BLSTM-RNNs-based method and the linear method and classic Artif
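For context, the linear baseline that the BLSTM-based restoration above is compared against amounts to interpolating a missing position between its temporal neighbours. A hedged sketch (hypothetical function name and plain 2-D positions, not the paper's code):

```python
def linear_restore(known, t):
    """Restore a missing trajectory fix at time t by linear interpolation
    between the nearest known fixes before and after it.
    known: chronologically sorted list of (timestamp, (lat, lon))."""
    for (t0, p0), (t1, p1) in zip(known, known[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)  # fraction of the way from t0 to t1
            return (p0[0] + w * (p1[0] - p0[0]),
                    p0[1] + w * (p1[1] - p0[1]))
    raise ValueError("t lies outside the known track")
```

A learned model is expected to help precisely where vessel motion deviates from this straight-line assumption.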
3

Reddy, K. Jeevan. "Text To Speech Synthesis with Bidirectional LSTM based Recurrent Neural Networks." International Journal for Research in Applied Science and Engineering Technology 13, no. 7 (2025): 270–76. https://doi.org/10.22214/ijraset.2025.72981.

Abstract:
According to recent studies, feed-forward deep neural networks (DNNs) perform better than text-to-speech (TTS) systems that use decision-tree-clustered context-dependent hidden Markov models (HMMs). The feed-forward nature of DNN-based models makes it difficult to incorporate long-span contextual influence into spoken utterances. Another typical strategy in HMM-based TTS for establishing a continuous speech trajectory is using the dynamic characteristics to constrain the production of speech parameters. Parametric text-to-speech synthesis is used in this study by capturing the co-occurrence
4

Banger, Ankush, Mansi Singh, Kirnesh Sharma, Satvik Singla, and Shikha Rastogi. "Hindi Language Recognition System Using Neural Networks." International Journal of Engineering Sciences & Research Technology 7, no. 5 (2018): 98–103. https://doi.org/10.5281/zenodo.1241407.

Abstract:
In this paper, we propose a recognition scheme for the Indian script of Hindi. Recognition accuracy for Hindi script is not yet comparable to that of its Roman counterparts, mainly due to the complexity of the script, writing style, etc. Our solution uses a Recurrent Neural Network known as Bidirectional Long Short-Term Memory (BLSTM). Our approach does not require word-to-character segmentation, which is one of the most common reasons for a high word error rate. We report a reduction of more than 20% in word error rate and over 9% reduction in character error rate when compared with the best ava
5

Kadari, Rekia, Yu Zhang, Weinan Zhang, and Ting Liu. "CCG supertagging with bidirectional long short-term memory networks." Natural Language Engineering 24, no. 1 (2017): 77–90. http://dx.doi.org/10.1017/s1351324917000250.

Abstract:
Neural network-based approaches have recently produced good performance on natural language tasks such as supertagging. In the supertagging task, a supertag (lexical category) is assigned to each word in an input sequence. Combinatory Categorial Grammar supertagging is a more challenging problem than other sequence-tagging problems, such as part-of-speech (POS) tagging and named entity recognition, due to the large number of lexical categories. Specifically, the simple Recurrent Neural Network (RNN) has been shown to significantly outperform the previous state-of-the-art feed-forward neu
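At prediction time, a supertagger of the kind described above assigns each token the highest-scoring lexical category given that token's contextual features (for example, concatenated BLSTM hidden states). A minimal sketch with made-up feature and weight vectors standing in for learned ones:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def tag_sequence(features, tag_weights):
    """features: one feature vector per token (e.g. BLSTM states).
    tag_weights: one weight vector per supertag; score = dot product.
    Returns the index of the best supertag for each token."""
    tags = []
    for f in features:
        scores = [sum(w_j * f_j for w_j, f_j in zip(w, f)) for w in tag_weights]
        probs = softmax(scores)
        tags.append(max(range(len(probs)), key=probs.__getitem__))
    return tags

# Two tokens, two supertags: the first token matches tag 0, the second tag 1.
# tag_sequence([[2.0, 1.0], [0.0, 3.0]], [[1.0, 0.0], [0.0, 1.0]]) == [0, 1]
```

What makes CCG supertagging hard is the size of the tag set (hundreds of categories rather than a few dozen POS tags); the scoring loop itself is unchanged.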
6

Shchetinin, E. Yu. "EMOTIONS RECOGNITION IN HUMAN SPEECH USING DEEP NEURAL NETWORKS." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 199 (January 2021): 44–51. http://dx.doi.org/10.14489/vkit.2021.01.pp.044-051.

Abstract:
The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and recognition of emotions in speech (RER) is the most in-demand part of them. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computer studies of the RAVDESS database containing emotional human speech are carried out. RAVDESS is a data set containing 7356 files. Entries contain the following emotion
7

Dutta, Aparajita, Kusum Kumari Singh, and Ashish Anand. "SpliceViNCI: Visualizing the splicing of non-canonical introns through recurrent neural networks." Journal of Bioinformatics and Computational Biology 19, no. 04 (2021): 2150014. http://dx.doi.org/10.1142/s0219720021500141.

Abstract:
Most of the current computational models for splice junction prediction are based on the identification of canonical splice junctions. However, it is observed that the junctions lacking the consensus dimers GT and AG also undergo splicing. Identification of such splice junctions, called the non-canonical splice junctions, is also essential for a comprehensive understanding of the splicing phenomenon. This work focuses on the identification of non-canonical splice junctions through the application of a bidirectional long short-term memory (BLSTM) network. Furthermore, we apply a back-propagatio
8

Zhang, Ansi, Honglei Wang, Shaobo Li, et al. "Transfer Learning with Deep Recurrent Neural Networks for Remaining Useful Life Estimation." Applied Sciences 8, no. 12 (2018): 2416. http://dx.doi.org/10.3390/app8122416.

Abstract:
Prognostics, such as remaining useful life (RUL) prediction, is a crucial task in condition-based maintenance. A major challenge in data-driven prognostics is the difficulty of obtaining a sufficient number of samples of failure progression. However, for traditional machine learning methods and deep neural networks, enough training data is a prerequisite to train good prediction models. In this work, we proposed a transfer learning algorithm based on Bi-directional Long Short-Term Memory (BLSTM) recurrent neural networks for RUL estimation, in which the models can be first trained on different
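The transfer-learning recipe sketched in the abstract above (pretrain on one degradation dataset, then adapt to another) often comes down to freezing the pretrained feature extractor and refitting only a small output head on the scarce target data. A toy sketch under that assumption, with scalar features standing in for frozen BLSTM outputs and plain gradient descent on squared error (not the paper's algorithm):

```python
def fine_tune_head(features, targets, lr=0.1, epochs=200):
    """Fit only a linear regression head y = w*x + b on target-domain data.
    `features` are assumed to come from a frozen, pretrained extractor."""
    w, b = 0.0, 0.0
    n = len(features)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(features, targets):
            err = (w * x + b) - y
            gw += 2 * err * x / n  # d(mean squared error)/dw
            gb += 2 * err / n      # d(mean squared error)/db
        w -= lr * gw
        b -= lr * gb
    return w, b
```

Because only two parameters are trained, even a handful of target-domain run-to-failure samples can suffice, which is the point of the transfer setup.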
9

Janardhanan, Jitha, and S. Umamaheswari. "Exploration of Deep Learning Models for Video Based Multiple Human Activity Recognition." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 8s (2023): 422–28. http://dx.doi.org/10.17762/ijritcc.v11i8s.7222.

Abstract:
Human Activity Recognition (HAR) with deep learning is a challenging and highly demanding classification task. The complexity of activity detection and the number of subjects are the main issues. Data mining approaches have improved decision-making performance. This work presents one such model for human activity recognition for multiple subjects carrying out multiple activities. Working with real-time datasets, the work develops a rapid algorithm that minimizes the problems of neural network classifiers, performs optimal feature extraction, and develops a multi-modal classification technique and
10

Rathika, M., P. Sivakumar, K. Ramash Kumar, and Ilhan Garip. "Cooperative Communications Based on Deep Learning Using a Recurrent Neural Network in Wireless Communication Networks." Mathematical Problems in Engineering 2022 (December 21, 2022): 1–12. http://dx.doi.org/10.1155/2022/1864290.

Abstract:
In recent years, cooperative communication (CC) technology has emerged as a hotspot for testing wireless communication networks (WCNs), and it will play an important role in the spectrum utilization of future wireless communication systems. Instead of running node transmissions at full capacity, this design distributes the available paths across multiple relay nodes to increase overall throughput. Modeling WCN coordination processes as a recurrent mechanism and recommending a deep learning-based transfer choice, the authors propose a recurrent neural network (RNN) process-based relay selection i
More sources

Dissertations and theses on the topic "Recurrent neural networks BLSTM"

1

Etienne, Caroline. "Apprentissage profond appliqué à la reconnaissance des émotions dans la voix." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS517.

Abstract:
My thesis work concerns the use of new artificial intelligence technologies applied to the problem of automatically classifying audio sequences according to the emotional state of the customer during a conversation with a call-center agent. In 2016, the idea was to depart from the data preprocessing and machine learning models already in use in the laboratory, and to propose a model that performs as well as possible on the IEMOCAP audio database. We build on existing work on neural network models
2

Morillot, Olivier. "Reconnaissance de textes manuscrits par modèles de Markov cachés et réseaux de neurones récurrents : application à l'écriture latine et arabe." Electronic Thesis or Diss., Paris, ENST, 2014. http://www.theses.fr/2014ENST0002.

Abstract:
Handwriting recognition is an essential component of document analysis. A current trend in this field is to move from the recognition of isolated words to the recognition of word sequences. Our work therefore proposes a system for recognizing lines of text without explicit segmentation of the line into words. To build a high-performing model, we intervene at several levels of the recognition system. First of all, we introduce two original preprocessing methods: a cleaning of text-line images and a local correction
3

Żbikowski, Rafal Waclaw. "Recurrent neural networks: some control aspects." Connect to electronic version, 1994. http://hdl.handle.net/1905/180.

4

Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering." Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

Abstract:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrödinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorpor
5

Zbikowski, Rafal Waclaw. "Recurrent neural networks : some control aspects." Thesis, University of Glasgow, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390233.

6

Jacobsson, Henrik. "Rule extraction from recurrent neural networks." Thesis, University of Sheffield, 2006. http://etheses.whiterose.ac.uk/6081/.

7

Bonato, Tommaso. "Time Series Predictions With Recurrent Neural Networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Abstract:
The main goal of this thesis is to study how machine learning algorithms, and in particular LSTM (Long Short-Term Memory) neural networks, can be used to predict the future values of a regular time series such as, for example, the sine and cosine functions. A time series is defined as a sequence of observations s_t ordered in time. We will also try to apply the same principles to predict the values of a time series produced from the sales data of a cosmetic product over a period of three years.
8

Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.

Abstract:
Deep Learning algorithms have been remarkably successful in applications such as Automatic Speech Recognition and Machine Translation. Thus, these kinds of applications are ubiquitous in our lives and are found in a plethora of devices. These algorithms are composed of Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which have a large number of parameters and require a large amount of computations. Hence, the evaluation of DNNs is challenging due to their large memory and power requirements. RNNs are employed to solve sequence to sequ
9

Brax, Christoffer. "Recurrent neural networks for time-series prediction." Thesis, University of Skövde, Department of Computer Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-480.

Abstract:
Recurrent neural networks have been used for time-series prediction with good results. In this dissertation recurrent neural networks are compared with time-delayed feed-forward networks, feed-forward networks and linear regression models on a prediction task. The data used in all experiments is real-world sales data containing two kinds of segments: campaign segments and non-campaign segments. The task is to make predictions of sales under campaigns. It is evaluated whether more accurate predictions can be made when only using the campaign segments of the data. Throughout the entire proje
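A recurring preliminary step in time-series prediction work like the above is framing the series as supervised (window, next value) pairs before any network sees it. A small generic sketch (the function name and window width are illustrative, not from the thesis):

```python
def make_windows(series, width):
    """Frame a univariate series as supervised pairs: `width` past values
    predict the next one -- the usual setup for (recurrent) sequence models."""
    xs, ys = [], []
    for i in range(len(series) - width):
        xs.append(series[i:i + width])  # input window
        ys.append(series[i + width])    # value to predict
    return xs, ys
```

For example, `make_windows([1, 2, 3, 4, 5], 2)` yields the inputs `[[1, 2], [2, 3], [3, 4]]` with targets `[3, 4, 5]`; segment-aware variants (e.g. campaign vs non-campaign) would filter the windows before training.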
10

Ljungehed, Jesper. "Predicting Customer Churn Using Recurrent Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210670.

Abstract:
Churn prediction is used to identify customers that are becoming less loyal and is an important tool for companies that want to stay competitive in a rapidly growing market. In retail, a dynamic definition of churn is needed to identify churners correctly. Customer Lifetime Value (CLV) is the monetary value of a customer relationship. No change in CLV for a given customer indicates a decrease in loyalty. This thesis proposes a novel approach to churn prediction. The proposed model uses a Recurrent Neural Network to identify churners based on Customer Lifetime Value time series regression. The
More sources

Books on the topic "Recurrent neural networks BLSTM"

1

Salem, Fathi M. Recurrent Neural Networks. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89929-5.

2

Tyagi, Amit Kumar, and Ajith Abraham. Recurrent Neural Networks. CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822.

3

Hu, Xiaolin, and P. Balasubramaniam. Recurrent neural networks. InTech, 2008.

4

Hammer, Barbara. Learning with recurrent neural networks. Springer London, 2000. http://dx.doi.org/10.1007/bfb0110016.

5

Yi, Zhang, and K. K. Tan. Convergence Analysis of Recurrent Neural Networks. Springer US, 2004. http://dx.doi.org/10.1007/978-1-4757-3819-3.

6

ElHevnawi, Mahmoud, and Mohamed Mysara. Recurrent neural networks and soft computing. InTech, 2012.

7

Medsker, L. R., and L. C. Jain, eds. Recurrent neural networks: Design and applications. CRC Press, 2000.

8

Tan, K. K., ed. Convergence analysis of recurrent neural networks. Kluwer Academic Publishers, 2004.

9

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24797-2.

10

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Springer Berlin Heidelberg, 2012.

More sources

Book chapters on the topic "Recurrent neural networks BLSTM"

1

Du, Ke-Lin, and M. N. S. Swamy. "Recurrent Neural Networks." In Neural Networks and Statistical Learning. Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-5571-3_11.

2

Du, Ke-Lin, and M. N. S. Swamy. "Recurrent Neural Networks." In Neural Networks and Statistical Learning. Springer London, 2019. http://dx.doi.org/10.1007/978-1-4471-7452-3_12.

3

Calin, Ovidiu. "Recurrent Neural Networks." In Deep Learning Architectures. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_17.

4

Salvaris, Mathew, Danielle Dean, and Wee Hyong Tok. "Recurrent Neural Networks." In Deep Learning with Azure. Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3679-6_7.

5

Siegelmann, Hava T. "Recurrent neural networks." In Computer Science Today. Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0015235.

6

Marhon, Sajid A., Christopher J. F. Cameron, and Stefan C. Kremer. "Recurrent Neural Networks." In Intelligent Systems Reference Library. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36657-4_2.

7

Kamath, Uday, John Liu, and James Whitaker. "Recurrent Neural Networks." In Deep Learning for NLP and Speech Recognition. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14596-5_7.

8

Skansi, Sandro. "Recurrent Neural Networks." In Undergraduate Topics in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73004-2_7.

9

Ketkar, Nikhil. "Recurrent Neural Networks." In Deep Learning with Python. Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2766-4_6.

10

Aggarwal, Charu C. "Recurrent Neural Networks." In Neural Networks and Deep Learning. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0_7.


Conference papers on the topic "Recurrent neural networks BLSTM"

1

Brueckner, Raymond, and Björn Schuller. "Social signal classification using deep BLSTM recurrent neural networks." In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6854518.

2

Zheng, Changyan, Xiongwei Zhang, Meng Sun, Jibin Yang, and Yibo Xing. "A Novel Throat Microphone Speech Enhancement Framework Based on Deep BLSTM Recurrent Neural Networks." In 2018 IEEE 4th International Conference on Computer and Communications (ICCC). IEEE, 2018. http://dx.doi.org/10.1109/compcomm.2018.8780872.

3

Liu, Bin, Jianhua Tao, Dawei Zhang, and Yibin Zheng. "A novel pitch extraction based on jointly trained deep BLSTM Recurrent Neural Networks with bottleneck features." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952173.

4

Chen, Kai, Zhi-Jie Yan, and Qiang Huo. "A context-sensitive-chunk BPTT approach to training deep LSTM/BLSTM recurrent neural networks for offline handwriting recognition." In 2015 13th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2015. http://dx.doi.org/10.1109/icdar.2015.7333794.

5

Liu, Bin, and Jianhua Tao. "A Novel Research to Artificial Bandwidth Extension Based on Deep BLSTM Recurrent Neural Networks and Exemplar-Based Sparse Representation." In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-772.

6

Ni, Zhaoheng, Rutuja Ubale, Yao Qian, et al. "Unusable Spoken Response Detection with BLSTM Neural Networks." In 2018 11th International Symposium on Chinese Spoken Language Processing (ISCSLP). IEEE, 2018. http://dx.doi.org/10.1109/iscslp.2018.8706635.

7

Singh, Harpreet, Na Helian, Roderick Adams, and Yi Sun. "Sentiment Analysis using BLSTM-ResNet on Textual Images." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892883.

8

Ding, Chuang, Pengcheng Zhu, and Lei Xie. "BLSTM neural networks for speech driven head motion synthesis." In Interspeech 2015. ISCA, 2015. http://dx.doi.org/10.21437/interspeech.2015-137.

9

Kuo, Che-Yu, and Jen-Tzung Chien. "Markov Recurrent Neural Networks." In 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2018. http://dx.doi.org/10.1109/mlsp.2018.8517074.

10

Diao, Enmao, Jie Ding, and Vahid Tarokh. "Restricted Recurrent Neural Networks." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006257.


Organizational reports on the topic "Recurrent neural networks BLSTM"

1

Pearlmutter, Barak A. Learning State Space Trajectories in Recurrent Neural Networks: A preliminary Report. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada219114.

2

Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1366924.

3

Mathia, Karl. Solutions of linear equations and a class of nonlinear equations using recurrent neural networks. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.1354.

4

Lin, Linyu, Joomyung Lee, Bikash Poudel, Timothy McJunkin, Nam Dinh, and Vivek Agarwal. Enhancing the Operational Resilience of Advanced Reactors with Digital Twins by Recurrent Neural Networks. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1835892.

5

Pasupuleti, Murali Krishna. Neural Computation and Learning Theory: Expressivity, Dynamics, and Biologically Inspired AI. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv425.

Abstract:
Neural computation and learning theory provide the foundational principles for understanding how artificial and biological neural networks encode, process, and learn from data. This research explores expressivity, computational dynamics, and biologically inspired AI, focusing on theoretical expressivity limits, infinite-width neural networks, recurrent and spiking neural networks, attractor models, and synaptic plasticity. The study investigates mathematical models of function approximation, kernel methods, dynamical systems, and stability properties to assess the generalization capa
6

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one in Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detecti
7

Yu, Nanpeng, Koji Yamashita, Brandon Foggo, et al. Final Project Report: Discovery of Signatures, Anomalies, and Precursors in Synchrophasor Data with Matrix Profile and Deep Recurrent Neural Networks. Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/1874793.
