A collection of scholarly literature on the topic "Recurrent neural networks BLSTM"
Consult the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Recurrent neural networks BLSTM".
Journal articles on the topic "Recurrent neural networks BLSTM":
Guo, Yanbu, Bingyi Wang, Weihua Li, and Bei Yang. "Protein secondary structure prediction improved by recurrent neural networks integrated with two-dimensional convolutional neural networks." Journal of Bioinformatics and Computational Biology 16, no. 05 (October 2018): 1850021. http://dx.doi.org/10.1142/s021972001850021x.
Zhong, Cheng, Zhonglian Jiang, Xiumin Chu, and Lei Liu. "Inland Ship Trajectory Restoration by Recurrent Neural Network." Journal of Navigation 72, no. 06 (May 17, 2019): 1359–77. http://dx.doi.org/10.1017/s0373463319000316.
KADARI, REKIA, YU ZHANG, WEINAN ZHANG, and TING LIU. "CCG supertagging with bidirectional long short-term memory networks." Natural Language Engineering 24, no. 1 (September 4, 2017): 77–90. http://dx.doi.org/10.1017/s1351324917000250.
Shchetinin, E. Yu. "EMOTIONS RECOGNITION IN HUMAN SPEECH USING DEEP NEURAL NETWORKS." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 199 (January 2021): 44–51. http://dx.doi.org/10.14489/vkit.2021.01.pp.044-051.
Dutta, Aparajita, Kusum Kumari Singh, and Ashish Anand. "SpliceViNCI: Visualizing the splicing of non-canonical introns through recurrent neural networks." Journal of Bioinformatics and Computational Biology 19, no. 04 (June 4, 2021): 2150014. http://dx.doi.org/10.1142/s0219720021500141.
Zhang, Ansi, Honglei Wang, Shaobo Li, Yuxin Cui, Zhonghao Liu, Guanci Yang, and Jianjun Hu. "Transfer Learning with Deep Recurrent Neural Networks for Remaining Useful Life Estimation." Applied Sciences 8, no. 12 (November 28, 2018): 2416. http://dx.doi.org/10.3390/app8122416.
Li, Yue, Xutao Wang, and Pengjian Xu. "Chinese Text Classification Model Based on Deep Learning." Future Internet 10, no. 11 (November 20, 2018): 113. http://dx.doi.org/10.3390/fi10110113.
Xuan, Wenjing, Ning Liu, Neng Huang, Yaohang Li, and Jianxin Wang. "CLPred: a sequence-based protein crystallization predictor using BLSTM neural network." Bioinformatics 36, Supplement_2 (December 2020): i709–i717. http://dx.doi.org/10.1093/bioinformatics/btaa791.
Brocki, Łukasz, and Krzysztof Marasek. "Deep Belief Neural Networks and Bidirectional Long-Short Term Memory Hybrid for Speech Recognition." Archives of Acoustics 40, no. 2 (June 1, 2015): 191–95. http://dx.doi.org/10.1515/aoa-2015-0021.
Varshney, Abhishek, Samit Kumar Ghosh, Sibasankar Padhy, Rajesh Kumar Tripathy, and U. Rajendra Acharya. "Automated Classification of Mental Arithmetic Tasks Using Recurrent Neural Network and Entropy Features Obtained from Multi-Channel EEG Signals." Electronics 10, no. 9 (May 2, 2021): 1079. http://dx.doi.org/10.3390/electronics10091079.
Dissertations on the topic "Recurrent neural networks BLSTM":
Etienne, Caroline. "Apprentissage profond appliqué à la reconnaissance des émotions dans la voix." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS517.
This thesis deals with the application of artificial intelligence to the automatic classification of audio sequences according to the emotional state of the customer during a commercial phone call. The goal is to improve on existing data preprocessing and machine learning models, and to suggest a model that is as efficient as possible on the reference IEMOCAP audio dataset. We draw from previous work on deep neural networks for automatic speech recognition, and extend it to the speech emotion recognition task. We are therefore interested in End-to-End neural architectures to perform the classification task including an autonomous extraction of acoustic features from the audio signal. Traditionally, the audio signal is preprocessed using paralinguistic features, as part of an expert approach. We choose a naive approach for data preprocessing that does not rely on specialized paralinguistic knowledge, and compare it with the expert approach. In this approach, the raw audio signal is transformed into a time-frequency spectrogram by using a short-term Fourier transform. In order to apply a neural network to a prediction task, a number of aspects need to be considered. On the one hand, the best possible hyperparameters must be identified. On the other hand, biases present in the database should be minimized (non-discrimination), for example by adding data and taking into account the characteristics of the chosen dataset. We study these aspects in order to develop an End-to-End neural architecture that combines convolutional layers specialized in the modeling of visual information with recurrent layers specialized in the modeling of temporal information. We propose a deep supervised learning model, competitive with the current state-of-the-art when trained on the IEMOCAP dataset, justifying its use for the rest of the experiments. 
This classification model consists of a four-layer convolutional neural network followed by a bidirectional long short-term memory recurrent neural network (BLSTM). Our model is evaluated on two English audio databases proposed by the scientific community: IEMOCAP and MSP-IMPROV. A first contribution is to show that, with a deep neural network, we obtain high performance on IEMOCAP and promising results on MSP-IMPROV. Another contribution of this thesis is a comparative study of the output values of the layers of the convolutional module and the recurrent module according to the data preprocessing method used: spectrograms (naive approach) or paralinguistic indices (expert approach). We analyze the data according to their emotion class using the Euclidean distance, a deterministic proximity measure. We try to understand the characteristics of the emotional information extracted autonomously by the network. The idea is to contribute to research focused on understanding the deep neural networks used in speech emotion recognition and to bring more transparency and explainability to these systems, whose decision-making mechanism is still largely misunderstood.
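The BLSTM component of the architecture described in this abstract can be illustrated with a minimal sketch: one LSTM pass runs forward over the sequence, a second runs backward, and the per-step hidden states are concatenated. This is a generic NumPy illustration with toy dimensions, not the thesis's actual model (which adds four convolutional layers and is trained on IEMOCAP).

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input, forget, output gates and candidate cell,
    all computed from the stacked pre-activations z."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g              # cell state update
    h = o * np.tanh(c)             # hidden state
    return h, c

def blstm(xs, params_fwd, params_bwd, n_hidden):
    """Run an LSTM over xs forward and backward; concatenate the
    time-aligned hidden states of both directions at every step."""
    def run(seq, params):
        W, U, b = params
        h, c, out = np.zeros(n_hidden), np.zeros(n_hidden), []
        for x in seq:
            h, c = lstm_step(x, h, c, W, U, b)
            out.append(h)
        return out
    fwd = run(xs, params_fwd)
    bwd = run(xs[::-1], params_bwd)[::-1]   # backward pass, re-aligned in time
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
n_in, n_hid, T = 8, 4, 5                    # toy sizes, not the thesis's
make = lambda: (rng.normal(0, 0.1, (4 * n_hid, n_in)),
                rng.normal(0, 0.1, (4 * n_hid, n_hid)),
                np.zeros(4 * n_hid))
xs = [rng.normal(size=n_in) for _ in range(T)]
ys = blstm(xs, make(), make(), n_hid)       # one 2*n_hid vector per time step
```

A classifier head (in the thesis, applied after the convolutional front end) would consume these concatenated per-step vectors.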
Morillot, Olivier. "Reconnaissance de textes manuscrits par modèles de Markov cachés et réseaux de neurones récurrents : application à l'écriture latine et arabe." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0002.
Handwriting recognition is an essential component of document analysis. One popular trend is to move from isolated-word recognition to word-sequence recognition. Our work aims to propose a text-line recognition system without explicit word segmentation. In order to build an efficient model, we intervene at different levels of the recognition system. First of all, we introduce two new preprocessing techniques: a cleaning step and a local baseline correction for text lines. Then, a language model is built and optimized for handwritten mail. Afterwards, we propose two state-of-the-art recognition systems based on contextual HMMs (Hidden Markov Models) and BLSTM (Bi-directional Long Short-Term Memory) recurrent neural networks. We optimize our systems in order to compare the two approaches. Our systems are evaluated on Arabic and Latin cursive handwriting and have been submitted to two international handwriting recognition competitions. Finally, we introduce a strategy for recognizing some out-of-vocabulary character strings, as a prospect for future work.
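Segmentation-free BLSTM text-line recognizers of the kind compared above are commonly trained with the CTC objective (an assumption here; the abstract does not name the training criterion). Under CTC, turning the network's per-frame best labels into an output string reduces to a simple rule, sketched below: merge consecutive repeats, then drop blanks.

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a best-path per-frame labelling: merge consecutive
    repeated labels, then remove blank symbols (the standard CTC
    greedy decoding rule)."""
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# Frames: blank, 'a','a', blank, 'b','b','b', blank, 'a'  (1='a', 2='b')
decoded = ctc_greedy_decode([0, 1, 1, 0, 2, 2, 2, 0, 1])
```

The blank symbol is what lets the network emit the same character twice in a row (e.g. "ll") while still collapsing stuttered frames.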
Żbikowski, Rafal Waclaw. "Recurrent neural networks: some control aspects." 1994. http://hdl.handle.net/1905/180.
Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering." Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.
Zbikowski, Rafal Waclaw. "Recurrent neural networks : some control aspects." Thesis, University of Glasgow, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390233.
Jacobsson, Henrik. "Rule extraction from recurrent neural networks." Thesis, University of Sheffield, 2006. http://etheses.whiterose.ac.uk/6081/.
Bonato, Tommaso. "Time Series Predictions With Recurrent Neural Networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.
Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.
Deep learning algorithms have achieved remarkable success in applications such as automatic speech recognition and machine translation. As a result, these applications are ubiquitous in our lives and found in a large number of devices. These algorithms are built from Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which involve a large number of parameters and computations. Deploying DNNs on mobile devices and servers is therefore challenging due to their memory and energy requirements. RNNs are used to solve sequence-to-sequence problems such as machine translation. They contain data dependencies between the executions of consecutive time steps, so the available parallelism is limited, which makes energy-efficient RNN evaluation a challenge. This thesis studies RNNs in order to improve their energy efficiency on specialized architectures. To this end, we propose energy-saving techniques and highly efficient architectures tailored to RNN evaluation. First, we characterize a set of RNNs running on a SoC and identify that accessing memory to read the weights is the largest source of energy consumption, accounting for up to 80%. We therefore create E-PUR, a processing unit for RNNs. E-PUR achieves a 6.8x speedup and improves energy consumption by 88x compared with the SoC. These improvements come from maximizing the temporal locality of the weights. In E-PUR, reading the weights remains the largest energy cost, so we focus on reducing memory accesses and create a scheme that reuses previously computed results.
The observation is that, when evaluating the input sequences of an RNN, the output of a given neuron tends to change only slightly between consecutive evaluations, so we devise a scheme that caches neuron outputs and reuses them whenever it detects a small change between the current output value and the previous one, which avoids reading the weights. To decide when to reuse a previous computation we employ a Binary Neural Network (BNN) as a reuse predictor, since its output is highly correlated with that of the RNN. This proposal avoids more than 24.2% of the computations and reduces average energy consumption by 18.5%. The memory footprint of RNN models is usually reduced by using low precision for evaluating and storing the weights. In that case, the minimum precision used is identified statically and set so that the RNN maintains its accuracy; typically, the same precision is used for all computations. However, we observe that some computations can be evaluated at a lower precision without affecting accuracy, so we devise a technique that dynamically selects the precision used to compute each time step. A challenge of this proposal is how to choose a lower precision. We address it by recognizing that the result of a previous evaluation can be used to determine the precision required in the current time step. Our scheme evaluates 57% of the computations at a lower precision than the fixed precision employed by static methods. Finally, evaluation on E-PUR shows a 1.46x speedup with an average energy saving of 19.2%.
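The computation-reuse idea from this abstract can be sketched in a few lines: cache each neuron's output and recompute it only when its input has drifted enough, skipping the weight-row read otherwise. Note that this toy version uses a simple input-delta threshold as a stand-in for the thesis's BNN reuse predictor, and the layer size and threshold `tau` are arbitrary illustration values.

```python
import numpy as np

def cached_neuron_layer(x, W, cache, tau=0.05):
    """Evaluate a tanh layer, reusing a neuron's cached output when the
    input vector has moved less than tau (max-norm) since the input that
    produced the cached value; reuse skips the weight-row access."""
    y = np.empty(W.shape[0])
    computed = 0                              # count of full evaluations
    for j in range(W.shape[0]):
        prev_x, prev_y = cache.get(j, (None, None))
        if prev_x is not None and np.max(np.abs(x - prev_x)) < tau:
            y[j] = prev_y                     # reuse: no weight read
        else:
            y[j] = np.tanh(W[j] @ x)          # full dot product with weights
            cache[j] = (x.copy(), y[j])
            computed += 1
    return y, computed

rng = np.random.default_rng(1)
W = rng.normal(size=(6, 4))
cache = {}
x0 = rng.normal(size=4)
y0, n0 = cached_neuron_layer(x0, W, cache)        # cold cache: all computed
y1, n1 = cached_neuron_layer(x0 + 0.01, W, cache)  # tiny drift: all reused
```

In the actual design the reuse decision must be made *before* computing the output, which is why the thesis trains a cheap binary network as the predictor rather than thresholding directly.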
Brax, Christoffer. "Recurrent neural networks for time-series prediction." Thesis, University of Skövde, Department of Computer Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-480.
Recurrent neural networks have been used for time-series prediction with good results. In this dissertation, recurrent neural networks are compared with time-delayed feed-forward networks, feed-forward networks, and linear regression models on a prediction task. The data used in all experiments is real-world sales data containing two kinds of segments: campaign segments and non-campaign segments. The task is to predict sales during campaigns. We also evaluate whether more accurate predictions can be made using only the campaign segments of the data.
Throughout the project, a knowledge discovery process identified in the literature has been used to structure the work. The results show that the recurrent network is not better than the other evaluated algorithms; in fact, the time-delayed feed-forward network gave the best predictions. The results also show that more accurate predictions could be made using only information from the campaign segments.
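The time-delayed feed-forward setup that performed best in this study can be illustrated with a toy sketch: the series is unrolled into fixed-lag windows, and a one-step-ahead predictor is fitted on (window, next value) pairs. A linear least-squares fit and a synthetic sine series stand in here for the dissertation's nonlinear networks and sales data.

```python
import numpy as np

def make_windows(series, lag):
    """Unroll a 1-D series into (lagged-window, next-value) pairs,
    the input/target format a time-delayed feed-forward model uses."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

# Toy sine series stands in for the real-world sales data.
t = np.arange(200)
series = np.sin(0.1 * t)

X, y = make_windows(series, lag=5)
A = np.c_[X, np.ones(len(X))]                 # append a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)     # linear one-step-ahead fit
mse = np.mean((A @ w - y) ** 2)               # near zero: a sine obeys a linear recurrence
```

A recurrent network replaces the fixed window with internal state; the dissertation's finding is that, on its data, the fixed-window model predicted better.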
Rabi, Gihad. "Visual speech recognition by recurrent neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0010/MQ36169.pdf.
Books on the topic "Recurrent neural networks BLSTM":
Hu, Xiaolin, and P. Balasubramaniam. Recurrent neural networks. Rijeka, Croatia: InTech, 2008.
Salem, Fathi M. Recurrent Neural Networks. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89929-5.
Hammer, Barbara. Learning with recurrent neural networks. London: Springer London, 2000. http://dx.doi.org/10.1007/bfb0110016.
ElHefnawi, Mahmoud, and Mohamed Mysara. Recurrent neural networks and soft computing. Rijeka: InTech, 2012.
Yi, Zhang. Convergence analysis of recurrent neural networks. Boston: Kluwer Academic Publishers, 2004.
Yi, Zhang, and K. K. Tan. Convergence Analysis of Recurrent Neural Networks. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4757-3819-3.
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24797-2.
Michel, Anthony N. Qualitative analysis and synthesis of recurrent neural networks. New York: Marcel Dekker, Inc., 2002.
Chen, Wen. Recurrent neural networks applied to robotic motion control. Ottawa: National Library of Canada, 2002.
Book chapters on the topic "Recurrent neural networks BLSTM":
Du, Ke-Lin, and M. N. S. Swamy. "Recurrent Neural Networks." In Neural Networks and Statistical Learning, 351–71. London: Springer London, 2019. http://dx.doi.org/10.1007/978-1-4471-7452-3_12.
Yalçın, Orhan Gazi. "Recurrent Neural Networks." In Applied Neural Networks with TensorFlow 2, 161–85. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6513-0_8.
Calin, Ovidiu. "Recurrent Neural Networks." In Deep Learning Architectures, 543–59. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_17.
Caterini, Anthony L., and Dong Eui Chang. "Recurrent Neural Networks." In Deep Neural Networks in a Mathematical Framework, 59–79. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1_5.
Kamath, Uday, John Liu, and James Whitaker. "Recurrent Neural Networks." In Deep Learning for NLP and Speech Recognition, 315–68. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14596-5_7.
Marhon, Sajid A., Christopher J. F. Cameron, and Stefan C. Kremer. "Recurrent Neural Networks." In Intelligent Systems Reference Library, 29–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36657-4_2.
Aggarwal, Charu C. "Recurrent Neural Networks." In Neural Networks and Deep Learning, 271–313. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0_7.
Skansi, Sandro. "Recurrent Neural Networks." In Undergraduate Topics in Computer Science, 135–52. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73004-2_7.
Ketkar, Nikhil. "Recurrent Neural Networks." In Deep Learning with Python, 79–96. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2766-4_6.
Salvaris, Mathew, Danielle Dean, and Wee Hyong Tok. "Recurrent Neural Networks." In Deep Learning with Azure, 161–86. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3679-6_7.
Conference papers on the topic "Recurrent neural networks BLSTM":
Brueckner, Raymond, and Björn Schuller. "Social signal classification using deep BLSTM recurrent neural networks." In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6854518.
Zheng, Changyan, Xiongwei Zhang, Meng Sun, Jibin Yang, and Yibo Xing. "A Novel Throat Microphone Speech Enhancement Framework Based on Deep BLSTM Recurrent Neural Networks." In 2018 IEEE 4th International Conference on Computer and Communications (ICCC). IEEE, 2018. http://dx.doi.org/10.1109/compcomm.2018.8780872.
Liu, Bin, Jianhua Tao, Dawei Zhang, and Yibin Zheng. "A novel pitch extraction based on jointly trained deep BLSTM Recurrent Neural Networks with bottleneck features." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952173.
Chen, Kai, Zhi-Jie Yan, and Qiang Huo. "A context-sensitive-chunk BPTT approach to training deep LSTM/BLSTM recurrent neural networks for offline handwriting recognition." In 2015 13th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2015. http://dx.doi.org/10.1109/icdar.2015.7333794.
Liu, Bin, and Jianhua Tao. "A Novel Research to Artificial Bandwidth Extension Based on Deep BLSTM Recurrent Neural Networks and Exemplar-Based Sparse Representation." In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-772.
Ni, Zhaoheng, Rutuja Ubale, Yao Qian, Michael Mandel, Su-Youn Yoon, Abhinav Misra, and David Suendermann-Oeft. "Unusable Spoken Response Detection with BLSTM Neural Networks." In 2018 11th International Symposium on Chinese Spoken Language Processing (ISCSLP). IEEE, 2018. http://dx.doi.org/10.1109/iscslp.2018.8706635.
Ding, Chuang, Pengcheng Zhu, and Lei Xie. "BLSTM neural networks for speech driven head motion synthesis." In Interspeech 2015. ISCA, 2015. http://dx.doi.org/10.21437/interspeech.2015-137.
Diao, Enmao, Jie Ding, and Vahid Tarokh. "Restricted Recurrent Neural Networks." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006257.
Kuo, Che-Yu, and Jen-Tzung Chien. "Markov Recurrent Neural Networks." In 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2018. http://dx.doi.org/10.1109/mlsp.2018.8517074.
Frinken, Volkmar, and Seiichi Uchida. "Deep BLSTM neural networks for unconstrained continuous handwritten text recognition." In 2015 13th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2015. http://dx.doi.org/10.1109/icdar.2015.7333894.
Organizational reports on the topic "Recurrent neural networks BLSTM":
Pearlmutter, Barak A. Learning State Space Trajectories in Recurrent Neural Networks: A preliminary Report. Fort Belvoir, VA: Defense Technical Information Center, July 1988. http://dx.doi.org/10.21236/ada219114.
Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.
Mathia, Karl. Solutions of linear equations and a class of nonlinear equations using recurrent neural networks. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1354.
Lin, Linyu, Joomyung Lee, Bikash Poudel, Timothy McJunkin, Nam Dinh, and Vivek Agarwal. Enhancing the Operational Resilience of Advanced Reactors with Digital Twins by Recurrent Neural Networks. Office of Scientific and Technical Information (OSTI), October 2021. http://dx.doi.org/10.2172/1835892.
Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.