Journal articles on the topic "Recurrent neural networks BLSTM"

To see other types of publications on this topic, follow the link: Recurrent neural networks BLSTM.

Consult the top 50 journal articles for your research on the topic "Recurrent neural networks BLSTM."

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its online abstract whenever available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Guo, Yanbu, Bingyi Wang, Weihua Li, and Bei Yang. "Protein secondary structure prediction improved by recurrent neural networks integrated with two-dimensional convolutional neural networks." Journal of Bioinformatics and Computational Biology 16, no. 05 (October 2018): 1850021. http://dx.doi.org/10.1142/s021972001850021x.

Abstract:
Protein secondary structure prediction (PSSP) is an important research field in bioinformatics. The representation of protein sequence features can be treated as a matrix, which includes the amino-acid residue (time-step) dimension and the feature vector dimension. Common approaches to predicting secondary structures focus only on the amino-acid residue dimension. However, the feature vector dimension may also contain useful information for PSSP. To integrate the information on both dimensions of the matrix, we propose a hybrid deep learning framework, the two-dimensional convolutional bidirectional recurrent neural network (2C-BRNN), for improving the accuracy of 8-class secondary structure prediction. The proposed hybrid framework first extracts the discriminative local interactions between amino-acid residues by two-dimensional convolutional neural networks (2DCNNs), and then captures long-range interactions between amino-acid residues by bidirectional gated recurrent units (BGRUs) or bidirectional long short-term memory (BLSTM). Specifically, our proposed 2C-BRNNs framework consists of four models: 2DConv-BGRUs, 2DCNN-BGRUs, 2DConv-BLSTM and 2DCNN-BLSTM. Among these four models, the 2DConv- models contain only two-dimensional (2D) convolution operations, while the 2DCNN- models contain both 2D convolutional and pooling operations. Experiments are conducted on four public datasets. The experimental results show that our proposed 2DConv-BLSTM model performs significantly better than the benchmark models. Furthermore, the experiments also demonstrate that the proposed models can extract more meaningful features from the matrix of proteins, and that the feature vector dimension is also useful for PSSP. The codes and datasets of our proposed methods are available at https://github.com/guoyanb/JBCB2018/.
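The local feature-extraction step that precedes the recurrent layers in frameworks like 2C-BRNN can be illustrated with a minimal sketch. This is not the authors' code: it is a toy "valid" 2D convolution in pure Python over a hypothetical residue-by-feature matrix, showing how a kernel slides over both the residue and feature-vector dimensions at once.

```python
# Toy sketch (not the authors' implementation): a "valid" 2D convolution
# over a protein feature matrix of shape (residues x features), i.e. the
# local feature-extraction step before the recurrent layer.
def conv2d_valid(matrix, kernel):
    """Slide `kernel` over `matrix` with stride 1 and no padding."""
    mh, mw = len(matrix), len(matrix[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(mh - kh + 1):
        row = []
        for j in range(mw - kw + 1):
            s = sum(matrix[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4-residue sequence with 3 features each, and a 2x2 averaging kernel
# (both made up for illustration).
feats = [[1, 0, 2],
         [0, 1, 0],
         [2, 0, 1],
         [0, 2, 0]]
kernel = [[0.25, 0.25],
          [0.25, 0.25]]
fmap = conv2d_valid(feats, kernel)  # feature map of shape (3, 2)
```

A real model would apply many such kernels and feed the resulting feature maps, still ordered along the residue axis, into the BGRU/BLSTM layer.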
2

Zhong, Cheng, Zhonglian Jiang, Xiumin Chu, and Lei Liu. "Inland Ship Trajectory Restoration by Recurrent Neural Network." Journal of Navigation 72, no. 06 (May 17, 2019): 1359–77. http://dx.doi.org/10.1017/s0373463319000316.

Abstract:
The quality of Automatic Identification System (AIS) data is of fundamental importance for maritime situational awareness and navigation risk assessment. To improve operational efficiency, a deep learning method based on Bi-directional Long Short-Term Memory Recurrent Neural Networks (BLSTM-RNNs) is proposed and applied to AIS trajectory data restoration. Case studies have been conducted in two distinct reaches of the Yangtze River and the capability of the proposed method has been evaluated. Comparisons have been made between the BLSTM-RNNs-based method, the linear method, and classic Artificial Neural Networks. Satisfactory results have been obtained by all methods in straight waterways, while the BLSTM-RNNs-based method is superior in meandering waterways. Owing to the bi-directional nature of the proposed method, trajectory restoration remains favourable in cases of complicated geometry and multiple missing points. The residual error of the proposed model, computed as the Euclidean distance, decreases to the order of 10 m. The present study could provide an alternative method for improving AIS data quality, thus ensuring its completeness and reliability.
3

Kadari, Rekia, Yu Zhang, Weinan Zhang, and Ting Liu. "CCG supertagging with bidirectional long short-term memory networks." Natural Language Engineering 24, no. 1 (September 4, 2017): 77–90. http://dx.doi.org/10.1017/s1351324917000250.

Abstract:
Neural-network-based approaches have recently produced good performance in natural language tasks such as supertagging. In the supertagging task, a supertag (lexical category) is assigned to each word in an input sequence. Combinatory Categorial Grammar supertagging is a more challenging problem than sequence-tagging problems such as part-of-speech (POS) tagging and named entity recognition, due to the large number of lexical categories. Simple recurrent neural networks (RNNs) have been shown to significantly outperform the previous state-of-the-art feed-forward neural networks; on the other hand, it is well known that recurrent networks fail to learn long dependencies. In this paper, we introduce a new neural network architecture based on backward and Bidirectional Long Short-Term Memory (BLSTM) networks that has the ability to memorize information over long dependencies and benefit from both past and future information. State-of-the-art methods focus on previous information, whereas BLSTM has access to information in both the previous and future directions. Our main findings are that bidirectional networks outperform unidirectional ones, and that Long Short-Term Memory (LSTM) networks are more precise and successful than both unidirectional and bidirectional standard RNNs. Experimental results reveal the effectiveness of our proposed method on both in-domain and out-of-domain datasets, with improvements of about 1.2 per cent over standard RNNs.
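The core idea of a bidirectional tagger, which lets each word's representation see context on both sides, can be sketched with a toy recurrent cell in pure Python. The weights and the tanh cell below are hypothetical stand-ins for a trained LSTM, chosen only to make the two passes concrete.

```python
import math

# Toy sketch (hypothetical weights, not the paper's model): a bidirectional
# recurrent pass. Each position's representation concatenates a forward
# state (summarising the past) and a backward state (summarising the
# future), which is what lets a BLSTM tagger use context on both sides.
def step(x, h, w_x=0.5, w_h=0.8):
    """A minimal recurrent cell standing in for an LSTM cell."""
    return math.tanh(w_x * x + w_h * h)

def bidirectional_states(xs):
    fwd, h = [], 0.0
    for x in xs:                      # left-to-right pass
        h = step(x, h)
        fwd.append(h)
    bwd, h = [], 0.0
    for x in reversed(xs):            # right-to-left pass
        h = step(x, h)
        bwd.append(h)
    bwd.reverse()                     # realign with the input order
    return list(zip(fwd, bwd))        # one (fwd, bwd) pair per token

# Three "word embeddings" (scalars here for simplicity).
states = bidirectional_states([1.0, -0.5, 2.0])
```

In a real supertagger each (forward, backward) pair would be concatenated and fed to a softmax over the lexical categories.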
4

Shchetinin, E. Yu. "EMOTIONS RECOGNITION IN HUMAN SPEECH USING DEEP NEURAL NETWORKS." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 199 (January 2021): 44–51. http://dx.doi.org/10.14489/vkit.2021.01.pp.044-051.

Abstract:
The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and recognition of emotions in speech (RER) is the most demanded part of them. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computer studies are carried out on the RAVDESS database, which contains emotional human speech. RAVDESS is a dataset of 7356 files whose entries cover the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions divided into male and female) for a total of 1440 samples (speech only). To train machine learning algorithms and deep neural networks to recognize emotions, the existing audio recordings must be pre-processed to extract the main characteristic features of particular emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the audio recordings. Computer studies of various neural network models for emotion recognition are carried out on these data, and machine learning algorithms are used for comparative analysis. The following models were trained during the experiments: logistic regression (LR), a support vector machine classifier (SVM), decision tree (DT), random forest (RF), gradient boosting over trees (XGBoost), a convolutional neural network (CNN), a recurrent neural network RNN (ResNet18), and an ensemble of convolutional and recurrent networks, Stacked CNN-RNN. The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms used. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
5

Dutta, Aparajita, Kusum Kumari Singh, and Ashish Anand. "SpliceViNCI: Visualizing the splicing of non-canonical introns through recurrent neural networks." Journal of Bioinformatics and Computational Biology 19, no. 04 (June 4, 2021): 2150014. http://dx.doi.org/10.1142/s0219720021500141.

Abstract:
Most current computational models for splice junction prediction are based on the identification of canonical splice junctions. However, it is observed that junctions lacking the consensus dimers GT and AG also undergo splicing. Identification of such splice junctions, called non-canonical splice junctions, is also essential for a comprehensive understanding of the splicing phenomenon. This work focuses on the identification of non-canonical splice junctions through the application of a bidirectional long short-term memory (BLSTM) network. Furthermore, we apply a back-propagation-based (integrated gradients) and a perturbation-based (occlusion) visualization technique to extract the non-canonical splicing features learned by the model. The features obtained are validated against existing knowledge from the literature. Integrated gradients extracts features that comprise contiguous nucleotides, whereas occlusion extracts features that are individual nucleotides distributed across the sequence.
6

Zhang, Ansi, Honglei Wang, Shaobo Li, Yuxin Cui, Zhonghao Liu, Guanci Yang, and Jianjun Hu. "Transfer Learning with Deep Recurrent Neural Networks for Remaining Useful Life Estimation." Applied Sciences 8, no. 12 (November 28, 2018): 2416. http://dx.doi.org/10.3390/app8122416.

Abstract:
Prognostics, such as remaining useful life (RUL) prediction, is a crucial task in condition-based maintenance. A major challenge in data-driven prognostics is the difficulty of obtaining a sufficient number of samples of failure progression, yet for traditional machine learning methods and deep neural networks, enough training data is a prerequisite for good prediction models. In this work, we propose a transfer learning algorithm based on Bi-directional Long Short-Term Memory (BLSTM) recurrent neural networks for RUL estimation, in which the models can first be trained on different but related datasets and then fine-tuned on the target dataset. Extensive experimental results show that transfer learning can in general improve the prediction models on datasets with a small number of samples. The one exception is that transferring from multi-type operating conditions to single operating conditions led to a worse result.
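The pretrain-then-fine-tune recipe the abstract describes can be reduced to a tiny runnable sketch. This is illustrative only, not the paper's BLSTM: a one-parameter linear model trained by gradient descent on a hypothetical "source" dataset and then fine-tuned, starting from the learned weight, on a small related "target" dataset.

```python
# Toy sketch (illustrative only, not the paper's BLSTM): pretraining on a
# related dataset, then fine-tuning the learned parameters on the target.
def train(data, w=0.0, lr=0.05, steps=200):
    """Fit y = w * x by stochastic gradient descent on squared error."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # gradient of (w*x - y)^2
    return w

source = [(1.0, 2.0), (2.0, 4.0)]        # related task: y = 2.0 * x
target = [(1.0, 2.2), (2.0, 4.4)]        # target task:  y = 2.2 * x

w_src = train(source)                    # pretrain on the related dataset
w_ft = train(target, w=w_src, steps=20)  # short fine-tune on the target
```

Because the fine-tune starts near the target optimum, a handful of steps suffices, which is the point of transfer learning when target samples are scarce.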
7

Li, Yue, Xutao Wang, and Pengjian Xu. "Chinese Text Classification Model Based on Deep Learning." Future Internet 10, no. 11 (November 20, 2018): 113. http://dx.doi.org/10.3390/fi10110113.

Abstract:
Text classification is of importance in natural language processing, as massive text information containing huge amounts of value needs to be classified into different categories for further use. In order to better classify text, our paper tries to build a deep learning model which achieves better classification results on Chinese text than those of other researchers' models. After comparing different methods, long short-term memory (LSTM) and convolutional neural network (CNN) methods were selected as the deep learning methods to classify Chinese text. LSTM is a special kind of recurrent neural network (RNN), which is capable of processing serialized information through its recurrent structure. By contrast, CNN has shown its ability to extract features from visual imagery. Therefore, two layers of LSTM and one layer of CNN were integrated into our new model: the BLSTM-C model (BLSTM stands for bi-directional long short-term memory, and C stands for CNN). The BLSTM was responsible for obtaining a sequence output based on past and future contexts, which was then input to the convolutional layer for extracting features. In our experiments, the proposed BLSTM-C model was evaluated in several ways. In the results, the model exhibited remarkable performance in text classification, especially on Chinese texts.
8

Xuan, Wenjing, Ning Liu, Neng Huang, Yaohang Li, and Jianxin Wang. "CLPred: a sequence-based protein crystallization predictor using BLSTM neural network." Bioinformatics 36, Supplement_2 (December 2020): i709–i717. http://dx.doi.org/10.1093/bioinformatics/btaa791.

Abstract:
Motivation: Determining the structures of proteins is a critical step to understanding their biological functions. The crystallography-based X-ray diffraction technique is the main method for experimental protein structure determination. However, the underlying crystallization process, which needs multiple time-consuming and costly experimental steps, has a high attrition rate. To overcome this issue, a series of in silico methods have been developed with the primary aim of selecting the protein sequences that are promising to be crystallized. However, the predictive performance of the current methods is modest. Results: We propose a deep learning model, called CLPred, which uses a bidirectional recurrent neural network with long short-term memory (BLSTM) to capture the long-range interaction patterns between k-mers of amino acids to predict protein crystallizability. Using sequence information only, CLPred outperforms the existing deep-learning predictors and a vast majority of sequence-based diffraction-quality crystal predictors on three independent test sets. The results highlight the effectiveness of the BLSTM in capturing non-local, long-range inter-peptide interaction patterns to distinguish proteins that can result in diffraction-quality crystals from those that cannot. CLPred is a steady improvement over previous window-based neural networks and is able to predict crystallization propensity with high accuracy. CLPred can also be improved significantly if it incorporates additional features from pre-extracted evolutionary, structural and physicochemical characteristics. The correctness of CLPred predictions is further validated by case studies of Sox transcription factor family member proteins and Zika virus non-structural proteins. Availability and implementation: https://github.com/xuanwenjing/CLPred.
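The k-mer tokenization over which such a BLSTM operates is simple to sketch. This is not the CLPred pipeline itself, just the standard overlapping-window split of a protein sequence (the sequence below is a made-up example).

```python
# Minimal sketch (not the CLPred pipeline): splitting a protein sequence
# into overlapping k-mers, the token unit over which a BLSTM can model
# long-range inter-peptide interaction patterns.
def kmers(sequence, k=3):
    """Return all overlapping substrings of length k, in order."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

tokens = kmers("MKTAYIA", k=3)
```

Each k-mer would then be embedded and the embedded sequence fed through the forward and backward LSTM passes.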
9

Brocki, Łukasz, and Krzysztof Marasek. "Deep Belief Neural Networks and Bidirectional Long-Short Term Memory Hybrid for Speech Recognition." Archives of Acoustics 40, no. 2 (June 1, 2015): 191–95. http://dx.doi.org/10.1515/aoa-2015-0021.

Abstract:
This paper describes a Deep Belief Neural Network (DBNN) and Bidirectional Long-Short Term Memory (LSTM) hybrid used as an acoustic model for speech recognition. It has been demonstrated by many independent researchers that DBNNs exhibit superior performance to other known machine learning frameworks in terms of speech recognition accuracy; their superiority comes from the fact that these are deep learning networks. However, a trained DBNN is simply a feed-forward network with no internal memory, unlike Recurrent Neural Networks (RNNs), which are Turing complete and do possess internal memory, thus allowing them to make use of longer context. In this paper, an experiment is performed to make a hybrid of a DBNN with an advanced bidirectional RNN used to process its output. Results show that the use of the new DBNN-BLSTM hybrid as the acoustic model for Large Vocabulary Continuous Speech Recognition (LVCSR) increases word recognition accuracy. However, the new model has many parameters and in some cases may suffer performance issues in real-time applications.
10

Varshney, Abhishek, Samit Kumar Ghosh, Sibasankar Padhy, Rajesh Kumar Tripathy, and U. Rajendra Acharya. "Automated Classification of Mental Arithmetic Tasks Using Recurrent Neural Network and Entropy Features Obtained from Multi-Channel EEG Signals." Electronics 10, no. 9 (May 2, 2021): 1079. http://dx.doi.org/10.3390/electronics10091079.

Abstract:
The automated classification of cognitive workload tasks based on the analysis of multi-channel EEG signals is vital for human–computer interface (HCI) applications. In this paper, we propose a computerized approach for categorizing mental-arithmetic-based cognitive workload tasks using multi-channel electroencephalogram (EEG) signals. The approach evaluates various entropy features, such as the approximation entropy, sample entropy, permutation entropy, dispersion entropy, and slope entropy, from each channel of the EEG signal. These features were fed to various recurrent neural network (RNN) models, such as long-short term memory (LSTM), bidirectional LSTM (BLSTM), and gated recurrent unit (GRU), for the automated classification of mental-arithmetic-based cognitive workload tasks. Two cognitive workload classification strategies (bad mental arithmetic calculation (BMAC) vs. good mental arithmetic calculation (GMAC); and before mental arithmetic calculation (BFMAC) vs. during mental arithmetic calculation (DMAC)) are considered in this work. The approach was evaluated using the publicly available mental arithmetic task-based EEG database. The results reveal that our proposed approach obtained classification accuracy values of 99.81%, 99.43%, and 99.81%, using the LSTM, BLSTM, and GRU-based RNN classifiers, respectively for the BMAC vs. GMAC cognitive workload classification strategy using all entropy features and a 10-fold cross-validation (CV) technique. The slope entropy features combined with each RNN-based model obtained higher classification accuracy compared with other entropy features for the classification of the BMAC vs. GMAC task. We obtained the average classification accuracy values of 99.39%, 99.44%, and 99.63% for the classification of the BFMAC vs. DMAC tasks, using the LSTM, BLSTM, and GRU classifiers with all entropy features and a hold-out CV scheme. 
Our developed automated mental arithmetic task system is ready to be tested with more databases for real-world applications.
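One of the entropy features listed above, permutation entropy, is compact enough to sketch exactly. This is a minimal pure-Python version of the Bandt-Pompe construction (embedding dimension m, unit delay); a production EEG pipeline would add a time delay parameter and normalization.

```python
import math
from collections import Counter

# Illustrative sketch of one feature from the paper's list: permutation
# entropy. Each length-m window is mapped to the ordinal pattern of its
# values (e.g. (0, 2, 1) for [1, 9, 4]), and the Shannon entropy of the
# pattern distribution is returned, in bits.
def permutation_entropy(signal, m=3):
    patterns = Counter()
    for i in range(len(signal) - m + 1):
        window = signal[i:i + m]
        pattern = tuple(sorted(range(m), key=lambda j: window[j]))
        patterns[pattern] += 1
    total = sum(patterns.values())
    return -sum((c / total) * math.log2(c / total)
                for c in patterns.values())
```

A monotone signal produces a single ordinal pattern and hence zero entropy; richer ordinal structure raises the value, which is what makes the feature useful for characterising EEG dynamics before it is fed to an LSTM/BLSTM/GRU classifier.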
11

Mahmoud, Adnen, and Mounir Zrigui. "BLSTM-API: Bi-LSTM Recurrent Neural Network-Based Approach for Arabic Paraphrase Identification." Arabian Journal for Science and Engineering 46, no. 4 (February 24, 2021): 4163–74. http://dx.doi.org/10.1007/s13369-020-05320-w.

12

Long, Haixia, Zhao Sun, Manzhi Li, Hai Yan Fu, and Ming Cai Lin. "Predicting Protein Phosphorylation Sites Based on Deep Learning." Current Bioinformatics 15, no. 4 (June 11, 2020): 300–308. http://dx.doi.org/10.2174/1574893614666190902154332.

Abstract:
Background: Protein phosphorylation is one of the most important post-translational modifications (PTMs), occurring at the amino acid residues serine (S), threonine (T), and tyrosine (Y). It plays critical roles in protein structure and function. With the development of novel high-throughput sequencing technologies, a huge number of protein sequences are being generated and stored in databases. Objective: It is of great importance in both basic research and drug development to quickly and accurately predict which residues of S, T, or Y can be phosphorylated. Methods: To solve the problem, a novel hybrid deep learning model with a convolutional neural network and a bi-directional long short-term memory recurrent neural network (CNN+BLSTM) is proposed for predicting phosphorylation sites in proteins. The model contains a list of layers that transform the input data into an output class, in which the convolutional layer captures higher-level abstraction features of amino acids, while the recurrent layer captures long-term dependencies between amino acids to improve predictions. The joint model learns interactions between higher-level features derived from the protein sequence to predict the phosphorylated sites. Results: We applied our model alongside two canonical methods, namely iPhos-PseEn and MusiteDeep. A 5-fold cross-validation process indicated that CNN+BLSTM outperforms the two competitors in various evaluation metrics, such as the area under the receiver operating characteristic and precision-recall curves, the Matthews correlation coefficient, F-measure, and accuracy. Conclusion: CNN+BLSTM is promising in identifying potential protein phosphorylation sites for further experimental validation.
13

Gao, Shenghan, Changyan Zheng, Yicong Zhao, Ziyue Wu, Jiao Li, and Xian Huang. "Comparison of enhancement techniques based on neural networks for attenuated voice signal captured by flexible vibration sensors on throats." Nanotechnology and Precision Engineering 5, no. 1 (March 1, 2022): 013001. http://dx.doi.org/10.1063/10.0009187.

Abstract:
Wearable flexible sensors attached on the neck have been developed to measure the vibration of vocal cords during speech. However, high-frequency attenuation caused by the frequency response of the flexible sensors and absorption of high-frequency sound by the skin are obstacles to the practical application of these sensors in speech capture based on bone conduction. In this paper, speech enhancement techniques for enhancing the intelligibility of sensor signals are developed and compared. Four kinds of speech enhancement algorithms based on a fully connected neural network (FCNN), a long short-term memory (LSTM), a bidirectional long short-term memory (BLSTM), and a convolutional-recurrent neural network (CRNN) are adopted to enhance the sensor signals, and their performance after deployment on four kinds of edge and cloud platforms is also investigated. Experimental results show that the BLSTM performs best in improving speech quality, but is poorest with regard to hardware deployment. It improves short-time objective intelligibility (STOI) by 0.18 to nearly 0.80, which corresponds to a good intelligibility level, but it introduces latency as well as being a large model. The CRNN, which improves STOI to about 0.75, ranks second among the four neural networks. It is also the only model able to achieve real-time processing on all four hardware platforms, demonstrating its great potential for deployment on mobile platforms. To the best of our knowledge, this is one of the first trials to systematically and specifically develop processing techniques for bone-conduction speech signals captured by flexible sensors. The results demonstrate the possibility of realizing a wearable lightweight speech collection system based on flexible vibration sensors and real-time speech enhancement to compensate for high-frequency attenuation.
14

Zulqarnain, Muhammad, Rozaida Ghazali, Yana Mazwin Mohmad Hassim, and Muhammad Rehan. "Text classification based on gated recurrent unit combines with support vector machine." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 4 (August 1, 2020): 3734. http://dx.doi.org/10.11591/ijece.v10i4.pp3734-3742.

Abstract:
As the amount of unstructured text data that humanity produces grows rapidly on the Internet, intelligent techniques are required to process it and extract different types of knowledge from it. Gated recurrent unit (GRU) and support vector machine (SVM) models have been successfully applied to natural language processing (NLP) systems with comparable, remarkable results. GRU networks perform well in sequential learning tasks and overcome the vanishing- and exploding-gradient issues of standard recurrent neural networks (RNNs) when capturing long-term dependencies. In this paper, we propose a text classification model that improves on this norm by presenting a linear support vector machine (SVM) as the replacement for softmax in the final output layer of a GRU model. Furthermore, the cross-entropy function is replaced with a margin-based function. Empirical results show that the proposed GRU-SVM model achieved comparatively better results than the baseline approaches BLSTM-C and DABN.
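The output-layer swap the abstract describes, softmax cross-entropy versus a margin-based (multiclass hinge) loss over the same class scores, can be sketched directly. The scores below are hypothetical; in the paper they would come from the final GRU state.

```python
import math

# Sketch (hypothetical scores, not the paper's network): comparing the two
# output-layer losses on one example whose true class is index 0.
def cross_entropy(scores, true_idx):
    """Softmax cross-entropy via a numerically stable log-sum-exp."""
    z = max(scores)
    log_sum = z + math.log(sum(math.exp(s - z) for s in scores))
    return log_sum - scores[true_idx]

def multiclass_hinge(scores, true_idx, margin=1.0):
    """SVM-style loss: penalise rivals within `margin` of the true score."""
    return sum(max(0.0, margin - scores[true_idx] + s)
               for i, s in enumerate(scores) if i != true_idx)

scores = [2.0, 0.5, -1.0]            # raw class scores for one example
ce = cross_entropy(scores, 0)        # always positive, pushes probabilities
hinge = multiclass_hinge(scores, 0)  # exactly zero once the margin is met
```

The design difference this illustrates: the hinge loss stops penalising an example once every rival class trails by the margin, whereas cross-entropy keeps rewarding ever-larger score gaps.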
15

Ziafat, Nishmia, Hafiz Farooq Ahmad, Iram Fatima, Muhammad Zia, Abdulaziz Alhumam, and Kashif Rajpoot. "Correct Pronunciation Detection of the Arabic Alphabet Using Deep Learning." Applied Sciences 11, no. 6 (March 11, 2021): 2508. http://dx.doi.org/10.3390/app11062508.

Abstract:
Automatic speech recognition for Arabic has its unique challenges, and there has been relatively slow progress in this domain; Classic Arabic, specifically, has received even less research attention. The correct pronunciation of the Arabic alphabet has significant implications for the meaning of words. In this work, we have designed learning models for Arabic alphabet classification based on the correct pronunciation of an alphabet, which is a challenging task for the research community. We divide the problem into two steps: first, we train a model to recognize an alphabet (Arabic alphabet classification); second, we train a model to determine its quality of pronunciation (Arabic alphabet pronunciation classification). Due to the limited availability of audio data of this kind, we collected audio data from experts and novices to train our models. To train these models, we extract pronunciation features from the audio data of the Arabic alphabet using the mel-spectrogram. We have employed a deep convolutional neural network (DCNN), AlexNet with transfer learning, and bidirectional long short-term memory (BLSTM), a type of recurrent neural network (RNN), for the classification of the audio data. For alphabet classification, DCNN, AlexNet, and BLSTM achieve accuracies of 95.95%, 98.41%, and 88.32%, respectively. For Arabic alphabet pronunciation classification, they achieve accuracies of 97.88%, 99.14%, and 77.71%, respectively.
16

Terra Vieira, Samuel, Renata Lopes Rosa, Demóstenes Zegarra Rodríguez, Miguel Arjona Ramírez, Muhammad Saadi, and Lunchakorn Wuttisittikulkij. "Q-Meter: Quality Monitoring System for Telecommunication Services Based on Sentiment Analysis Using Deep Learning." Sensors 21, no. 5 (March 8, 2021): 1880. http://dx.doi.org/10.3390/s21051880.

Abstract:
A quality monitoring system for telecommunication services is relevant for network operators because it can help to improve users' quality-of-experience (QoE). In this context, this article proposes a quality monitoring system, named Q-Meter, whose main objective is to improve subscriber complaint detection about telecommunication services using online social networks (OSNs). The complaint is detected by sentiment analysis performed by a deep learning algorithm, and the subscriber's geographical location is extracted to evaluate the signal strength. The regions in which users posted a complaint in an OSN are analyzed using a freeware application, which uses the radio base station (RBS) information provided by an open database. Experimental results demonstrated that sentiment analysis based on a convolutional neural network (CNN) and a bidirectional long short-term memory (BLSTM) recurrent neural network (RNN) with the soft-root-sign (SRS) activation function achieved a precision of 97% for weak-signal topic classification. Additionally, the results showed that 78.3% of the total number of complaints are related to weak coverage, and 92% of these regions were proven to have coverage problems for a specific cellular operator. Moreover, Q-Meter is low-cost and easy to integrate into current and next-generation cellular networks, and it will be useful in sensing and monitoring tasks.
17

Chhetri, Manoj, Sudhanshu Kumar, Partha Pratim Roy, and Byung-Gyu Kim. "Deep BLSTM-GRU Model for Monthly Rainfall Prediction: A Case Study of Simtokha, Bhutan." Remote Sensing 12, no. 19 (September 28, 2020): 3174. http://dx.doi.org/10.3390/rs12193174.

Abstract:
Rainfall prediction is an important task due to the dependence of many people on it, especially in the agriculture sector. Prediction is difficult, and even more complex due to the dynamic nature of rainfall. In this study, we carry out monthly rainfall prediction over Simtokha, a region in Thimphu, the capital of Bhutan. The rainfall data were obtained from the National Center of Hydrology and Meteorology Department (NCHM) of Bhutan. We study the predictive capability of Linear Regression, Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Long Short Term Memory (LSTM), Gated Recurrent Unit (GRU), and Bidirectional Long Short Term Memory (BLSTM) models based on the parameters recorded by the automatic weather station in the region. Furthermore, this paper proposes a BLSTM-GRU based model which outperforms the existing machine and deep learning models. Of the six existing models under study, LSTM recorded the best Mean Square Error (MSE) score of 0.0128. The proposed BLSTM-GRU model outperformed LSTM by 41.1% with an MSE score of 0.0075. Experimental results are encouraging and suggest that the proposed model can achieve lower MSE in rainfall prediction systems.
18

Kumar, S., M. Anand Kumar, and K. P. Soman. "Deep Learning Based Part-of-Speech Tagging for Malayalam Twitter Data (Special Issue: Deep Learning Techniques for Natural Language Processing)." Journal of Intelligent Systems 28, no. 3 (July 26, 2019): 423–35. http://dx.doi.org/10.1515/jisys-2017-0520.

Abstract:
The paper addresses the problem of part-of-speech (POS) tagging for Malayalam tweets. The conversational style of posts/tweets/text in social media data poses a challenge to using a general POS tagset for tagging the text. For the current work, a tagset was designed that contains 17 coarse tags, and 9915 tweets were tagged manually for experiment and evaluation. The tagged data were evaluated using sequential deep learning methods: recurrent neural networks (RNN), gated recurrent units (GRU), long short-term memory (LSTM), and bidirectional LSTM (BLSTM). The training of the model was performed on the tagged tweets at the word level and the character level. The experiments were evaluated using measures such as precision, recall, f1-measure, and accuracy. It was found that the GRU-based deep learning sequential model at the word level gave the highest f1-measure of 0.9254; at the character level, the BLSTM-based deep learning sequential model gave the highest f1-measure of 0.8739. To choose a suitable number of hidden states, we varied it over 4, 16, 32, and 64 and performed training for each. It was observed that increasing the number of hidden states improved the tagger model. This is an initial work on Malayalam Twitter data POS tagging using deep learning sequential models.
19

Hou, Hongwei, Kunzhi Tang, Xiaoqian Liu, and Yue Zhou. "Application of Artificial Intelligence Technology Optimized by Deep Learning to Rural Financial Development and Rural Governance." Journal of Global Information Management 30, no. 7 (September 2022): 1–23. http://dx.doi.org/10.4018/jgim.289220.

Abstract:
The aim of this article is to promote the development of rural finance and the further informatization of rural banks. Based on DL (deep learning) and artificial intelligence technology, data pre-processing and feature selection are conducted on the customer information of rural banks in a certain region, including historical deposits and loans, transaction records, and credit information. In addition, four DL models are proposed, each reaching a test precision of more than 87%, to improve the simulation effect and explore the application of DL. The BLSTM-CNN (Bi-directional Long Short-Term Memory-Convolutional Neural Network) model, which integrates an RNN (Recurrent Neural Network) and a CNN (Convolutional Neural Network) in parallel, reaches a precision of 95.8% and compensates for the respective shortcomings of RNN and CNN used separately. The research results can provide a more reasonable prediction model for rural banks, as well as ideas for the development of rural informatization and the promotion of rural governance.
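A minimal sketch of the parallel integration described here, assuming (hypothetically) that each branch has already reduced a customer record to a feature vector; the real BLSTM-CNN model learns both branches jointly:

```python
def fuse_parallel(rnn_features, cnn_features):
    """Concatenate the output vectors of two branches run in parallel."""
    return list(rnn_features) + list(cnn_features)

# Hypothetical branch outputs for one customer record
rnn_out = [0.2, -0.1]   # sequence branch, e.g. transaction history
cnn_out = [0.7, 0.05]   # local-pattern branch, e.g. credit features
fused = fuse_parallel(rnn_out, cnn_out)
print(len(fused))  # → 4
```

A downstream classifier then sees both views of the record at once, which is what lets the parallel design cover each branch's blind spots.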
20

Javeed, Danish, Tianhan Gao, Muhammad Taimoor Khan, and Ijaz Ahmad. "A Hybrid Deep Learning-Driven SDN Enabled Mechanism for Secure Communication in Internet of Things (IoT)." Sensors 21, no. 14 (July 18, 2021): 4884. http://dx.doi.org/10.3390/s21144884.

Abstract:
The Internet of Things (IoT) has emerged as a new technological world connecting billions of devices. Despite providing several benefits, the heterogeneous nature and the extensive connectivity of the devices make them a target of different cyberattacks that result in data breaches and financial loss. There is a severe need to secure the IoT environment from such attacks. In this paper, an SDN-enabled deep-learning-driven framework is proposed for threat detection in an IoT environment. The state-of-the-art Cuda-deep neural network gated recurrent unit (Cu-DNNGRU) and Cuda-bidirectional long short-term memory (Cu-BLSTM) classifiers are adopted for effective threat detection. We have performed 10-fold cross-validation to show the unbiasedness of the results. The up-to-date publicly available CICIDS2018 data set is used to train our hybrid model. The achieved accuracy of the proposed scheme is 99.87%, with a recall of 99.96%. Furthermore, we compare the proposed hybrid model with the Cuda-gated recurrent unit long short-term memory (Cu-GRULSTM) and Cuda-deep neural network long short-term memory (Cu-DNNLSTM) models, as well as with existing benchmark classifiers. Our proposed mechanism achieves impressive results in terms of accuracy, F1-score, precision, speed efficiency, and other evaluation metrics.
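The 10-fold cross-validation mentioned above partitions the data set into ten disjoint test folds, training on the other nine each time; a self-contained index-splitting sketch (not tied to CICIDS2018):

```python
def kfold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation."""
    # distribute the remainder so fold sizes differ by at most one
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(kfold_indices(25, k=10))
print(len(splits))  # → 10
```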
21

Otte, S., L. Wittig, G. Hüttmann, C. Kugler, D. Drömann, A. Zell, A. Schlaefer, and C. Otte. "Investigating Recurrent Neural Networks for OCT A-scan Based Tissue Analysis." Methods of Information in Medicine 53, no. 04 (2014): 245–49. http://dx.doi.org/10.3414/me13-01-0135.

Abstract:
Objectives: Optical Coherence Tomography (OCT) has been proposed as a high-resolution imaging modality to guide transbronchial biopsies. In this study we address the question of whether individual A-scans obtained in needle direction can contribute to the identification of pulmonary nodules. Methods: OCT A-scans from freshly resected human lung tissue specimens were recorded through a customized needle with an embedded optical fiber. Bidirectional Long Short-Term Memory networks (BLSTMs) were trained on randomly distributed training and test sets of the acquired A-scans. Patient-specific training and different pre-processing steps were evaluated. Results: Classification rates from 67.5% up to 76% were achieved for different training scenarios. Sensitivity and specificity were highest for patient-specific training, at 0.87 and 0.85. Low-pass filtering decreased the accuracy from 73.2% on a reference distribution to 62.2% for higher cutoff frequencies and to 56% for lower cutoff frequencies. Conclusion: The results indicate that a grey-value-based classification is feasible and may provide additional information for diagnosis and navigation. Furthermore, the experiments show patient-specific signal properties and indicate that both the lower and upper parts of the frequency spectrum contribute to the classification.
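The low-pass filtering whose effect on accuracy is reported above can be as simple as a moving average over each A-scan; a sketch of such a filter (the window size here is an arbitrary choice, not the study's cutoff):

```python
def moving_average(signal, window):
    """Simple low-pass filter: replace each sample by the mean of a centered window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# A rapidly alternating (high-frequency) signal is strongly attenuated
raw = [1.0, -1.0] * 10
smoothed = moving_average(raw, 3)
print(max(abs(v) for v in smoothed) < 1.0)  # → True
```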
22

Feigl, Tobias, Sebastian Kram, Philipp Woller, Ramiz H. Siddiqui, Michael Philippsen, and Christopher Mutschler. "RNN-Aided Human Velocity Estimation from a Single IMU." Sensors 20, no. 13 (June 29, 2020): 3656. http://dx.doi.org/10.3390/s20133656.

Abstract:
Pedestrian Dead Reckoning (PDR) uses inertial measurement units (IMUs) and combines velocity and orientation estimates to determine a position. The estimation of the velocity is still challenging, as the integration of noisy acceleration and angular-speed signals over a long period of time causes large drifts. Classic approaches to estimating the velocity optimize for specific applications, sensor positions, and types of movement, and require extensive parameter tuning. Our novel hybrid filter combines a convolutional neural network (CNN) and a bidirectional long short-term memory (BLSTM) recurrent network (which extract spatial features from the sensor signals and track their temporal relationships) with a linear Kalman filter (LKF) that improves the velocity estimates. Our experiments show robustness against different movement states and changes in orientation, even in highly dynamic situations. We compare the new architecture with conventional, machine, and deep learning methods and show that, from a single non-calibrated IMU, our novel architecture outperforms the state of the art in terms of velocity (≤0.16 m/s) and traveled distance (≤3 m/km). It also generalizes well to different and varying movement speeds and provides accurate and precise velocity estimates.
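The LKF stage refines a noisy velocity estimate with each new measurement; a one-dimensional measurement-update sketch with toy numbers, not the paper's actual filter design:

```python
def kalman_update(x, p, z, r):
    """One measurement update of a 1-D Kalman filter.

    x, p: prior state estimate and its variance
    z, r: measurement and its variance
    """
    k = p / (p + r)            # Kalman gain: trust the measurement more when r is small
    x_new = x + k * (z - x)    # corrected estimate, pulled toward the measurement
    p_new = (1 - k) * p        # posterior variance always shrinks
    return x_new, p_new

# Fuse a rough velocity guess (1.0 m/s, variance 0.5) with a measurement (1.2 m/s)
x, p = kalman_update(1.0, 0.5, 1.2, 0.1)
print(p < 0.5)  # → True: uncertainty decreased
```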
23

Lynn, Htet Myet, Pankoo Kim, and Sung Bum Pan. "Data Independent Acquisition Based Bi-Directional Deep Networks for Biometric ECG Authentication." Applied Sciences 11, no. 3 (January 26, 2021): 1125. http://dx.doi.org/10.3390/app11031125.

Abstract:
In this report, non-fiducial approaches to Electrocardiogram (ECG) biometric authentication are examined, and several techniques are proposed for comparative experiments to evaluate the best possible approach for all the classification tasks. Non-fiducial methods are designed to extract the discriminative information of a signal without annotating fiducial points. However, this process requires peak detection to identify a heartbeat signal. Recent studies usually rely on heartbeat segmentation, for which QRS detection is required, and the process can be complicated for ECG signals in which the QRS complex is absent. Thus, many studies only conduct biometric authentication tasks on ECG signals with QRS complexes, and are hindered by similar limitations. To overcome this issue, we proposed a data-independent acquisition method to facilitate highly generalizable signal processing and feature learning processes. This is achieved by enhancing random segmentation to avoid complicated fiducial feature extraction, along with auto-correlation to eliminate the phase difference due to random segmentation. Subsequently, a bidirectional recurrent neural network (RNN) with long short-term memory (LSTM) deep networks is utilized to automatically learn the features associated with the signal and to perform an authentication task. The experimental results suggest that the proposed data-independent approach using a BLSTM network achieves a relatively high classification accuracy on every dataset relative to the compared techniques. Moreover, it exhibited a significantly higher accuracy rate in experiments using ECG signals without the QRS complex. The results also revealed that data-dependent methods can only perform well for specified data types and amendments of data variations, whereas the presented approach can also be generalized to other quasi-periodic biometric signal-based classification tasks in future studies.
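The auto-correlation step removes the phase offset introduced by random segmentation; circular autocorrelation is invariant under circular shifts of the signal, which a short sketch can demonstrate (the signal values below are arbitrary):

```python
def circular_autocorr(x):
    """Circular autocorrelation; identical for any circular shift of x."""
    n = len(x)
    return [sum(x[i] * x[(i + lag) % n] for i in range(n)) for lag in range(n)]

sig = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0]
shifted = sig[2:] + sig[:2]   # a different random segmentation offset
print(circular_autocorr(sig) == circular_autocorr(shifted))  # → True
```

Because two random segmentations of the same quasi-periodic signal differ mainly by such a shift, their autocorrelations present the learner with phase-aligned features.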
24

Zheng, Chunjun, Chunli Wang, and Ning Jia. "An Ensemble Model for Multi-Level Speech Emotion Recognition." Applied Sciences 10, no. 1 (December 26, 2019): 205. http://dx.doi.org/10.3390/app10010205.

Abstract:
Speech emotion recognition is a challenging and widely examined research topic in the field of speech processing. The accuracy of existing models in speech emotion recognition tasks is not high, and their generalization ability is not strong. Since the feature set and model design directly affect the accuracy of speech emotion recognition, research on features and models is important. Because emotional expression is often correlated with the global features, local features, and model design of speech, it is often difficult to find a universal solution for effective speech emotion recognition. Based on this, the main research purpose of this paper is to generate general emotion features in speech signals from different angles and to use an ensemble learning model to perform emotion recognition tasks. The work is divided into the following aspects: (1) Three expert roles for speech emotion recognition are designed. Expert 1 focuses on three-dimensional feature extraction of local signals; expert 2 focuses on extraction of comprehensive information in local data; and expert 3 emphasizes global features: acoustic feature descriptors (low-level descriptors (LLDs)), high-level statistics functionals (HSFs), and local features and their timing relationships. A single-/multiple-level deep learning model that matches each expert's characteristics is designed, including convolutional neural network (CNN), bi-directional long short-term memory (BLSTM), and gated recurrent unit (GRU) components. A convolutional recurrent neural network (CRNN) combined with an attention mechanism is used for the internal training of experts. (2) An ensemble learning model is designed so that each expert can play to its own advantages and evaluate speech emotions from a different focus. (3) Through experiments, the performance of the various experts and ensemble learning models in emotion recognition is compared on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus, and the validity of the proposed model is verified.
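The ensemble stage can be as simple as a weighted average of the per-expert class posteriors; a sketch under that assumption (the expert outputs below are made up, not IEMOCAP results):

```python
def ensemble_average(expert_probs, weights=None):
    """Weighted average of per-expert class probability vectors."""
    n_experts = len(expert_probs)
    weights = weights or [1.0 / n_experts] * n_experts
    n_classes = len(expert_probs[0])
    return [sum(w * p[c] for w, p in zip(weights, expert_probs))
            for c in range(n_classes)]

# Three hypothetical experts scoring four emotion classes
experts = [[0.7, 0.1, 0.1, 0.1],
           [0.5, 0.3, 0.1, 0.1],
           [0.6, 0.2, 0.1, 0.1]]
fused = ensemble_average(experts)
print(fused.index(max(fused)))  # → 0: all experts agree on class 0
```

Unequal weights let the ensemble lean on whichever expert is most reliable for a given focus (local vs. global features).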
25

Qin, Tianyun, Rangding Wang, Diqun Yan, and Lang Lin. "Source Cell-Phone Identification in the Presence of Additive Noise from CQT Domain." Information 9, no. 8 (August 17, 2018): 205. http://dx.doi.org/10.3390/info9080205.

Abstract:
With the widespread availability of cell-phone recording devices, source cell-phone identification has become a hot topic in multimedia forensics. At present, research on source cell-phone identification in clean conditions has achieved good results, but results in noisy environments are not ideal. This paper proposes a novel source cell-phone identification system suitable for both clean and noisy environments, using spectral distribution features of the constant Q transform (CQT) domain and a multi-scene training method. Based on the analysis, it is found that the identification difficulty lies in different models of cell-phones of the same brand, whose tiny differences are mainly in the middle and low frequency bands. Therefore, this paper extracts spectral distribution features from the CQT domain, which has a higher frequency resolution in the mid-low frequencies. To evaluate the effectiveness of the proposed feature, four classification techniques, Support Vector Machine (SVM), Random Forest (RF), Convolutional Neural Network (CNN), and Recurrent Neural Network with Bidirectional Long Short-Term Memory (RNN-BLSTM), are used to identify the source recording device. Experimental results show that the features proposed in this paper have superior performance. Compared with the Mel frequency cepstral coefficient (MFCC) and linear frequency cepstral coefficient (LFCC), they enhance the accuracy for cell-phones of the same brand, whether the speech to be tested comprises clean or noisy speech files. In addition, the CNN classification effect is outstanding. In terms of models, the model is established by the multi-scene training method, which improves the model's discriminative ability in noisy environments compared with the single-scene training method. The average accuracy rate in CNN for clean speech files on the CKC Speech Database (CKC-SD) and TIMIT Recaptured Database (TIMIT-RD) increased from 95.47% and 97.89% to 97.08% and 99.29%, respectively. For noisy speech files with seen and unseen noise types, the performance was greatly improved, and most of the recognition rates exceeded 90%. Therefore, the source identification system in this paper is robust to noise.
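The CQT's higher resolution at low frequencies comes from its geometrically spaced bin centers, f_k = f_min · 2^(k/b) for b bins per octave; a small sketch (f_min is chosen arbitrarily here):

```python
def cqt_center_freqs(f_min, bins_per_octave, n_bins):
    """Geometrically spaced CQT bin centers: f_k = f_min * 2**(k / b)."""
    return [f_min * 2 ** (k / bins_per_octave) for k in range(n_bins)]

freqs = cqt_center_freqs(32.7, 12, 24)       # two octaves, 12 bins each
spacings = [b - a for a, b in zip(freqs, freqs[1:])]
print(spacings[0] < spacings[-1])  # → True: bins are denser at low frequencies
```

The linearly spaced bins of an ordinary STFT have constant spacing instead, which is why the CQT better resolves the mid-low bands where same-brand devices differ.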
26

Grossberg, Stephen. "Recurrent neural networks." Scholarpedia 8, no. 2 (2013): 1888. http://dx.doi.org/10.4249/scholarpedia.1888.

27

Bitzer, Sebastian, and Stefan J. Kiebel. "Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks." Biological Cybernetics 106, no. 4-5 (May 12, 2012): 201–17. http://dx.doi.org/10.1007/s00422-012-0490-x.

28

Schuster, M., and K. K. Paliwal. "Bidirectional recurrent neural networks." IEEE Transactions on Signal Processing 45, no. 11 (1997): 2673–81. http://dx.doi.org/10.1109/78.650093.

29

Passricha, Vishal, and Rajesh Kumar Aggarwal. "A Hybrid of Deep CNN and Bidirectional LSTM for Automatic Speech Recognition." Journal of Intelligent Systems 29, no. 1 (March 5, 2019): 1261–74. http://dx.doi.org/10.1515/jisys-2018-0372.

Abstract:
Deep neural networks (DNNs) have been playing a significant role in acoustic modeling. Convolutional neural networks (CNNs) are an advanced version of DNNs that achieve a 4–12% relative gain in word error rate (WER) over DNNs. The existence of spectral variations and local correlations in the speech signal makes CNNs more capable of speech recognition. Recently, it has been demonstrated that bidirectional long short-term memory (BLSTM) produces higher recognition rates in acoustic modeling because it is adequate to reinforce higher-level representations of acoustic data. Spatial and temporal properties of the speech signal are essential for a high recognition rate, which motivates combining the two networks. In this paper, a hybrid CNN-BLSTM architecture is proposed to appropriately use these properties and to improve continuous speech recognition. Further, we explore different methods like weight sharing, the appropriate number of hidden units, and the ideal pooling strategy for CNN to achieve a high recognition rate. Specifically, the focus is also on how many BLSTM layers are effective. This paper also attempts to overcome another shortcoming of CNNs, i.e. speaker-adapted features, which cannot be directly modeled in a CNN. Next, various non-linearities with or without dropout are analyzed for speech tasks. Experiments indicate that the proposed hybrid architecture with speaker-adapted features and maxout non-linearity with dropout shows 5.8% and 10% relative decreases in WER over the CNN and DNN systems, respectively.
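The maxout non-linearity that gave the best results here takes the maximum over groups of linear activations instead of applying a fixed function like ReLU; a minimal sketch:

```python
def maxout(z, group_size):
    """Maxout non-linearity: max over each consecutive group of linear activations."""
    return [max(z[i:i + group_size]) for i in range(0, len(z), group_size)]

# Four linear activations, groups of two → two maxout outputs
print(maxout([0.2, -1.3, 0.7, 0.1], 2))  # → [0.2, 0.7]
```

Because the unit learns the linear pieces themselves, maxout can approximate an arbitrary convex activation, and it pairs naturally with dropout as the abstract describes.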
30

Ma, Xiao, Peter Karkus, David Hsu, and Wee Sun Lee. "Particle Filter Recurrent Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5101–8. http://dx.doi.org/10.1609/aaai.v34i04.5952.

Abstract:
Recurrent neural networks (RNNs) have been extraordinarily successful for prediction with sequential data. To tackle highly variable and multi-modal real-world data, we introduce Particle Filter Recurrent Neural Networks (PF-RNNs), a new RNN family that explicitly models uncertainty in its internal structure: while an RNN relies on a long, deterministic latent state vector, a PF-RNN maintains a latent state distribution, approximated as a set of particles. For effective learning, we provide a fully differentiable particle filter algorithm that updates the PF-RNN latent state distribution according to the Bayes rule. Experiments demonstrate that the proposed PF-RNNs outperform the corresponding standard gated RNNs on a synthetic robot localization dataset and 10 real-world sequence prediction datasets for text classification, stock price prediction, etc.
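The Bayes-rule reweighting at the core of a particle filter (and hence of the PF-RNN latent-state update) can be sketched independently of the learned model; the observation likelihood below is a made-up example:

```python
def bayes_update(particles, weights, likelihood):
    """Reweight particles by the observation likelihood and renormalize (Bayes rule)."""
    new_w = [w * likelihood(p) for p, w in zip(particles, weights)]
    total = sum(new_w)
    return [w / total for w in new_w]

particles = [0.0, 1.0, 2.0, 3.0]      # candidate latent states
weights = [0.25] * 4                  # uniform prior belief
# Hypothetical likelihood peaked near an observation at 2.0
obs_lik = lambda p: 1.0 / (1.0 + (p - 2.0) ** 2)
weights = bayes_update(particles, weights, obs_lik)
print(weights.index(max(weights)))  # → 2: belief concentrates on the particle nearest 2.0
```

In a PF-RNN the particles are latent-state vectors and the likelihood is produced by the network, but the update rule is the same and stays fully differentiable.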
31

KAWAMURA, Yoshiaki. "Learning for Recurrent Neural Networks." Journal of Japan Society for Fuzzy Theory and Systems 7, no. 1 (1995): 52–56. http://dx.doi.org/10.3156/jfuzzy.7.1_52.

32

Sutskever, Ilya, and Geoffrey Hinton. "Temporal-Kernel Recurrent Neural Networks." Neural Networks 23, no. 2 (March 2010): 239–43. http://dx.doi.org/10.1016/j.neunet.2009.10.009.

33

Gavaldà, Ricard, and Hava T. Siegelmann. "Discontinuities in Recurrent Neural Networks." Neural Computation 11, no. 3 (April 1, 1999): 715–45. http://dx.doi.org/10.1162/089976699300016638.

Abstract:
This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNN augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous, but they boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model, when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN that are known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.
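The contrast the article draws between continuous sigmoid-like activations and discontinuous threshold (zero-test) neurons is easy to see numerically:

```python
import math

def sigmoid(x):
    """Continuous activation of the classical ARNN neuron."""
    return 1.0 / (1.0 + math.exp(-x))

def threshold(x):
    """Discontinuous neuron: jumps from 0 to 1 at x = 0."""
    return 1.0 if x >= 0 else 0.0

# Near x = 0 the sigmoid barely moves while the threshold unit jumps
print(threshold(-1e-9), threshold(1e-9))                   # → 0.0 1.0
print(round(sigmoid(-1e-9), 6), round(sigmoid(1e-9), 6))   # → 0.5 0.5
```

It is exactly this jump that, per the article, lets an ARNN augmented with a few such units compute far beyond the continuous model.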
34

Jinmiao Chen and N. S. Chaudhari. "Segmented-Memory Recurrent Neural Networks." IEEE Transactions on Neural Networks 20, no. 8 (August 2009): 1267–80. http://dx.doi.org/10.1109/tnn.2009.2022980.

35

Samuelides, M., and B. Cessac. "Random recurrent neural networks dynamics." European Physical Journal Special Topics 142, no. 1 (March 2007): 89–122. http://dx.doi.org/10.1140/epjst/e2007-00059-1.

36

Cheng, Chang-Yuan, Kuang-Hui Lin, and Chih-Wen Shih. "Multistability in Recurrent Neural Networks." SIAM Journal on Applied Mathematics 66, no. 4 (January 2006): 1301–20. http://dx.doi.org/10.1137/050632440.

37

Ruiz, Luana, Fernando Gama, and Alejandro Ribeiro. "Gated Graph Recurrent Neural Networks." IEEE Transactions on Signal Processing 68 (2020): 6303–18. http://dx.doi.org/10.1109/tsp.2020.3033962.

38

Santini, Simone, Alberto Del Bimbo, and Ramesh Jain. "Block-structured recurrent neural networks." Neural Networks 8, no. 1 (January 1995): 135–47. http://dx.doi.org/10.1016/0893-6080(94)00060-y.

39

Hunt, Andrew. "Recurrent neural networks for syllabification." Speech Communication 13, no. 3-4 (December 1993): 323–32. http://dx.doi.org/10.1016/0167-6393(93)90031-f.

40

White, Halbert. "Learning in recurrent neural networks." Mathematical Social Sciences 22, no. 1 (August 1991): 102–3. http://dx.doi.org/10.1016/0165-4896(91)90073-z.

41

Imam, Nabil. "Wiring up recurrent neural networks." Nature Machine Intelligence 3, no. 9 (September 2021): 740–41. http://dx.doi.org/10.1038/s42256-021-00391-2.

42

Pu, Yi-Fei, Zhang Yi, and Ji-Liu Zhou. "Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks." IEEE Transactions on Neural Networks and Learning Systems 28, no. 10 (October 2017): 2319–33. http://dx.doi.org/10.1109/tnnls.2016.2582512.

43

Shchetinin, Eugene Yu, and Leonid Sevastianov. "Improving the Learning Power of Artificial Intelligence Using Multimodal Deep Learning." EPJ Web of Conferences 248 (2021): 01017. http://dx.doi.org/10.1051/epjconf/202124801017.

Abstract:
Computer paralinguistic analysis is widely used in security systems, biometric research, call centers and banks. Paralinguistic models estimate different physical properties of voice, such as pitch, intensity, formants and harmonics, to classify emotions. The main goal is to find features that are robust to outliers while retaining the variety of human voice properties. Moreover, the model used must be able to estimate features on a time scale for an effective analysis of voice variability. In this paper a paralinguistic model based on a Bidirectional Long Short-Term Memory (BLSTM) neural network is described, which was trained for voice-based emotion recognition. The main advantage of this network architecture is that each module of the network consists of several interconnected layers, providing the ability to recognize flexible long-term dependencies in data, which is important in the context of vocal analysis. We explain the architecture of a bidirectional neural network model and its main advantages over regular neural networks, and compare the experimental results of the BLSTM network with other models.
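The bidirectional structure described here amounts to running one recurrence forward and one backward over the sequence and pairing their states at each step; a scalar sketch with fixed toy weights (a real BLSTM uses learned LSTM cells instead):

```python
import math

def rnn_pass(seq, h0=0.0, w_x=0.5, w_h=0.8):
    """Unidirectional recurrence: h_t = tanh(w_x * x_t + w_h * h_{t-1})."""
    h, states = h0, []
    for x in seq:
        h = math.tanh(w_x * x + w_h * h)
        states.append(h)
    return states

def bidirectional_pass(seq):
    """Pair each step's forward state with the backward state at the same step."""
    fwd = rnn_pass(seq)
    bwd = list(reversed(rnn_pass(list(reversed(seq)))))
    return list(zip(fwd, bwd))

out = bidirectional_pass([0.1, 0.4, -0.2])  # e.g. a short pitch contour
print(len(out))  # → 3: one (forward, backward) state pair per time step
```

Each output thus summarizes both the past and the future of the utterance at that instant, which is what makes the architecture attractive for emotion cues that unfold over time.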
44

Tamura, Akihiro, Taro Watanabe, and Eiichiro Sumita. "Recurrent Neural Networks for Word Alignment." Journal of Natural Language Processing 22, no. 4 (2015): 289–312. http://dx.doi.org/10.5715/jnlp.22.289.

45

Park, Sungrae, Kyungwoo Song, Mingi Ji, Wonsung Lee, and Il-Chul Moon. "Adversarial Dropout for Recurrent Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4699–706. http://dx.doi.org/10.1609/aaai.v33i01.33014699.

Abstract:
Successful processing of sequential data, such as text and speech, requires improved generalization performance of recurrent neural networks (RNNs). Dropout techniques for RNNs were introduced to meet these demands, but we conjecture that dropout on RNNs can be improved by adopting the adversarial concept. This paper investigates ways to improve dropout for RNNs by utilizing intentionally generated dropout masks. Specifically, the guided dropout used in this research is called adversarial dropout; it adversarially disconnects neurons that are dominantly used to predict correct targets over time. Our analysis showed that our regularizer, which consists of a gap between the original and the reconfigured RNNs, is an upper bound on the gap between the training and inference phases of random dropout. We demonstrated that minimizing our regularizer improved the effectiveness of dropout for RNNs on sequential MNIST tasks, semi-supervised text classification tasks, and language modeling tasks.
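For reference, the standard random inverted dropout that adversarial dropout builds on; the adversarial variant replaces the random mask with one chosen to maximally hurt the prediction, which is not sketched here:

```python
import random

def dropout(units, rate, rng):
    """Inverted dropout: zero each unit with probability `rate`, rescale the rest."""
    keep = 1.0 - rate
    return [u / keep if rng.random() < keep else 0.0 for u in units]

rng = random.Random(0)
activations = [1.0] * 1000
dropped = dropout(activations, rate=0.5, rng=rng)
kept = sum(1 for u in dropped if u != 0.0)
print(0 < kept < 1000)  # → True: some units survive (rescaled), others are zeroed
```

The 1/keep rescaling keeps the expected activation unchanged, so the same network can be used at inference with no mask at all; the train/inference gap that remains is exactly what the paper's regularizer bounds.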
46

Mu, Yangzi, Mengxing Huang, Chunyang Ye, and Qingzhou Wu. "Diagnosis Prediction via Recurrent Neural Networks." International Journal of Machine Learning and Computing 8, no. 2 (April 2018): 117–20. http://dx.doi.org/10.18178/ijmlc.2018.8.2.673.

47

S B, Chandini. "Intrusion Detection using Recurrent Neural Networks." International Journal for Research in Applied Science and Engineering Technology 8, no. 6 (June 30, 2020): 2050–52. http://dx.doi.org/10.22214/ijraset.2020.6335.

48

Freitag, Steffen, Wolfgang Graf, and Michael Kaliske. "Recurrent neural networks for fuzzy data." Integrated Computer-Aided Engineering 18, no. 3 (June 17, 2011): 265–80. http://dx.doi.org/10.3233/ica-2011-0373.

49

Garzon, Max, and Fernanda Botelho. "Dynamical approximation by recurrent neural networks." Neurocomputing 29, no. 1-3 (November 1999): 25–46. http://dx.doi.org/10.1016/s0925-2312(99)00114-9.

50

Dobnikar, Andrej, and Branko Šter. "Structural Properties of Recurrent Neural Networks." Neural Processing Letters 29, no. 2 (February 12, 2009): 75–88. http://dx.doi.org/10.1007/s11063-009-9096-2.

