Academic literature on the topic 'GRU – Gated Recurrent Unit'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'GRU – Gated Recurrent Unit.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "GRU – Gated Recurrent Unit"

1

Hong, Binjie, Zhijie Yan, Yingxi Chen, and Xiaobo Jin. "Long Memory Gated Recurrent Unit for Time Series Classification." Journal of Physics: Conference Series 2278, no. 1 (2022): 012017. http://dx.doi.org/10.1088/1742-6596/2278/1/012017.

Full text
Abstract:
Time series analysis is an important and challenging problem in data mining, where a time series is a class of temporal data objects. In the classification task, the label depends on features from the most recent moments. Because of this time dependency, recurrent neural networks, one of the prevalent learning-based architectures, take advantage of the relations among historical data. The Long Short-Term Memory (LSTM) network and the Gated Recurrent Unit (GRU) are two popular artificial recurrent neural networks used in the field of deep learning. LSTM uses a gate-based mechanism to control short- and long-term historical information, and GRU simplifies those gates to obtain more efficient training. In our work, we propose a new model called the Long Memory Gated Recurrent Unit (LMGRU), based on these two remarkable models, in which a reset gate is introduced to reset the stored cell value of the LSTM model while the forget gate and the input gate are omitted. The experimental results on several time series benchmarks show that LMGRU achieves better effectiveness and efficiency than LSTM and GRU.
APA, Harvard, Vancouver, ISO, and other styles
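As a quick orientation for readers meeting the update and reset gates mentioned in the abstracts on this page, here is a minimal NumPy sketch of one step of a standard GRU cell (the common Cho et al., 2014 formulation), not the LMGRU variant proposed in the paper above; all weight names and shapes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_z, U_z, b_z, W_r, U_r, b_r, W_h, U_h, b_h):
    """One step of a standard GRU cell: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)               # update gate: how much new information to take in
    r = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)               # reset gate: how much past state to expose
    h_tilde = np.tanh(W_h @ x_t + U_h @ (r * h_prev) + b_h)   # candidate hidden state
    return (1.0 - z) * h_prev + z * h_tilde                   # blend old state and candidate
```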
2

Arfianti, Unix Izyah, Dian Candra Rini Novitasari, Nanang Widodo, Moh Hafiyusholeh, and Wika Dianita Utami. "Sunspot Number Prediction Using Gated Recurrent Unit (GRU) Algorithm." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 15, no. 2 (2021): 141. http://dx.doi.org/10.22146/ijccs.63676.

Full text
Abstract:
A sunspot is a dark-colored area on the photosphere layer of the Sun. Sunspots are important to research because sunspot numbers indicate the level of solar activity. This research was conducted to predict sunspot numbers using the Gated Recurrent Unit (GRU) algorithm. The working principle of GRU is similar to the Long Short-Term Memory (LSTM) method: the information from the previous memory is processed through two gates, namely the update gate and the reset gate, and the generated output becomes the input for the next step. The purpose of predicting sunspot numbers is to know sunspot numbers in the future, so that a significant increase in sunspot numbers can warn of the other physical consequences it may cause. The data used were monthly sunspot numbers obtained from the SILSO website. The data division and parameters were chosen from the trials that produced the smallest MAPE value. The smallest MAPE obtained was 7.171% with 70% training data, 30% testing data, 150 hidden units, a batch size of 32, and a learning rate drop of 100.
APA, Harvard, Vancouver, ISO, and other styles
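As a rough illustration of the setup described in this abstract, here is a hedged TensorFlow/Keras sketch of a univariate GRU forecaster with 150 hidden units, a batch size of 32, and a 70/30 train/test split; the window length, number of epochs, optimizer, and the file name monthly_sunspots.csv are placeholders, not the paper's exact configuration.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=12):
    """Slice a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None], series[window:]

series = np.loadtxt("monthly_sunspots.csv")      # hypothetical export of SILSO monthly sunspot numbers
X, y = make_windows(series)
split = int(0.7 * len(X))                        # 70% training / 30% testing, as in the abstract

model = tf.keras.Sequential([
    tf.keras.layers.GRU(150, input_shape=X.shape[1:]),       # 150 hidden units, as reported
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], batch_size=32, epochs=100,   # batch size 32, as reported
          validation_data=(X[split:], y[split:]))
```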
3

Subair, Hilma, R. Pangayar Selvi, R. Vasanthi, S. Kokilavani, and V. Karthick. "Minimum Temperature Forecasting Using Gated Recurrent Unit." International Journal of Environment and Climate Change 13, no. 9 (2023): 2681–88. http://dx.doi.org/10.9734/ijecc/2023/v13i92499.

Full text
Abstract:
Aim: To forecast the monthly average Minimum Temperature (ºC) in Coimbatore district.
 Study Design: Gated Recurrent Unit (GRU) has been employed to forecast the Minimum Temperature.
 Place and Duration of Study: Time series data for average month wise Minimum Temperature from January 1982 to September 2022 was collected from Agro Climate Research Centre, TNAU for Coimbatore District.
 Methodology: GRU, which belongs to the field of deep learning, has been employed to anticipate the average monthly Minimum Temperature by analyzing time series data from January 1982 to September 2022 in the district of Coimbatore. The model was trained on data from January 1982 to December 2019 and tested on data from January 2020 to September 2022. After training and testing, the algorithm was deployed to forecast the Minimum Temperature for the lead time ahead.
 Results: The GRU model generated RMSE and MAE scores of 0.694ºC and 0.523ºC, respectively, for Minimum Temperature. The GRU model had a Willmott's Index of Agreement (WI) value of 0.943, which is very close to 1. This demonstrates the effectiveness of the model in predicting the Minimum Temperature. The evaluation of the RMSE, MAE, and Willmott Index values made it evident that the GRU model forecast the Minimum Temperature quite accurately. The Gated Recurrent Unit algorithm was then used to forecast the Minimum Temperature from October 2022 to December 2023, i.e., for the next 15 months.
APA, Harvard, Vancouver, ISO, and other styles
4

Achmad, Rizkial, Yokelin Tokoro, Jusuf Haurissa, and Andik Wijanarko. "Recurrent Neural Network-Gated Recurrent Unit for Indonesia-Sentani Papua Machine Translation." Journal of Information Systems and Informatics 5, no. 4 (2023): 1449–60. http://dx.doi.org/10.51519/journalisi.v5i4.597.

Full text
Abstract:
The Papuan Sentani language is spoken in the city of Jayapura, Papua, and the law states the need to preserve regional languages. One way to do so is to build an Indonesian–Sentani Papua machine translation system. The problem is how to build such a translation system and which model to choose. The model chosen is the Recurrent Neural Network – Gated Recurrent Unit (RNN-GRU), which has been widely used to build machine translation for regional languages in Indonesia. The method used is an experiment that starts with creating a parallel corpus, continues with training on the corpus using the RNN-GRU model, and ends with an evaluation using the Bilingual Evaluation Understudy (BLEU) score. The parallel corpus used contains 281 sentences, each with an average length of 8 words. The training time required was 3 hours without using a GPU. The result of this research was a fairly good BLEU score of 35.3, which means that the RNN-GRU model and the parallel corpus produced sufficient translation quality that can still be improved.
APA, Harvard, Vancouver, ISO, and other styles
5

Darmawahyuni, Annisa, Siti Nurmaini, Muhammad Naufal Rachmatullah, Firdaus Firdaus, and Bambang Tutuko. "Unidirectional-bidirectional recurrent networks for cardiac disorders classification." TELKOMNIKA (Telecommunication, Computing, Electronics and Control) 19, no. 3 (2021): 902–10. https://doi.org/10.12928/telkomnika.v19i3.18876.

Full text
Abstract:
The deep learning approach of supervised recurrent network classifiers, i.e., recurrent neural networks (RNNs), long short-term memory (LSTM), and gated recurrent units (GRUs), is used in this study. The unidirectional and bidirectional phases for each cardiac disorders (CDs) class are also compared. Comparing both phases is needed to figure out the optimum phase and the best model performance for ECG, using the Physionet dataset to classify five classes of CDs from 15-lead ECG signals. The results show that the bidirectional RNN produces better results than the unidirectional one. In contrast to RNNs, the unidirectional LSTM and GRU outperformed their bidirectional counterparts. The best recurrent network classifier is the unidirectional GRU, with average accuracy, sensitivity, specificity, precision, and F1-score of 98.50%, 95.54%, 98.42%, 89.93%, and 92.31%, respectively. Overall, deep learning is a promising method for improved ECG classification.
APA, Harvard, Vancouver, ISO, and other styles
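To make the unidirectional/bidirectional comparison in this abstract concrete, below is a hedged tf.keras sketch of the two phases for a sequence classifier; the layer size, sequence length, and the 15-lead/5-class shapes are illustrative assumptions, not the study's actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

n_timesteps, n_leads, n_classes = 1000, 15, 5        # illustrative shapes: 15-lead ECG, five CD classes

def build_classifier(bidirectional: bool) -> tf.keras.Model:
    rnn = layers.GRU(64)                              # unidirectional phase
    if bidirectional:
        rnn = layers.Bidirectional(layers.GRU(64))    # bidirectional phase reads the sequence both ways
    return tf.keras.Sequential([
        layers.Input((n_timesteps, n_leads)),
        rnn,
        layers.Dense(n_classes, activation="softmax"),
    ])

uni = build_classifier(bidirectional=False)
bi = build_classifier(bidirectional=True)
```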
6

Prakash, Vinod, and Dharmender Kumar. "A Modified Gated Recurrent Unit Approach for Epileptic Electroencephalography Classification." Journal of Information and Communication Technology 22, no. 4 (2023): 587–617. http://dx.doi.org/10.32890/jict2023.22.4.3.

Full text
Abstract:
Epilepsy is one of the most severe non-communicable brain disorders, associated with sudden attacks. Electroencephalography (EEG), a non-invasive technique, records brain activities, and these recordings are routinely used for the clinical evaluation of epilepsy. EEG signal analysis for seizure identification relies on expert manual examination, which is labour-intensive, time-consuming, and prone to human error. To overcome these limitations, researchers have proposed machine learning and deep learning approaches. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models have shown significant results in automating seizure prediction, but due to complex gated mechanisms and the storage of excessive redundant information, these approaches suffer from slow convergence and a low learning rate. The proposed modified GRU approach includes an improved update gate unit that adjusts the update gate based on the output of the reset gate. By decreasing the amount of superfluous data in the reset gate, convergence is accelerated, which improves both learning efficiency and the accuracy of epileptic seizure prediction. The performance of the proposed approach is verified on a publicly available epileptic EEG dataset from the University of California, Irvine (UCI) machine learning repository in terms of accuracy, precision, recall, and F1 score for diagnosing epileptic seizures. The proposed modified GRU obtained 98.84% accuracy, 96.9% precision, 97.1% recall, and a 97% F1 score. These results are significant because they could enhance the diagnosis and treatment of neurological disorders, leading to better patient outcomes.
APA, Harvard, Vancouver, ISO, and other styles
7

Jeong, Myeong-Hun, Tae-Young Lee, Seung-Bae Jeon, and Minkyo Youm. "Highway Speed Prediction Using Gated Recurrent Unit Neural Networks." Applied Sciences 11, no. 7 (2021): 3059. http://dx.doi.org/10.3390/app11073059.

Full text
Abstract:
Movement analytics and mobility insights play a crucial role in urban planning and transportation management. The plethora of mobility data sources, such as GPS trajectories, poses new challenges and opportunities for understanding and predicting movement patterns. In this study, we predict highway speed using a gated recurrent unit (GRU) neural network. Previous approaches based on statistical models suffer from the inherent features of traffic data, such as nonlinearity. The proposed method predicts highway speed with the GRU after training on digital tachograph (DTG) data. The DTG data were recorded over one month, giving approximately 300 million records. These data included the velocity and locations of vehicles on the highway. Experimental results demonstrate that the GRU-based deep learning approach outperformed the state-of-the-art alternatives, the autoregressive integrated moving average model and the long short-term memory (LSTM) model, in terms of prediction accuracy. Further, the computational cost of the GRU model was lower than that of the LSTM. The proposed method can be applied to traffic prediction and intelligent transportation systems.
APA, Harvard, Vancouver, ISO, and other styles
8

Dutta, Aniruddha, Saket Kumar, and Meheli Basu. "A Gated Recurrent Unit Approach to Bitcoin Price Prediction." Journal of Risk and Financial Management 13, no. 2 (2020): 23. http://dx.doi.org/10.3390/jrfm13020023.

Full text
Abstract:
In today’s era of big data, deep learning and artificial intelligence have formed the backbone for cryptocurrency portfolio optimization. Researchers have investigated various state-of-the-art machine learning models to predict Bitcoin price and volatility. Machine learning models like the recurrent neural network (RNN) and long short-term memory (LSTM) have been shown to perform better than traditional time series models in cryptocurrency price prediction. However, very few studies have applied sequence models with robust feature engineering to predict future pricing. In this study, we investigate a framework with a set of advanced machine learning forecasting methods and a fixed set of exogenous and endogenous factors to predict daily Bitcoin prices. We study and compare different approaches using the root mean squared error (RMSE). Experimental results show that the gated recurrent unit (GRU) model with recurrent dropout performs better than popular existing models. We also show that simple trading strategies, when implemented with our proposed GRU model and with proper learning, can lead to financial gain.
APA, Harvard, Vancouver, ISO, and other styles
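The "GRU with recurrent dropout" mentioned in this abstract maps onto a single argument in tf.keras; the sketch below is a hedged illustration with placeholder shapes and a placeholder dropout rate, not the authors' tuned model.

```python
import tensorflow as tf
from tensorflow.keras import layers

window, n_features = 30, 8      # e.g., 30 past days of endogenous/exogenous factors (illustrative)

model = tf.keras.Sequential([
    layers.Input((window, n_features)),
    layers.GRU(64, recurrent_dropout=0.2),   # dropout applied to the recurrent state, not just the inputs
    layers.Dense(1),                         # next-day price
])
model.compile(optimizer="adam", loss="mse")
```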
9

Zainuddin, Z., E. A. P. Akhir, and M. H. Hasan. "Predicting machine failure using recurrent neural network-gated recurrent unit (RNN-GRU) through time series data." Bulletin of Electrical Engineering and Informatics 10, no. 2 (2021): 870–78. https://doi.org/10.11591/eei.v10i2.2036.

Full text
Abstract:
Time series data often involve large environments that lead to high-dimensionality problems. Many industries generate time series data that are continuously updated every second. The rise of machine learning may help in managing these data: it can forecast future instances while handling large-data issues. Forecasting means predicting an upcoming event so that adverse circumstances in the current environment can be avoided. It helps sectors such as production foresee the state of their machines and save the cost of sudden breakdowns, as unplanned machine failure can disrupt operations and cause losses of up to millions. Thus, this paper offers a deep learning algorithm named recurrent neural network-gated recurrent unit (RNN-GRU) to forecast the state of machines producing time series data in the oil and gas sector. RNN-GRU is a variant of the recurrent neural network (RNN) that can handle sequential data thanks to its update and reset gates. The gates decide which information should be kept in memory. RNN-GRU has a simpler structure than long short-term memory (RNN-LSTM) and achieved 87% prediction accuracy.
APA, Harvard, Vancouver, ISO, and other styles
10

Lu, Yi-Wei, Chia-Yu Hsu, and Kuang-Chieh Huang. "An Autoencoder Gated Recurrent Unit for Remaining Useful Life Prediction." Processes 8, no. 9 (2020): 1155. http://dx.doi.org/10.3390/pr8091155.

Full text
Abstract:
With the development of smart manufacturing, in order to detect abnormal conditions of the equipment, a large number of sensors have been used to record the variables associated with production equipment. This study focuses on the prediction of Remaining Useful Life (RUL). RUL prediction is part of predictive maintenance, which uses the development trend of the machine to predict when the machine will malfunction. High accuracy of RUL prediction not only reduces the consumption of manpower and materials, but also reduces the need for future maintenance. This study focuses on detecting faults as early as possible, before the machine needs to be replaced or repaired, to ensure the reliability of the system. It is difficult to extract meaningful features from sensor data directly. This study proposes a model based on an Autoencoder Gated Recurrent Unit (AE-GRU), in which the Autoencoder (AE) extracts the important features from the raw data and the Gated Recurrent Unit (GRU) selects the information from the sequences to forecast RUL. To evaluate the performance of the proposed AE-GRU model, an aircraft turbofan engine degradation simulation dataset provided by NASA was used and a comparison made of different recurrent neural networks. The results demonstrate that the AE-GRU is better than other recurrent neural networks, such as Long Short-Term Memory (LSTM) and GRU.
APA, Harvard, Vancouver, ISO, and other styles
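For readers wondering how an autoencoder and a GRU can be chained as this abstract describes, here is a hedged PyTorch sketch of one plausible AE-GRU layout for RUL regression; the layer sizes, class name, and joint use of a reconstruction head are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AEGRU(nn.Module):
    """Illustrative autoencoder + GRU pipeline for remaining-useful-life regression."""
    def __init__(self, n_sensors: int, latent_dim: int = 16, hidden_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_sensors, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_sensors))
        self.gru = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):                 # x: (batch, time, n_sensors)
        z = self.encoder(x)               # AE compresses each time step into latent features
        recon = self.decoder(z)           # reconstruction used for the autoencoder loss
        _, h_n = self.gru(z)              # GRU summarises the latent sequence
        rul = self.head(h_n[-1])          # predicted remaining useful life
        return rul, recon

# Training would typically combine a RUL regression loss with a reconstruction loss, e.g.
# loss = mse(rul, target_rul) + mse(recon, x)
```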

Dissertations / Theses on the topic "GRU – Gated Recurrent Unit"

1

Sarika, Pawan Kumar. "Comparing LSTM and GRU for Multiclass Sentiment Analysis of Movie Reviews." Thesis, Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20213.

Full text
Abstract:
Today, we are living in a data-driven world. Due to a surge in data generation, there is a need for efficient and accurate techniques to analyze data. One such kind of data that needs to be analyzed is text reviews given for movies. Rather than classifying the reviews as positive or negative, we classify the sentiment of the reviews on a scale of one to ten. In doing so, we compare two recurrent neural network algorithms, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). The main objective of this study is to compare the accuracies of LSTM and GRU models. For training the models, we collected data from two different sources. For filtering the data, we used Porter stemming and stop-word removal. We coupled LSTM and GRU with convolutional neural networks to increase performance. After conducting the experiments, we observed that LSTM performed better at predicting boundary values, whereas GRU predicted every class equally well. Overall, GRU was able to predict the multiclass text data of movie reviews slightly better than LSTM, but it was computationally more expensive than LSTM.
APA, Harvard, Vancouver, ISO, and other styles
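As a hedged illustration of coupling a convolutional front end with a GRU for ten-class review sentiment, as the thesis describes, here is a tf.keras sketch; the vocabulary size, sequence length, layer sizes, and pooling choice are assumptions rather than the thesis's tuned settings.

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len, n_classes = 20000, 200, 10   # illustrative: ratings on a scale of one to ten

model = tf.keras.Sequential([
    layers.Input((seq_len,)),
    layers.Embedding(vocab_size, 128),
    layers.Conv1D(64, 5, activation="relu"),      # convolutional layer coupled with the recurrent layer
    layers.MaxPooling1D(2),
    layers.GRU(64),                               # swap in layers.LSTM(64) for the LSTM variant
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```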
2

Putchala, Manoj Kumar. "Deep Learning Approach for Intrusion Detection System (IDS) in the Internet of Things (IoT) Network using Gated Recurrent Neural Networks (GRU)." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1503680452498351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gattoni, Giacomo. "Improving the reliability of recurrent neural networks while dealing with bad data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
In practical applications, machine learning and deep learning models can have difficulty achieving generalization, especially when dealing with training samples that are either noisy or limited in quantity. Standard neural networks do not guarantee the monotonicity of the input features with respect to the output; therefore, they lack interpretability and predictability when it is known a priori that the input–output relationship should be monotonic. This problem can be encountered in the CPG industry, where it is not possible to ensure that a deep learning model will learn the increasing monotonic relationship between promotional mechanics and sales. To overcome this issue, the combined usage of recurrent neural networks, a type of artificial neural network specifically designed to deal with data structured as sequences, with lattice networks, conceived to guarantee monotonicity of the desired input features with respect to the output, is proposed. The proposed architecture has proven to be more reliable when new samples are fed to the neural network, demonstrating its ability to infer the evolution of sales depending on promotions, even when it is trained on bad data.
APA, Harvard, Vancouver, ISO, and other styles
4

Talevi, Luca. "Decodifica di intenzioni di movimento dalla corteccia parietale posteriore di macaco attraverso il paradigma Deep Learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17846/.

Full text
Abstract:
Invasive Brain Computer Interfaces (BCIs) make it possible to restore mobility to patients who have lost control of their limbs: this is achieved by decoding bioelectrical signals recorded from cortical areas of interest in order to drive a prosthetic limb. Neural signal decoding is therefore a critical point in BCIs, requiring the development of high-performing, reliable, and robust algorithms. These requirements are met in numerous fields by Deep Neural Networks, adaptive algorithms whose performance scales with the amount of data provided, in line with the growing number of electrodes in implants. Using signals pre-recorded from the cortex of two macaques during reach-to-grasp movements towards 5 different objects, I tested three basic, notable examples of DNNs – a dense multilayer network, a Convolutional Neural Network (CNN), and a Recurrent NN (RNN) – on the task of discriminating, continuously and in real time, the movement intention towards each object. In particular, I tested each model's ability to decode a generic intention (single-class), the performance of the best resulting network in discriminating among them (multi-class) with or without ensemble learning methods, and its response to degradation of the input signal. To facilitate comparison, each network was built and subjected to hyperparameter search following common criteria. The CNN architecture obtained particularly interesting results, achieving F-scores above 0.6 and AUC above 0.9 in the single-class case with half the parameters of the other networks, yet with greater robustness. It also showed a quasi-linear relationship with signal degradation, free of unpredictable performance collapses. The DNNs employed proved to be high-performing and robust despite their simplicity, making ad-hoc designed architectures promising candidates for establishing a new state of the art in neuroprosthetic control.
APA, Harvard, Vancouver, ISO, and other styles
5

Le, Ngan Thi Hoang. "Contextual Recurrent Level Set Networks and Recurrent Residual Networks for Semantic Labeling." Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1166.

Full text
Abstract:
Semantic labeling is becoming more and more popular among researchers in computer vision and machine learning. Many applications, such as autonomous driving, tracking, indoor navigation, augmented reality systems, semantic searching, and medical imaging, are on the rise, requiring more accurate and efficient segmentation mechanisms. In recent years, deep learning approaches based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have dramatically emerged as the dominant paradigm for solving many problems in computer vision and machine learning. The main focus of this thesis is to investigate robust approaches that can tackle challenging semantic labeling tasks, including semantic instance segmentation and scene understanding. In the first approach, we convert the classic variational Level Set method to a learnable deep framework by proposing a novel definition of contour evolution named Recurrent Level Set (RLS). The proposed RLS employs Gated Recurrent Units to solve the energy minimization of a variational Level Set functional. The curve deformation process in RLS is formulated as a hidden state evolution procedure and is updated by minimizing an energy functional composed of fitting forces and contour length. We show that, by sharing the convolutional features in a fully end-to-end trainable framework, RLS can be extended to Contextual Recurrent Level Set (CRLS) Networks to address the semantic segmentation in-the-wild problem. The experimental results show that our proposed RLS improves both computational time and segmentation accuracy compared with classic variational Level Set-based methods, whereas the fully end-to-end system CRLS achieves competitive performance compared to state-of-the-art semantic segmentation approaches on the PASCAL VOC 2012 and MS COCO 2014 databases. The second proposed approach, Contextual Recurrent Residual Networks (CRRN), inherits all the merits of sequence learning and residual learning in order to simultaneously model long-range contextual information and learn powerful visual representations within a single deep network. Our proposed CRRN deep network consists of three parts corresponding to sequential input data, sequential output data, and hidden state, as in a recurrent network. Each unit in the hidden state is designed as a combination of two components: a context-based component via sequence learning and a visual-based component via residual learning. That means each hidden unit in our proposed CRRN simultaneously (1) learns long-range contextual dependencies via a context-based component, where the relationship between the current unit and the previous units is modeled as sequential information under an undirected cyclic graph (UCG), and (2) provides a powerful encoded visual representation via a residual component which contains blocks of convolution and/or batch normalization layers equipped with an identity skip connection. Furthermore, unlike previous scene labeling approaches [1, 2, 3], our method is not only able to exploit long-range context and visual representation but is also formed as a fully end-to-end trainable system that effectively leads to the optimal model. In contrast to other existing deep learning networks that are based on pretrained models, our fully end-to-end CRRN is completely trained from scratch. The experiments are conducted on four challenging scene labeling datasets, i.e., SiftFlow, CamVid, Stanford Background, and SUN, and compared against various state-of-the-art scene labeling methods.
APA, Harvard, Vancouver, ISO, and other styles
6

Komol, Md Mostafizur Rahman. "C-ITS based prediction of driver red light running and turning behaviours." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227694/1/Md%20Mostafizur%20Rahman_Komol_Thesis.pdf.

Full text
Abstract:
Red light running is a major traffic violation. Drivers often aggressively or unintentionally violate the red signal and cause traffic collisions. Moreover, vision impairment of turning vehicles caused by large vehicles and roadside static structures near intersections often leads to vulnerable road user (VRU) crashes while they cross the intersection. In this research, we developed models to predict drivers' red light running and turning behaviour at intersections using Long Short-Term Memory and Gated Recurrent Unit algorithms. We used the vehicle kinematic dataset of the C-ITS project Ipswich Connected Vehicle Pilot, Queensland, obtained from the Department of Transport and Main Roads, Queensland.
APA, Harvard, Vancouver, ISO, and other styles
7

Côté, Marc-Alexandre. "Réseaux de neurones génératifs avec structure." Thèse, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/10489.

Full text
Abstract:
This thesis deals with generative models in machine learning. Two new models based on neural networks are proposed. The first model has an internal representation on which a certain structure has been imposed in order to order the learned features. The second model manages to exploit the topological structure of the observed data and to take it into account during the generative phase. This thesis also presents one of the first applications of machine learning to the problem of brain tractography. To do so, a recurrent neural network is applied to diffusion data in order to obtain a representation of white-matter fibres as sequences of points in three dimensions.
APA, Harvard, Vancouver, ISO, and other styles
8

Tovedal, Sofiea. "On The Effectiveness of Multi-Task Learning: An evaluation of Multi-Task Learning techniques in deep learning models." Thesis, Umeå universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-172257.

Full text
Abstract:
Multi-Task Learning is today an interesting and promising field which many mention as a must for achieving the next level of advancement within machine learning. However, in reality, Multi-Task Learning is used much more rarely in real-world implementations than its more popular cousin, Transfer Learning. The question is why that is, and whether Multi-Task Learning outperforms its Single-Task counterparts. In this thesis, different Multi-Task Learning architectures were utilized in order to build a model that can label real technical issues within two categories. The model faces a challenging imbalanced data set with many labels to choose from and short texts to base its predictions on. Can task-sharing be the answer to these problems? This thesis investigated three Multi-Task Learning architectures and compared their performance to a Single-Task model. An authentic data set and two labeling tasks were used in training the models with supervised learning. The four model architectures – Single-Task, Multi-Task, Cross-Stitched, and Shared-Private – first went through a hyperparameter tuning process using one of the two layer options, LSTM and GRU. They were then boosted by auxiliary tasks and finally evaluated against each other.
APA, Harvard, Vancouver, ISO, and other styles
9

Howard, Shaun Michael. "Deep Learning for Sensor Fusion." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1495751146601099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Javid, Gelareh. "Contribution à l’estimation de charge et à la gestion optimisée d’une batterie Lithium-ion : application au véhicule électrique." Thesis, Mulhouse, 2021. https://www.learning-center.uha.fr/.

Full text
Abstract:
The State Of Charge (SOC) estimation is a significant issue for the safe performance and lifespan of Lithium-ion (Li-ion) batteries, which are used to power Electric Vehicles (EVs). In this thesis, the accuracy of SOC estimation is investigated using Deep Recurrent Neural Network (DRNN) algorithms. To do this, for a one-cell Li-ion battery, three new SOC estimators based on different DRNN algorithms are proposed: a Bidirectional LSTM (BiLSTM) method, a Robust Long-Short Term Memory (RoLSTM) algorithm, and a Gated Recurrent Units (GRUs) technique. Using these, one is not dependent on precise battery models and can avoid complicated mathematical methods, especially in a battery pack. In addition, these models are able to precisely estimate the SOC at varying temperatures. Also, unlike the traditional recursive neural network, where content is rewritten at each step, these networks can decide on preserving the current memory through the proposed gateways. In such a case, they can easily transfer information over long paths to receive and maintain long-term dependencies. Comparing the results indicates that the BiLSTM network has a better performance than the other two. Moreover, the BiLSTM model can work with longer sequences from two directions, the past and the future, without the gradient vanishing problem. This feature helps to select a sequence length as long as a discharge period in one drive cycle, and to obtain more accuracy in the estimation. This model also behaved well against an incorrect initial value of SOC. Finally, a new BiLSTM method was introduced to estimate the SOC of a pack of batteries in an EV. IPG Carmaker software was used to collect data and test the model in simulation. The results showed that the suggested algorithm can provide a good SOC estimation without using any filter in the Battery Management System (BMS).
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "GRU – Gated Recurrent Unit"

1

Salem, Fathi M. "Gated RNN: The Gated Recurrent Unit (GRU) RNN." In Recurrent Neural Networks. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89929-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wüthrich, Mario V., and Michael Merz. "Recurrent Neural Networks." In Springer Actuarial. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_8.

Full text
Abstract:
This chapter considers recurrent neural (RN) networks. These are special network architectures that are useful for time-series modeling, e.g., applied to time-series forecasting. We study the most popular RN networks, which are the long short-term memory (LSTM) networks and the gated recurrent unit (GRU) networks. We apply these networks to mortality forecasting.
APA, Harvard, Vancouver, ISO, and other styles
3

Ajayi, O. O., A. O. Olorunda, O. G. Aju, and A. A. Adegbite. "A Gated Recurrent Unit (GRU) Model for Predicting the Popularity of Local Musicians." In Sustainable Education and Development – Sustainable Industrialization and Innovation. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-25998-2_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Flores, Anibal, Hugo Tito-Chura, and Victor Yana-Mamani. "Wind Speed Time Series Imputation with a Bidirectional Gated Recurrent Unit (GRU) Model." In Lecture Notes in Networks and Systems. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89880-9_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Manavi, Mahdi, and Yunpeng Zhang. "A New Intrusion Detection System Based on Gated Recurrent Unit (GRU) and Genetic Algorithm." In Security, Privacy, and Anonymity in Computation, Communication, and Storage. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-24907-6_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhu, Dongge, Jiahui Zhang, and Xinxin Tan. "Considering Power Factors in Macroeconomic Forecasting Based on the Gated Recurrent Unit (GRU) Model." In Lecture Notes on Data Engineering and Communications Technologies. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-0208-7_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Le, Xuan-Hien, Hung Viet Ho, and Giha Lee. "Application of Gated Recurrent Unit (GRU) Network for Forecasting River Water Levels Affected by Tides." In APAC 2019. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0291-0_92.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jing, Weiting, Song Xue, Songjie Yao, et al. "Fast Prediction Method of SMT Solder Joint Shape and Reliability Based on Gated Recurrent Unit (GRU)." In Proceedings of the Eighth Asia International Symposium on Mechatronics. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1309-9_129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shoukat, Duaa, Adnan Akhunzada, Muhammad Taimoor Khan, Ahmad Sami Al-Shamayleh, Mueen Uddin, and Hashem Alaidaros. "Bridging Innovation and Security: Advancing Cyber-Threat Detection in Sustainable Smart Infrastructure." In Proceedings in Technology Transfer. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-8588-9_11.

Full text
Abstract:
The rapid evolution of Smart Infrastructure (SI) on a global scale has revolutionized our daily lives, empowering us with unprecedented connectivity and convenience. However, this evolution has also exposed smart devices to increasingly sophisticated cyber-threats, endangering the integrity of entire smart networks. In response to these challenges, this paper proposes a novel approach utilizing Deep Learning (DL) models for multi-class threat detection in SI environments. Specifically, we introduce the Cu-GRULSTM model, which leverages a CUDA-enabled Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) architecture. Additionally, we employ the Cu-GRUDNN model for comparative analysis. Both models are trained and evaluated using the efficient and publicly available CICIDS2018 dataset. Our evaluation results demonstrate the superior performance of the proposed Cu-GRULSTM model, achieving an exceptional accuracy rate of 99.62% with a minimal False Alarm Rate (FAR) of 0.0003. This significant improvement over existing models underscores the efficacy of our approach in mitigating cyber-threats in smart infrastructure environments.
APA, Harvard, Vancouver, ISO, and other styles
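One plausible way to stack GRU and LSTM layers for multi-class flow classification, in the spirit of the Cu-GRULSTM described above, is sketched below in tf.keras (on a GPU these layers run on CUDA-accelerated cuDNN kernels); the input shape, layer sizes, and class count are placeholders, not the chapter's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

n_timesteps, n_features, n_classes = 100, 78, 7   # illustrative shapes for CICIDS2018-style flow features

model = tf.keras.Sequential([
    layers.Input((n_timesteps, n_features)),
    layers.GRU(64, return_sequences=True),        # GRU layer passes its full sequence to the LSTM layer
    layers.LSTM(64),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```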
10

Salem, Fathi M. "Gated RNN: The Minimal Gated Unit (MGU) RNN." In Recurrent Neural Networks. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89929-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "GRU – Gated Recurrent Unit"

1

Dey, Maloy Kumar, Shouvik Dey, and Dushmanta Kumar Das. "Indian Stock Price Prediction Using Optimal Gated Recurrent Unit (GRU) Network." In 2024 International Conference on Advancement in Renewable Energy and Intelligent Systems (AREIS). IEEE, 2024. https://doi.org/10.1109/areis62559.2024.10893641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hussein, Layth, Prashant Johri, A. Anusha Priya, R. Ramya, J. Karpagam, and A. Devendran. "Crop Yield Forecasting Using Bidirectional Gated Recurrent Unit (Bi-GRU) Networks." In 2025 International Conference on Automation and Computation (AUTOCOM). IEEE, 2025. https://doi.org/10.1109/autocom64127.2025.10957465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nayyar, Poorva, Kunal Bhardwaj, Saiyam Gupta, Ravi Prakash Chaturvedi, Annu Mishra, and Hirdesh Sharma. "Cryptocurrency Price Prediction using Optimised LSTM with GRU (Gated Recurrent Unit)." In 2024 International Conference on Control, Computing, Communication and Materials (ICCCCM). IEEE, 2024. https://doi.org/10.1109/iccccm61016.2024.11039812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Suganthi, V., and J. Jebathangam. "A Novel Approach for Credit Card Fraud Detection using Gated Recurrent Unit (GRU) Networks." In 2024 8th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). IEEE, 2024. http://dx.doi.org/10.1109/i-smac61858.2024.10714795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fitri Amalia, Nur Wakhidah, and Erwin Budi Setiawan. "Cyberbullying Detection on Twitter using Convolutional Neural Network (CNN) and Gated Recurrent Unit (GRU)." In 2023 International Conference on Artificial Intelligence Robotics, Signal and Image Processing (AIRoSIP). IEEE, 2023. https://doi.org/10.1109/airosip58759.2023.10873879.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Emu, Injamamul Hoque, Md Sahidur Rahman, Faria Farzana, Mohammad Rashedul Islam, Istiaq Firoz Shiam, and Arifa Sultana. "Wind Power Forecasting Using Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU) and Hybrid Model." In 2024 IEEE International Conference on Computing, Applications and Systems (COMPAS). IEEE, 2024. https://doi.org/10.1109/compas60761.2024.10797166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Saputra, Eggifian, Hilal Hudan Nuha, Sutiyo, Endro Ariyanto, and Gatot Santoso. "Analysis of Biogas Processing Control in Composting Tools Using IoT with the Gated Recurrent Unit (GRU) Method." In 2024 International Conference on Decision Aid Sciences and Applications (DASA). IEEE, 2024. https://doi.org/10.1109/dasa63652.2024.10836536.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Otman, Maarouf, El Ayachi Rachid, and Biniz Mohamed. "Expression of Concern for: Amazigh Part Of Speech Tagging using Gated recurrent units (GRU)." In 2021 7th International Conference on Optimization and Applications (ICOA). IEEE, 2021. http://dx.doi.org/10.1109/icoa51614.2021.10702965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Al-Jasoor, Hanan G., and Samaher Al-Janabi. "Expression of Concern for: Oil Price Prediction Using Deep Neural Network Technique Gated Recurrent Unit (GRU) and Multivariate Analysis." In 2022 22nd International Conference on Computational Science and Its Applications (ICCSA). IEEE, 2022. http://dx.doi.org/10.1109/iccsa57511.2022.10703608.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Umezuruike, Chinecherem, Deborah Olaniyan, Julius Olaniyan, Abidemi Emmanuel Adeniyi, Adedoyin Oyebade, and David Abaneme. "Comparative Analysis of Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU) and Transformer Models in Predicting Stock Prices." In 2024 IEEE 5th International Conference on Electro-Computing Technologies for Humanity (NIGERCON). IEEE, 2024. https://doi.org/10.1109/nigercon62786.2024.10927198.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "GRU – Gated Recurrent Unit"

1

Kumar, Kaushal, and Yupeng Wei. Attention-Based Data Analytic Models for Traffic Flow Predictions. Mineta Transportation Institute, 2023. http://dx.doi.org/10.31979/mti.2023.2211.

Full text
Abstract:
Traffic congestion causes Americans to lose millions of hours and dollars each year. In fact, 1.9 billion gallons of fuel are wasted each year due to traffic congestion, and each hour stuck in traffic costs about $21 in wasted time and fuel. The traffic congestion can be caused by various factors, such as bottlenecks, traffic incidents, bad weather, work zones, poor traffic signal timing, and special events. One key step to addressing traffic congestion and identifying its root cause is an accurate prediction of traffic flow. Accurate traffic flow prediction is also important for the successful deployment of smart transportation systems. It can help road users make better travel decisions to avoid traffic congestion areas so that passenger and freight movements can be optimized to improve the mobility of people and goods. Moreover, it can also help reduce carbon emissions and the risks of traffic incidents. Although numerous methods have been developed for traffic flow predictions, current methods have limitations in utilizing the most relevant part of traffic flow data and considering the correlation among the collected high-dimensional features. To address this issue, this project developed attention-based methodologies for traffic flow predictions. We propose the use of an attention-based deep learning model that incorporates the attention mechanism with Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. This attention mechanism can calculate the importance level of traffic flow data and enable the model to consider the most relevant part of the data while making predictions, thus improving accuracy and reducing prediction duration.
APA, Harvard, Vancouver, ISO, and other styles
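To show how an attention mechanism can weight the most relevant time steps of GRU outputs, as the report's abstract describes, here is a hedged PyTorch sketch of a generic additive-style attention head over a GRU encoder; the scoring function, layer sizes, and regression head are assumptions, not the report's exact model.

```python
import torch
import torch.nn as nn

class AttnGRU(nn.Module):
    """Illustrative attention-over-GRU-outputs model for traffic flow regression."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)       # learns an importance score per time step
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, time, n_features)
        out, _ = self.gru(x)                    # per-step hidden states
        weights = torch.softmax(self.score(out), dim=1)   # attention weights over the time axis
        context = (weights * out).sum(dim=1)    # weighted summary of the sequence
        return self.head(context)               # predicted flow/speed for the next interval
```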