Journal articles on the topic "Long Short-Term Memory Neural Network"

To view other types of publications on this topic, follow the link: Long Short-Term Memory Neural Network.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Long Short-Term Memory Neural Network."

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, provided that the relevant details are included in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chang, Ching-Chun. "Neural Reversible Steganography with Long Short-Term Memory." Security and Communication Networks 2021 (April 4, 2021): 1–14. http://dx.doi.org/10.1155/2021/5580272.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Deep learning has brought about a phenomenal paradigm shift in digital steganography. However, there is as yet no consensus on the use of deep neural networks in reversible steganography, a class of steganographic methods that permits the distortion caused by message embedding to be removed. The underdevelopment of the field of reversible steganography with deep learning can be attributed to the perception that perfect reversal of steganographic distortion seems scarcely achievable, due to the lack of transparency and interpretability of neural networks. Rather than employing neural networks in the coding module of a reversible steganographic scheme, we instead apply them to an analytics module that exploits data redundancy to maximise steganographic capacity. State-of-the-art reversible steganographic schemes for digital images are based primarily on a histogram-shifting method in which the analytics module is often modelled as a pixel intensity predictor. In this paper, we propose to refine the prior estimation from a conventional linear predictor through a neural network model. The refinement can be to some extent viewed as a low-level vision task (e.g., noise reduction and super-resolution imaging). In this way, we explore a leading-edge neuroscience-inspired low-level vision model based on long short-term memory with a brief discussion of its biological plausibility. Experimental results demonstrated a significant boost contributed by the neural network model in terms of prediction accuracy and steganographic rate-distortion performance.
2

Labusov, M. V. "SHORT-TERM FINANCIAL TIME SERIES ANALYSIS WITH LONG SHORT-TERM MEMORY NEURAL NETWORKS." EKONOMIKA I UPRAVLENIE: PROBLEMY, RESHENIYA 3, no. 4 (2021): 165–77. http://dx.doi.org/10.36871/ek.up.p.r.2021.04.03.023.

Abstract:
The article considers the process of creating a long short-term memory neural network for analyzing and forecasting high-frequency financial time series. First, the research dataset is compiled. The parameters of the long short-term memory neural network are then estimated on the learning subsamples. Using the estimated network, the signs of future returns are forecast over a horizon of 90 minutes. In conclusion, a trading strategy is formulated.
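As a brief aside on the target variable in the abstract above: converting a price series into the signs of successive returns is a standard preprocessing step for this kind of classification. A minimal sketch with toy prices (illustrative only, not the article's data or code):

```python
def return_signs(prices):
    """Convert a price series into the signs of successive simple returns,
    the kind of up/down target the abstract forecasts over a 90-minute horizon."""
    signs = []
    for prev, cur in zip(prices, prices[1:]):
        r = (cur - prev) / prev          # simple return between consecutive ticks
        signs.append(1 if r > 0 else (-1 if r < 0 else 0))
    return signs

# Toy example: up, flat, down.
signs = return_signs([100.0, 101.0, 101.0, 99.5])
```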
3

Hochreiter, Sepp, and Jürgen Schmidhuber. "Long Short-Term Memory." Neural Computation 9, no. 8 (November 1, 1997): 1735–80. http://dx.doi.org/10.1162/neco.1997.9.8.1735.

Abstract:
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
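The gated cell update described in this abstract (multiplicative gates guarding an additively updated cell state) can be sketched in a few lines. This is an illustrative scalar LSTM cell with arbitrary fixed weights, not the authors' implementation:

```python
import math

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step for scalar input and state.

    i, f, o are the multiplicative input/forget/output gates; the cell
    state c is the 'constant error carousel' updated additively, which is
    what lets gradients flow across long time lags.
    """
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    i = sig(W["wi"] * x + W["ui"] * h_prev + W["bi"])        # input gate
    f = sig(W["wf"] * x + W["uf"] * h_prev + W["bf"])        # forget gate
    o = sig(W["wo"] * x + W["uo"] * h_prev + W["bo"])        # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"])  # candidate value
    c = f * c_prev + i * g    # additive update: the constant error carousel
    h = o * math.tanh(c)      # hidden state exposed to the rest of the network
    return h, c

# Run a toy sequence through the cell with arbitrary weights (all 0.5).
W = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:
    h, c = lstm_step(x, h, c, W)
```

Because h is an output gate in (0, 1) times tanh of the cell state, it always stays inside (-1, 1), while the cell state c itself is unbounded.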
4

Kadari, Rekia, Yu Zhang, Weinan Zhang, and Ting Liu. "CCG supertagging with bidirectional long short-term memory networks." Natural Language Engineering 24, no. 1 (September 4, 2017): 77–90. http://dx.doi.org/10.1017/s1351324917000250.

Abstract:
Neural network-based approaches have recently produced good performances on natural language tasks such as supertagging. In the supertagging task, a supertag (lexical category) is assigned to each word in an input sequence. Combinatory Categorial Grammar supertagging is a more challenging problem than various sequence-tagging problems, such as part-of-speech (POS) tagging and named entity recognition, due to the large number of lexical categories. Specifically, the simple Recurrent Neural Network (RNN) has been shown to significantly outperform the previous state-of-the-art feed-forward neural networks. On the other hand, it is well known that recurrent networks fail to learn long dependencies. In this paper, we introduce a new neural network architecture based on backward and Bidirectional Long Short-Term Memory (BLSTM) networks that has the ability to memorize information over long dependencies and benefit from both past and future information. State-of-the-art methods focus on previous information, whereas BLSTM has access to information in both the previous and future directions. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short-Term Memory (LSTM) networks are more precise and successful than both unidirectional and bidirectional standard RNNs. Experimental results reveal the effectiveness of our proposed method on both in-domain and out-of-domain datasets. Experiments show improvements of about 1.2 per cent over the standard RNN.
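The bidirectional idea in this abstract (each position sees both past and future context) amounts to running the sequence through a recurrent cell in both directions and pairing the results. A minimal sketch with a toy tanh cell standing in for the LSTM (illustrative only, not the paper's model):

```python
import math

def rnn_pass(seq, step, h0=0.0):
    """Collect the hidden state after each position of a sequence."""
    h, states = h0, []
    for x in seq:
        h = step(x, h)
        states.append(h)
    return states

def bidirectional(seq, step):
    """Pair forward and backward hidden states so every position has
    access to both previous and future context, as in BLSTM tagging."""
    fwd = rnn_pass(seq, step)
    bwd = rnn_pass(list(reversed(seq)), step)[::-1]  # realign to original order
    return list(zip(fwd, bwd))

# Toy recurrent cell standing in for a trained LSTM.
step = lambda x, h: math.tanh(0.5 * x + 0.5 * h)
features = bidirectional([1.0, 2.0, 3.0], step)
```

A tagger would feed each (forward, backward) pair to a classifier over the supertag inventory; here the pairs are just numbers from the toy cell.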
5

Hoque, Mohammad Shamsul, Norziana Jamil, Nowshad Amin, Azril Azam Abdul Rahim, and Razali B. Jidin. "Forecasting number of vulnerabilities using long short-term neural memory network." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 5 (October 1, 2021): 4381. http://dx.doi.org/10.11591/ijece.v11i5.pp4381-4391.

Abstract:
Cyber-attacks are launched through the exploitation of existing vulnerabilities in software, hardware, systems, and networks. Machine learning algorithms can be used to forecast the number of post-release vulnerabilities. Traditional neural networks work like a black box; hence, it is unclear how past data points are used in inferring subsequent data points. However, the long short-term memory network (LSTM), a variant of the recurrent neural network, is able to address this limitation by introducing loops in its network to retain and utilize past data points for future calculations. Moving on from previous findings, we further enhance the results to predict the number of vulnerabilities by developing a time series-based sequential model using a long short-term memory neural network. Specifically, this study developed a supervised machine learning model based on non-linear sequential time series forecasting with a long short-term memory neural network to predict the number of vulnerabilities for the three vendors having the highest number of vulnerabilities published in the national vulnerability database (NVD), namely Microsoft, IBM, and Oracle. Our proposed model outperforms the existing models with a root mean squared error (RMSE) as low as 0.072.
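The supervised time-series framing that such LSTM forecasters rely on (lag windows as inputs, the next value as the target), together with the RMSE metric the abstract reports, can be sketched as follows. The vulnerability counts are hypothetical, and the persistence baseline is ours, not the paper's:

```python
import math

def make_supervised(series, window):
    """Frame a univariate series as (lag-window -> next value) pairs,
    the standard way to feed a time series to an LSTM."""
    X, y = [], []
    for t in range(len(series) - window):
        X.append(series[t:t + window])
        y.append(series[t + window])
    return X, y

def rmse(actual, predicted):
    """Root mean squared error, the metric quoted in the abstract."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

counts = [10, 12, 11, 14, 13, 15, 16]   # hypothetical monthly vulnerability counts
X, y = make_supervised(counts, window=3)
# Naive persistence baseline: predict the last value of each window.
baseline = [w[-1] for w in X]
err = rmse(y, baseline)
```

An LSTM would replace the persistence baseline, consuming each window and emitting a prediction for the next count.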
6

Xie, Qi, Gengguo Cheng, Xu Xu, and Zixuan Zhao. "Research Based on Stock Predicting Model of Neural Networks Ensemble Learning." MATEC Web of Conferences 232 (2018): 02029. http://dx.doi.org/10.1051/matecconf/201823202029.

Abstract:
Financial time series are always a focus of financial market analysis and research. In recent years, with the rapid development of artificial intelligence, machine learning and financial markets have become more and more closely linked. Artificial neural networks are often used to analyze and predict financial time series. Based on deep learning, six-layer long short-term memory neural networks were constructed. Eight long short-term memory neural networks were combined with the Bagging method from ensemble learning, and the resulting neural network ensemble prediction model was applied to the Chinese stock market. The experiment tested the Shanghai Composite Index, Shenzhen Composite Index, Shanghai Stock Exchange 50 Index, Shanghai-Shenzhen 300 Index, Medium and Small Plate Index, and Gem Index during the period from January 4, 2012 to December 29, 2017. The long short-term memory neural network ensemble learning model achieves an accuracy of 58.5%, precision of 58.33%, recall of 73.5%, F1 value of 64.5%, and AUC value of 57.67%, which are better than those of the multilayer long short-term memory neural network model and reflect a good prediction outcome.
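The Bagging scheme in this abstract combines several learners trained on bootstrap samples and takes a majority vote. A minimal sketch where simple threshold functions stand in for the eight trained LSTMs (illustrative only):

```python
import random

def bootstrap_sample(data, rng):
    """Sample with replacement; each ensemble member trains on its own sample."""
    return [rng.choice(data) for _ in data]

def bagging_predict(models, x):
    """Majority vote over the ensemble's up/down labels for input x."""
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)

rng = random.Random(0)
data = [1, 2, 3, 4, 5]
sample = bootstrap_sample(data, rng)

# Toy ensemble: threshold classifiers stand in for trained LSTMs
# (1 = "up", 0 = "down").
models = [lambda x, t=t: 1 if x > t else 0 for t in (1, 2, 3)]
label = bagging_predict(models, 3)
```

In the paper's setting each model would be an LSTM trained on its own bootstrap sample of the index history; only the voting logic is the same here.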
7

Kumar, Naresh, Jatin Bindra, Rajat Sharma, and Deepali Gupta. "Air Pollution Prediction Using Recurrent Neural Network, Long Short-Term Memory and Hybrid of Convolutional Neural Network and Long Short-Term Memory Models." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4580–84. http://dx.doi.org/10.1166/jctn.2020.9283.

Abstract:
Air pollution prediction was not an easy task a few years back. With increasing computation power and the wide availability of datasets, the air pollution prediction problem has been solved to some extent. Inspired by deep learning models, three techniques for air pollution prediction are proposed in this paper. The models used include a recurrent neural network (RNN), long short-term memory (LSTM), and a hybrid combination of a convolutional neural network (CNN) and LSTM. These models are tested by comparing the MSE loss on an air pollution test set from Belgium. The validation loss is 0.0045 for the RNN, 0.00441 for the LSTM, and 0.0049 for the CNN-LSTM hybrid. The losses on the testing dataset for these models are 0.00088, 0.00441, and 0.0049, respectively.
8

Lihong, Dong, and Xie Qian. "Short-term electricity price forecast based on long short-term memory neural network." Journal of Physics: Conference Series 1453 (January 2020): 012103. http://dx.doi.org/10.1088/1742-6596/1453/1/012103.

9

Ghassaei, Sina, and Reza Ravanmehr. "Short-term Load Forecasting using Convolutional Neural Network and Long Short-term Memory." Iranian Electric Industry Journal of Quality and Productivity 10, no. 1 (April 1, 2021): 35–51. http://dx.doi.org/10.52547/ieijqp.10.1.35.

10

Wei, Xiaolu, Binbin Lei, Hongbing Ouyang, and Qiufeng Wu. "Stock Index Prices Prediction via Temporal Pattern Attention and Long-Short-Term Memory." Advances in Multimedia 2020 (December 10, 2020): 1–7. http://dx.doi.org/10.1155/2020/8831893.

Abstract:
This study attempts to predict stock index prices using multivariate time series analysis. The study’s motivation is based on the notion that datasets of stock index prices involve weak periodic patterns, long-term and short-term information, for which traditional approaches and current neural networks such as Autoregressive models and Support Vector Machine (SVM) may fail. This study applied Temporal Pattern Attention and Long-Short-Term Memory (TPA-LSTM) for prediction to overcome the issue. The results show that stock index prices prediction through the TPA-LSTM algorithm could achieve better prediction performance over traditional deep neural networks, such as recurrent neural network (RNN), convolutional neural network (CNN), and long and short-term time series network (LSTNet).
11

Bhandarkar, Tanvi, Vardaan K, Nikhil Satish, S. Sridhar, R. Sivakumar, and Snehasish Ghosh. "Earthquake trend prediction using long short-term memory RNN." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 2 (April 1, 2019): 1304. http://dx.doi.org/10.11591/ijece.v9i2.pp1304-1312.

Abstract:
The prediction of a natural calamity such as an earthquake has been an area of interest for a long time, but accurate results in earthquake forecasting have evaded scientists, even leading some to deem it intrinsically impossible to forecast earthquakes accurately. In this paper, an attempt is made to forecast earthquakes and their trends using data on a series of past earthquakes. A type of recurrent neural network called Long Short-Term Memory (LSTM) is used to model the sequence of earthquakes. The trained model is then used to predict the future trend of earthquakes. An ordinary Feed-Forward Neural Network (FFNN) solution to the same problem was implemented for comparison. The LSTM neural network was found to outperform the FFNN: the R^2 score of the LSTM is better than the FFNN's by 59%.
12

Hua, Chi, Erxi Zhu, Liang Kuang, and Dechang Pi. "Short-term power prediction of photovoltaic power station based on long short-term memory-back-propagation." International Journal of Distributed Sensor Networks 15, no. 10 (October 2019): 155014771988313. http://dx.doi.org/10.1177/1550147719883134.

Abstract:
Accurate prediction of the generation capacity of photovoltaic systems is fundamental to ensuring the stability of the grid and to performing scheduling arrangements correctly. In view of the temporal defects and the local minimum problem of the back-propagation neural network, a power generation forecasting method based on long short-term memory-back-propagation is proposed. On this basis, the traditional prediction dataset is improved. Building on the three traditional methods listed in this article, we propose a fourth method to improve traditional short-term power generation prediction for photovoltaic power stations. Compared with the traditional methods, the long short-term memory-back-propagation neural network based on the improved dataset has a lower prediction error. At the same time, a horizontal comparison with multiple linear regression and the support vector machine shows that the long short-term memory-back-propagation method has several advantages. The short-term forecasting method proposed in this article for the generating capacity of photovoltaic power stations provides a basis for dispatch planning and optimized operation of the power grid.
13

Mizumachi, Mitsunori, and Ryotarou Oka. "Non-linear beamformer with long short-term memory network." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 2 (August 1, 2021): 4355–60. http://dx.doi.org/10.3397/in-2021-2673.

Abstract:
Acoustic beamforming with a microphone array enables spatial filtering over a wide frequency range. It is a challenging issue to sharpen the main lobe in the lower frequency region with a small-scale microphone array, in which the number of microphones and their spacing are small. A neural network-based non-linear beamformer achieves a breakthrough in sharpening the main lobe. Such non-linear beamforming works well for narrowband signals but is weak for wideband signals. Non-linear beamforming with long short-term memory is proposed to deal with wideband speech signals. The long short-term memory network is trained in a recurrent neural network architecture on sequences of audio data such as speech signals. The performance of the proposed beamformer is confirmed using a small-scale 8-channel MEMS microphone array, in which eight microphones are linearly arranged with a neighboring spacing of 10 mm, in a real environment. The beam pattern of the proposed non-linear beamformer succeeds in sharpening the main lobe, whereas the linear delay-and-sum beamformer could not achieve frequency selectivity. The feasibility of the proposed beamformer is also confirmed in speech enhancement.
14

Li, Xiaodong, Changjun Yu, Fulin Su, Taifan Quan, and Xuguang Yang. "Novel training algorithms for long short-term memory neural network." IET Signal Processing 13, no. 3 (May 1, 2019): 304–8. http://dx.doi.org/10.1049/iet-spr.2018.5240.

15

Grande, Davide, Catherine A. Harris, Giles Thomas, and Enrico Anderlini. "Data-Driven Stability Assessment of Multilayer Long Short-Term Memory Networks." Applied Sciences 11, no. 4 (February 19, 2021): 1829. http://dx.doi.org/10.3390/app11041829.

Abstract:
Recurrent Neural Networks (RNNs) are increasingly being used for model identification, forecasting, and control. When identifying physical models without mathematical knowledge of the system, Nonlinear AutoRegressive models with eXogenous inputs (NARX) or Nonlinear AutoRegressive Moving-Average models with eXogenous inputs (NARMAX) methods are typically used. In the context of data-driven control, machine learning algorithms have been shown to perform comparably to advanced control techniques, but they lack the guarantees of traditional stability theory. This paper illustrates a method to prove a posteriori the stability of a generic neural network, showing its application to the state-of-the-art RNN architecture. The presented method relies on identifying the poles associated with the network, designed starting from the input/output data. Providing a framework to guarantee the stability of any neural network architecture, combined with generalisability properties and applicability to different fields, can significantly broaden the use of neural networks in dynamic systems modelling and control.
16

Wang, Lipeng. "An Improved Long Short-Term Memory Neural Network for Macroeconomic Forecast." Revue d'Intelligence Artificielle 34, no. 5 (November 20, 2020): 577–84. http://dx.doi.org/10.18280/ria.340507.

Abstract:
The statistics and cyclical swings of macroeconomics are necessary for exploring the internal laws and features of the market economy. To realize intelligent and efficient macroeconomic forecasting, this paper puts forward a macroeconomic forecast model based on an improved long short-term memory (LSTM) neural network. Firstly, a scientific evaluation index system (EIS) was constructed for the macroeconomy. The correlation between indices was measured by the Spearman correlation coefficient, and the index data were preprocessed by interpolating the missing items and converting low-frequency series into high-frequency series. Next, the corresponding mixed-frequency dataset was constructed, followed by the derivation of the state space equation. Then, the LSTM neural network was optimized by the Kalman filter for macroeconomic forecasting. The effectiveness of the proposed forecast method was verified through experiments. The research results lay a theoretical basis for the application of LSTM in financial forecasts.
17

Arya, Putu Bagus, Wayan Firdaus Mahmudy, and Achmad Basuki. "Website Visitors Forecasting using Recurrent Neural Network Method." Journal of Information Technology and Computer Science 6, no. 2 (September 3, 2021): 137–45. http://dx.doi.org/10.25126/jitecs.202162296.

Abstract:
The number of visitors and the content accessed by users on a site indicate the site's performance. Therefore, forecasting needs to be done to find out how many users will come to a website. This study applies the Long Short Term Memory method, which is a development of the Recurrent Neural Network method. Long Short Term Memory has the advantage of an architecture for remembering and forgetting outputs that are processed back into the inputs. In addition, Long Short Term Memory is able to keep the errors that occur during backpropagation from growing. The comparison method used in this study is the Backpropagation Neural Network, a method that is often used in various fields. The testing used data on new visitors and first-time visitors from 2018 to 2019, aggregated per month. The computational experiments prove that Long Short Term Memory produces better results in terms of mean square error (MSE) than those achieved by the Backpropagation Neural Network method.
18

Tra, Nguyen Ngoc, Ho Phuoc Tien, Nguyen Thanh Dat, and Nguyen Ngoc Vu. "VN-INDEX TREND PREDICTION USING LONG-SHORT TERM MEMORY NEURAL NETWORKS." Journal of Science and Technology: Issue on Information and Communications Technology 17, no. 12.2 (December 9, 2019): 61. http://dx.doi.org/10.31130/ict-ud.2019.94.

Abstract:
The paper attempts to forecast the future trend of the Vietnam index (VN-index) by using long short-term memory (LSTM) networks. In particular, an LSTM-based neural network is employed to study the temporal dependence in time-series data of past and present VN-index values. Empirical forecasting results show that LSTM-based stock trend prediction offers an accuracy of about 60%, which outperforms moving-average-based prediction.
19

Zhou, Hangxia, Qian Liu, Ke Yan, and Yang Du. "Deep Learning Enhanced Solar Energy Forecasting with AI-Driven IoT." Wireless Communications and Mobile Computing 2021 (June 18, 2021): 1–11. http://dx.doi.org/10.1155/2021/9249387.

Abstract:
Short-term photovoltaic (PV) energy generation forecasting models are important for stabilizing the power integration between PV systems and the smart grid in artificial intelligence- (AI-) driven internet of things (IoT) modeling of smart cities. With the recent development of AI and IoT technologies, deep learning techniques can achieve more accurate energy generation forecasting results for PV systems. Difficulties exist for traditional PV energy generation forecasting methods in accounting for external feature variables, such as seasonality. In this study, we propose a hybrid deep learning method that combines clustering techniques, a convolutional neural network (CNN), long short-term memory (LSTM), and an attention mechanism with a wireless sensor network to overcome the existing difficulties of the PV energy generation forecasting problem. The overall proposed method is divided into three stages, namely, clustering, training, and forecasting. In the clustering stage, correlation analysis and self-organizing mapping are employed to select the most relevant factors in the historical data. In the training stage, a convolutional neural network, a long short-term memory neural network, and an attention mechanism are combined to construct a hybrid deep learning model that performs the forecasting task. In the forecasting stage, the most appropriate trained model is selected based on the month of the testing data. The experimental results showed significantly higher prediction accuracy for all time intervals compared to existing methods, including traditional artificial neural networks, long short-term memory neural networks, and an algorithm combining a long short-term memory neural network with an attention mechanism.
20

Tian, Chujie, Jian Ma, Chunhong Zhang, and Panpan Zhan. "A Deep Neural Network Model for Short-Term Load Forecast Based on Long Short-Term Memory Network and Convolutional Neural Network." Energies 11, no. 12 (December 14, 2018): 3493. http://dx.doi.org/10.3390/en11123493.

Abstract:
Accurate electrical load forecasting is of great significance to help power companies in better scheduling and efficient management. Since high levels of uncertainties exist in the load time series, it is a challenging task to make accurate short-term load forecast (STLF). In recent years, deep learning approaches provide better performance to predict electrical load in real world cases. The convolutional neural network (CNN) can extract the local trend and capture the same pattern, and the long short-term memory (LSTM) is proposed to learn the relationship in time steps. In this paper, a new deep neural network framework that integrates the hidden feature of the CNN model and the LSTM model is proposed to improve the forecasting accuracy. The proposed model was tested in a real-world case, and detailed experiments were conducted to validate its practicality and stability. The forecasting performance of the proposed model was compared with the LSTM model and the CNN model. The Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) were used as the evaluation indexes. The experimental results demonstrate that the proposed model can achieve better and stable performance in STLF.
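The three evaluation indexes named in this abstract (MAE, MAPE, RMSE) have standard definitions that are easy to state in code. A minimal sketch with made-up load values (the numbers are ours, not the paper's results):

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of the errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in per cent (assumes no zero actuals)."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Square Error: penalizes large errors more than MAE."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

# Toy hourly loads (MW) and forecasts.
actual    = [100.0, 200.0, 150.0]
predicted = [110.0, 190.0, 150.0]
scores = (mae(actual, predicted), mape(actual, predicted), rmse(actual, predicted))
```

RMSE weighting of large errors explains why the three indexes can rank models differently on the same forecasts.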
21

Lv, Liujia, Weijian Kong, Jie Qi, and Jue Zhang. "An improved long short-term memory neural network for stock forecast." MATEC Web of Conferences 232 (2018): 01024. http://dx.doi.org/10.1051/matecconf/201823201024.

Abstract:
This paper presents an improved long short-term memory (LSTM) neural network based on particle swarm optimization (PSO), which is applied to predict the closing price of a stock. PSO is introduced to optimize the weights of the LSTM neural network, which reduces the prediction error. After preprocessing five attributes of the historical stock data (opening price, closing price, highest price, lowest price, and daily volume), we train the LSTM on time series of the historical data. Finally, we apply the proposed LSTM to predict the closing price of the stock over the last two years. Compared with typical algorithms in simulation, we find that the LSTM has better reliability and adaptability, and the improved PSO-LSTM algorithm has better accuracy.
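The PSO component named in this abstract is a standard population-based optimizer. A minimal sketch follows; in the paper's setting the objective would score an LSTM weight vector by prediction error, whereas here it is a toy quadratic, and the coefficients (inertia 0.7, acceleration 1.5) are common defaults, not the paper's settings:

```python
import random

def pso(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimisation minimising f over R^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(g):
                    g = pos[i][:]
    return g

# Toy objective: the sphere function, standing in for LSTM prediction error.
best = pso(lambda w: sum(x * x for x in w), dim=2)
```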
22

Tanaka, Tomohiro, Takafumi Moriya, Takahiro Shinozaki, Shinji Watanabe, Takaaki Hori, and Kevin Duh. "Evolutionary optimization of long short-term memory neural network language model." Journal of the Acoustical Society of America 140, no. 4 (October 2016): 3062. http://dx.doi.org/10.1121/1.4969532.

23

Jeon, Seung-Bae, Myeong-Hun Jeong, Tae-Young Lee, Jeong-Hwan Lee, and Jae-Myoung Cho. "Bus Travel Speed Prediction Using Long Short-term Memory Neural Network." Sensors and Materials 32, no. 12 (December 29, 2020): 4441. http://dx.doi.org/10.18494/sam.2020.3111.

24

Cai, Changchun, Haolin Liu, Yuan Tao, Zhixiang Deng, Weili Dai, and Jie Chen. "Microgrid Equivalent Modeling Based on Long Short-Term Memory Neural Network." IEEE Access 8 (2020): 23120–33. http://dx.doi.org/10.1109/access.2020.2966238.

25

Yang, Cheng-Hong, Chih-Hsien Wu, and Chih-Min Hsieh. "Long Short-Term Memory Recurrent Neural Network for Tidal Level Forecasting." IEEE Access 8 (2020): 159389–401. http://dx.doi.org/10.1109/access.2020.3017089.

26

Chen, Yao, Jiancheng Lv, Yanan Sun, and Bijue Jia. "Heart sound segmentation via Duration Long–Short Term Memory neural network." Applied Soft Computing 95 (October 2020): 106540. http://dx.doi.org/10.1016/j.asoc.2020.106540.

27

Chen, Xueyan, Jie He, Xiaoqiang Wu, Wei Yan, and Wei Wei. "Sleep staging by bidirectional long short-term memory convolution neural network." Future Generation Computer Systems 109 (August 2020): 188–96. http://dx.doi.org/10.1016/j.future.2020.03.019.

28

Mohamed, Takwa, Sabah Sayed, Akram Salah, and Essam H. Houssein. "Long Short-Term Memory Neural Networks for RNA Viruses Mutations Prediction." Mathematical Problems in Engineering 2021 (June 25, 2021): 1–9. http://dx.doi.org/10.1155/2021/9980347.

Abstract:
Viral progress remains a major deterrent to the viability of antiviral drugs. The ability to anticipate this development will assist in the early detection of drug-resistant strains and may help make antiviral drugs the most effective plan. In recent years, a deep learning model called the seq2seq neural network has emerged and has been widely used in natural language processing. In this research, we borrow this approach for predicting next-generation sequences using a seq2seq LSTM neural network while treating these sequences as text data. We used one-hot vectors to represent the sequences as input to the model, which preserves the basic positional information of each nucleotide in the sequences. Two RNA virus sequence datasets are used to evaluate the proposed model, which achieved encouraging results. The results illustrate the potential of the LSTM neural network for DNA and RNA sequences in solving other sequencing issues in bioinformatics.
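The one-hot input representation this abstract mentions maps each nucleotide to a unit vector, so position and identity are both preserved. A minimal sketch (the alphabet ordering is our choice, not necessarily the paper's):

```python
NUCLEOTIDES = "ACGU"  # RNA alphabet; ordering is an assumption for illustration

def one_hot(seq):
    """Encode an RNA sequence as a list of one-hot vectors, one per position,
    preserving where each nucleotide occurs in the sequence."""
    index = {n: i for i, n in enumerate(NUCLEOTIDES)}
    vectors = []
    for n in seq:
        v = [0] * len(NUCLEOTIDES)
        v[index[n]] = 1
        vectors.append(v)
    return vectors

encoded = one_hot("AUG")
```

A seq2seq model would consume this list of vectors step by step and emit the predicted next-generation sequence in the same encoding.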
29

Rußwurm, M., and M. Körner. "MULTI-TEMPORAL LAND COVER CLASSIFICATION WITH LONG SHORT-TERM MEMORY NEURAL NETWORKS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 31, 2017): 551–58. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-551-2017.

Abstract:
Land cover classification (LCC) is a central and wide field of research in earth observation and has already put forth a variety of classification techniques. Many approaches are based on classification techniques considering observations at certain points in time. However, some land cover classes, such as crops, change their spectral characteristics due to environmental influences and can thus not be monitored effectively with classical mono-temporal approaches. Nevertheless, these temporal observations should be utilized to benefit the classification process. After extensive research has been conducted on modeling temporal dynamics by spectro-temporal profiles using vegetation indices, we propose a deep learning approach to utilize these temporal characteristics for classification tasks. In this work, we show how long short-term memory (LSTM) neural networks can be employed for crop identification purposes with SENTINEL 2A observations from large study areas and label information provided by local authorities. We compare these temporal neural network models, i.e., LSTM and recurrent neural network (RNN), with a classical non-temporal convolutional neural network (CNN) model and an additional support vector machine (SVM) baseline. With our rather straightforward LSTM variant, we exceeded state-of-the-art classification performance, thus opening promising potential for further research.
30

How, Dickson Neoh Tze, Chu Kiong Loo, and Khairul Salleh Mohamed Sahari. "Behavior recognition for humanoid robots using long short-term memory." International Journal of Advanced Robotic Systems 13, no. 6 (October 26, 2016): 172988141666336. http://dx.doi.org/10.1177/1729881416663369.

Abstract:
Learning from demonstration plays an important role in enabling robots to acquire new behaviors from human teachers. Within learning from demonstration, robots learn new tasks by recognizing a set of preprogrammed behaviors or skills as building blocks for new, potentially more complex tasks. One important aspect of this approach is the recognition of the set of behaviors that comprises the entire task. The ability to recognize a complex task as a sequence of simple behaviors enables the robot to generalize better to more complex tasks. In this article, we propose that primitive behaviors can be taught to a robot via learning from demonstration. In our experiment, we teach the robot new behaviors by demonstrating them to the robot several times. Following that, a long short-term memory recurrent neural network is trained to recognize the behaviors. In this study, we taught at least six behaviors to a NAO humanoid robot and trained a long short-term memory recurrent neural network to recognize the behaviors using a supervised learning scheme. Our results show that long short-term memory can recognize all the taught behaviors effectively, and that it is able to generalize to recognize similar types of behaviors that have not previously been demonstrated on the robot. We also show that long short-term memory is advantageous compared to other neural network frameworks in recognizing the behaviors in the presence of noise.
31

Sherstinsky, Alex. "Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) network." Physica D: Nonlinear Phenomena 404 (March 2020): 132306. http://dx.doi.org/10.1016/j.physd.2019.132306.

32

Jadid Abdulkadir, Said, Hitham Alhussian, Muhammad Nazmi, and Asim A Elsheikh. "Long Short Term Memory Recurrent Network for Standard and Poor’s 500 Index Modelling." International Journal of Engineering & Technology 7, no. 4.15 (October 7, 2018): 25. http://dx.doi.org/10.14419/ijet.v7i4.15.21365.

Abstract:
Forecasting time-series data is imperative, especially when planning requires modelling under uncertain knowledge of future events. Recurrent neural network models have been applied in industry and outperform standard artificial neural networks in forecasting, but fail in long-term time-series forecasting due to the vanishing gradient problem. This study offers a robust solution for long-term forecasting using a special recurrent neural network architecture known as the Long Short Term Memory (LSTM) model, which overcomes the vanishing gradient problem. LSTM is specially designed to avoid the long-term dependency problem as its default behavior. Empirical analysis is performed using quantitative forecasting metrics and comparative model performance on the forecasted outputs. An evaluation analysis validates that the LSTM model provides better forecasts of the Standard & Poor's 500 Index (S&P 500) in terms of error metrics than other forecasting models.
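The gating mechanism that lets an LSTM sidestep the vanishing gradient problem can be illustrated with a minimal single-unit cell step. The scalar "weights" below are a toy stand-in for the real weight matrices; this is an illustrative sketch, not any paper's implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM cell. The cell state c is updated
    additively (f * c_prev + i * g), which is what lets gradients survive
    across long time lags instead of shrinking multiplicatively."""
    z = x + h_prev            # toy combined input; real cells use weight matrices
    i = sigmoid(w["i"] * z)   # input gate: how much new information to write
    f = sigmoid(w["f"] * z)   # forget gate: how much old state to keep
    o = sigmoid(w["o"] * z)   # output gate: how much state to expose
    g = math.tanh(w["g"] * z) # candidate cell value
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c

# Run the cell over a short input sequence.
w = {"i": 1.0, "f": 1.0, "o": 1.0, "g": 1.0}
h, c = 0.0, 0.0
for x in [0.5, -0.2, 0.1]:
    h, c = lstm_step(x, h, c, w)
```

The forget gate is the key design choice: when it stays close to 1, the cell state carries information unchanged across many steps, which is the "default behavior" the abstract refers to.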
33

BALOGLU, ULAS BARAN, and ÖZAL YILDIRIM. "CONVOLUTIONAL LONG-SHORT TERM MEMORY NETWORKS MODEL FOR LONG DURATION EEG SIGNAL CLASSIFICATION." Journal of Mechanics in Medicine and Biology 19, no. 01 (February 2019): 1940005. http://dx.doi.org/10.1142/s0219519419400050.

Abstract:
Background and objective: Deep learning structures have recently achieved remarkable success in the field of machine learning. Convolutional neural networks (CNN) in image processing and long-short term memory (LSTM) in time-series analysis are commonly used deep learning algorithms. Healthcare applications of deep learning algorithms provide important contributions for computer-aided diagnosis research. In this study, a convolutional long-short term memory (CLSTM) network was used for automatic classification of EEG signals and automatic seizure detection. Methods: A new nine-layer deep network model consisting of convolutional and LSTM layers was designed. The signals processed in the convolutional layers were given as an input to the LSTM network, whose outputs were processed in densely connected neural network layers. The EEG data is appropriate for a model having 1-D convolution layers. A bidirectional model was employed in the LSTM layer. Results: The Bonn University EEG database with five different datasets was used for experimental studies. In this database, each dataset contains 100 single-channel EEG segments of 23.6 s duration, each consisting of 4097 samples (173.61 Hz). Eight two-class and three three-class clinical scenarios were examined. When the experimental results were evaluated, it was seen that the proposed model had high accuracy on both binary and ternary classification tasks. Conclusions: The proposed end-to-end learning structure showed a good performance without using any hand-crafted feature extraction or shallow classifiers to detect the seizures. The model does not require filtering, and also automatically learns to filter the input as well. As a result, the proposed model can process long duration EEG signals without applying segmentation, and can detect epileptic seizures automatically by using the correlation of ictal and interictal signals of raw data.
34

Minh-Tuan, Nguyen, and Yong-Hwa Kim. "Bidirectional Long Short-Term Memory Neural Networks for Linear Sum Assignment Problems." Applied Sciences 9, no. 17 (August 22, 2019): 3470. http://dx.doi.org/10.3390/app9173470.

Abstract:
Many resource allocation problems can be modeled as a linear sum assignment problem (LSAP) in wireless communications. Deep learning techniques such as the fully-connected neural network and convolutional neural network have been used to solve the LSAP. We herein propose a new deep learning model based on the bidirectional long short-term memory (BDLSTM) structure for the LSAP. In the proposed method, the LSAP is divided into sequential sub-assignment problems, and BDLSTM extracts the features from sequential data. Simulation results indicate that the proposed BDLSTM is more memory efficient and achieves a higher accuracy than conventional techniques.
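For context, the linear sum assignment problem that such a network approximates can be solved exactly by enumeration for small instances. A brute-force reference sketch (not the paper's method, which targets large problems via BDLSTM):

```python
from itertools import permutations

def solve_lsap(cost):
    """Exact LSAP by exhaustive enumeration: find the row-to-column
    assignment minimizing total cost. Feasible only for small n, but it
    provides the ground-truth labels a learned assignment model mimics."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return list(best_perm), best_cost

# assignment[i] is the column assigned to row i.
assignment, total = solve_lsap([[4, 1, 3], [2, 0, 5], [3, 2, 2]])
```

Enumeration is O(n!), which is exactly why learned sequential models (or the Hungarian algorithm) are attractive for larger assignment problems.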
35

Sugiartawan, Putu, Agus Aan Jiwa Permana, and Paholo Iman Prakoso. "Forecasting Kunjungan Wisatawan Dengan Long Short Term Memory (LSTM)." Jurnal Sistem Informasi dan Komputer Terapan Indonesia (JSIKTI) 1, no. 1 (September 30, 2018): 43–52. http://dx.doi.org/10.33173/jsikti.5.

Abstract:
Bali is one of the favorite tourist destinations in Indonesia; around 4 million foreign tourists visited Bali over 2015 (Dispar Bali). These visits are spread across the various regions and tourist attractions located in Bali. Although tourist visits to Bali can be said to be large, they are not evenly distributed, and there are significant fluctuations in tourist visits. Forecasting techniques can uncover the pattern of tourist visits: forecasting aims to model the pattern of previous data so that the next data pattern can be anticipated. This study uses a recurrent neural network (RNN) to predict the level of tourist visits; the RNN technique used is Long Short-Term Memory (LSTM), a model that performs better than a simple RNN. The level of tourist visits is predicted with the LSTM algorithm using data on tourist visits to one of the attractions in Bali. The result obtained using the LSTM model is an error value of 15,962, measured with the MAPE technique. The LSTM architecture used consists of 16 neuron units in the hidden layer, a learning rate of 0.01, a window size of 3, and one hidden layer.
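The windowed supervised framing (window size 3) and the MAPE metric mentioned in this abstract can be sketched as follows; an illustrative example, not the study's code:

```python
def make_windows(series, size):
    """Split a series into (input window, next value) pairs — the
    supervised framing used when training an LSTM with a window size."""
    return [(series[i:i + size], series[i + size])
            for i in range(len(series) - size)]

def mape(actual, predicted):
    """Mean absolute percentage error, the metric reported in the study."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Toy visit-count series: each window of 3 values predicts the next one.
pairs = make_windows([10, 12, 11, 13, 14], 3)
err = mape([100, 200], [90, 220])
```

Each pair becomes one training sample: the LSTM reads the window step by step and is fit to output the following value.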
36

Brocki, Łukasz, and Krzysztof Marasek. "Deep Belief Neural Networks and Bidirectional Long-Short Term Memory Hybrid for Speech Recognition." Archives of Acoustics 40, no. 2 (June 1, 2015): 191–95. http://dx.doi.org/10.1515/aoa-2015-0021.

Abstract:
Abstract This paper describes a Deep Belief Neural Network (DBNN) and Bidirectional Long-Short Term Memory (LSTM) hybrid used as an acoustic model for Speech Recognition. It was demonstrated by many independent researchers that DBNNs exhibit performance superior to other known machine learning frameworks in terms of speech recognition accuracy. Their superiority comes from the fact that these are deep learning networks. However, a trained DBNN is simply a feed-forward network with no internal memory, unlike Recurrent Neural Networks (RNNs), which are Turing complete and do possess internal memory, thus allowing them to make use of longer context. In this paper, an experiment is performed to make a hybrid of a DBNN with an advanced bidirectional RNN used to process its output. Results show that the use of the new DBNN-BLSTM hybrid as the acoustic model for Large Vocabulary Continuous Speech Recognition (LVCSR) increases word recognition accuracy. However, the new model has many parameters and in some cases may suffer performance issues in real-time applications.
37

Zhou, Hangxia, Yujin Zhang, Lingfan Yang, Qian Liu, Ke Yan, and Yang Du. "Short-Term Photovoltaic Power Forecasting Based on Long Short Term Memory Neural Network and Attention Mechanism." IEEE Access 7 (2019): 78063–74. http://dx.doi.org/10.1109/access.2019.2923006.

38

Chen, Shinan, and Yunkai Qiao. "Short-term forecast of Yangtze River water level based on Long Short-Term Memory neural network." IOP Conference Series: Earth and Environmental Science 831, no. 1 (August 1, 2021): 012051. http://dx.doi.org/10.1088/1755-1315/831/1/012051.

39

Chen, Gonggui, Bangrui Tang, Xianjun Zeng, Ping Zhou, Peng Kang, and Hongyu Long. "Short-term wind speed forecasting based on long short-term memory and improved BP neural network." International Journal of Electrical Power & Energy Systems 134 (January 2022): 107365. http://dx.doi.org/10.1016/j.ijepes.2021.107365.

40

Fan, H., M. Yang, F. Xiao, and K. Zhao. "PREDICTING DAILY PM2.5 USING WEIGHTED LONG SHORT-TERM MEMORY NEURAL NETWORK MODEL." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 22, 2020): 1451–55. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-1451-2020.

Abstract:
Abstract. Over the past few decades, air pollution has caused serious damage to public health, making accurate prediction of PM2.5 crucial. Because air pollutants are transported among areas, PM2.5 concentration is strongly spatiotemporally correlated. However, air pollution monitoring sites are unevenly distributed, so the spatiotemporal correlation between a central site and its surrounding sites varies with the density of sites, which most existing methods neglect. To tackle this problem, this study proposed a weighted long short-term memory neural network extended model (WLSTME), which addresses how the density of sites and wind conditions affect the spatiotemporal correlation of air pollution concentration. First, several of the nearest surrounding sites were chosen as neighbour sites of the central station, and their distances, air pollution concentrations, and wind conditions were input to a multi-layer perceptron (MLP) to generate weighted historical PM2.5 time series data. Second, the historical PM2.5 concentration of the central site and the weighted PM2.5 series of the neighbour sites were input into an LSTM to address spatiotemporal dependency simultaneously and extract spatiotemporal features. Finally, another MLP was utilized to integrate the spatiotemporal features extracted above with the meteorological data of the central site to generate forecasts of the central site's future PM2.5 concentration. Daily PM2.5 concentration and meteorological data for Beijing–Tianjin–Hebei from 2015 to 2017 were collected to train the models and evaluate their performance. Experimental comparisons with three other methods showed that the proposed WLSTME model has the lowest RMSE (40.67) and MAE (26.10) and the highest p (0.59). This finding confirms that WLSTME can significantly improve PM2.5 prediction accuracy.
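The idea of combining neighbour-site series into one weighted series can be illustrated with inverse-distance weights. Note that the paper learns these weights with an MLP from distance, concentration, and wind inputs, so the fixed weighting below is only an illustrative stand-in:

```python
def weighted_neighbour_series(neighbour_series, distances):
    """Combine several neighbour-site PM2.5 series into one weighted
    series. Inverse-distance weighting is an illustrative simplification
    of the MLP-generated weights described in the paper."""
    weights = [1.0 / d for d in distances]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize to sum to 1
    length = len(neighbour_series[0])
    return [sum(w * s[t] for w, s in zip(weights, neighbour_series))
            for t in range(length)]

# Two equidistant neighbours contribute equally at each time step.
combined = weighted_neighbour_series([[10.0, 20.0], [30.0, 40.0]], [1.0, 1.0])
```

The combined series plus the central site's own history would then form the LSTM input sequence.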
41

Rizal, Ahmad Ashril, and Siti Soraya. "Multi Time Steps Prediction dengan Recurrent Neural Network Long Short Term Memory." MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer 18, no. 1 (November 30, 2018): 115–24. http://dx.doi.org/10.30812/matrik.v18i1.344.

Abstract:
The absence of natural resources such as oil and gas, forest products, or large-scale manufacturing on the island of Lombok has made tourism the mainstay sector of regional development. The contribution of the tourism sector shows an increasing trend from year to year, and the positive impact of tourist spending on the economy is distributed across various sectors. However, local governments generally prepare regional tourism facilities only around local events, even though tourist visits are not driven by local events alone. Preparation by local governments and tourism operators is essential to stabilise tourist visits. This study examines the prediction of tourist visits using a Recurrent Neural Network Long Short Term Memory (RNN LSTM) approach. An LSTM holds information outside the normal flow of the recurrent network in a gated cell. The cell makes decisions about what to store and when to allow reading, writing, and deletion, through gates that open and close. Gates pass information based on the strength of the incoming signal, filtered by the weights of the gates themselves. These weights, like the input and hidden-unit weights, are adjusted through the learning process of the recurrent network. Building a multi-time-step tourist-visit prediction model with RNN LSTM yielded an RMSE of 6888.37 on the training data and 14684.33 on the testing data.
42

Su, Yuanhang, and C. C. Jay Kuo. "On extended long short-term memory and dependent bidirectional recurrent neural network." Neurocomputing 356 (September 2019): 151–61. http://dx.doi.org/10.1016/j.neucom.2019.04.044.

43

Le, Ho, Lee, and Jung. "Application of Long Short-Term Memory (LSTM) Neural Network for Flood Forecasting." Water 11, no. 7 (July 5, 2019): 1387. http://dx.doi.org/10.3390/w11071387.

Abstract:
Flood forecasting is an essential requirement in integrated water resource management. This paper suggests a Long Short-Term Memory (LSTM) neural network model for flood forecasting, where daily discharge and rainfall were used as input data. Moreover, characteristics of the data sets which may influence the model performance were also of interest. The Da River basin in Vietnam was chosen, and two different combinations of input data sets from before 1985 (when the Hoa Binh dam was built) were used for one-, two-, and three-day-ahead flowrate forecasting at Hoa Binh Station. The predictive ability of the model is quite impressive: the Nash–Sutcliffe efficiency (NSE) reached 99%, 95%, and 87% for the three forecasting cases, respectively. The findings of this study suggest a viable option for flood forecasting on the Da River in Vietnam, where the river basin stretches across several countries and downstream flows (in Vietnam) may fluctuate suddenly due to flood discharge from upstream hydroelectric reservoirs.
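The Nash–Sutcliffe efficiency used to evaluate these forecasts has a simple closed form; a minimal sketch:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of the model's squared
    error to the variance of observations around their mean. 1.0 means a
    perfect forecast; 0.0 means no better than predicting the mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

score = nse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # perfect match
```

An NSE of 99% for one-day-ahead forecasts therefore means the model's residual variance is only 1% of the natural variance of the discharge series.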
44

Zhang, Xu, Yixian Wang, Yuchuan Zheng, Ruiting Ding, Yunlong Chen, Yi Wang, Xueting Cheng, and Shuai Yue. "Reactive Load Prediction Based on a Long Short-Term Memory Neural Network." IEEE Access 8 (2020): 90969–77. http://dx.doi.org/10.1109/access.2020.2991739.

45

Ma, Wu, Zhang, Wu, Jeon, Tan, and Zhang. "Sea Clutter Amplitude Prediction Using a Long Short-Term Memory Neural Network." Remote Sensing 11, no. 23 (November 28, 2019): 2826. http://dx.doi.org/10.3390/rs11232826.

Abstract:
In the marine environment, shore-based radars play an important role in military surveillance and sensing. Sea clutter is one of the main factors affecting the performance of shore-based radar. Affected by marine environmental factors and radar parameters, the fluctuation law of sea clutter amplitude is very complicated. When training a sea clutter amplitude prediction model, traditional methods update the model parameters according to the current input data and the current model parameters, and cannot utilize the historical information of sea clutter amplitude; they can learn only the short-term variation characteristics of the sea clutter. In order to learn the long-term variation law of sea clutter, a sea clutter prediction system based on the long short-term memory neural network is proposed. On sea clutter data collected by IPIX radar, UHF-band radar, and S-band radar, experimental results show that the mean square error of this prediction system is smaller than that of traditional prediction methods. The sea clutter suppression signal is extracted by comparing the predicted sea clutter data with the original sea clutter data. The results show that the proposed sea clutter prediction system has a good effect on sea clutter suppression.
46

Yang, Rui, Mengjie Huang, Qidong Lu, and Maiying Zhong. "Rotating Machinery Fault Diagnosis Using Long-short-term Memory Recurrent Neural Network." IFAC-PapersOnLine 51, no. 24 (2018): 228–32. http://dx.doi.org/10.1016/j.ifacol.2018.09.582.

47

Zia, Tehseen, and Usman Zahid. "Long short-term memory recurrent neural network architectures for Urdu acoustic modeling." International Journal of Speech Technology 22, no. 1 (November 8, 2018): 21–30. http://dx.doi.org/10.1007/s10772-018-09573-7.

48

Jin, Ning, Yongkang Zeng, Ke Yan, and Zhiwei Ji. "Multivariate Air Quality Forecasting With Nested Long Short Term Memory Neural Network." IEEE Transactions on Industrial Informatics 17, no. 12 (December 2021): 8514–22. http://dx.doi.org/10.1109/tii.2021.3065425.

49

Shu, Jingxiao, Dongyue Zhao, Xuda Zheng, Yiwen Li, and Yufeng Zhang. "Bridge Temperature Prediction Model Based on Long-Short Term Memory Neural Network." Journal of Physics: Conference Series 1966, no. 1 (July 1, 2021): 012013. http://dx.doi.org/10.1088/1742-6596/1966/1/012013.

50

Volkov, S. S., and I. I. Kurochkin. "Network attacks classification using Long Short-term memory based neural networks in Software-Defined Networks." Procedia Computer Science 178 (2020): 394–403. http://dx.doi.org/10.1016/j.procs.2020.11.041.

