A selection of scholarly literature on the topic "Long Short-Term Memory Neural Network"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Long Short-Term Memory Neural Network."

Next to each work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Long Short-Term Memory Neural Network":

1

Chang, Ching-Chun. "Neural Reversible Steganography with Long Short-Term Memory." Security and Communication Networks 2021 (April 4, 2021): 1–14. http://dx.doi.org/10.1155/2021/5580272.

Abstract:
Deep learning has brought about a phenomenal paradigm shift in digital steganography. However, there is as yet no consensus on the use of deep neural networks in reversible steganography, a class of steganographic methods that permits the distortion caused by message embedding to be removed. The underdevelopment of the field of reversible steganography with deep learning can be attributed to the perception that perfect reversal of steganographic distortion seems scarcely achievable, due to the lack of transparency and interpretability of neural networks. Rather than employing neural networks in the coding module of a reversible steganographic scheme, we instead apply them to an analytics module that exploits data redundancy to maximise steganographic capacity. State-of-the-art reversible steganographic schemes for digital images are based primarily on a histogram-shifting method in which the analytics module is often modelled as a pixel intensity predictor. In this paper, we propose to refine the prior estimation from a conventional linear predictor through a neural network model. The refinement can be to some extent viewed as a low-level vision task (e.g., noise reduction and super-resolution imaging). In this way, we explore a leading-edge neuroscience-inspired low-level vision model based on long short-term memory with a brief discussion of its biological plausibility. Experimental results demonstrated a significant boost contributed by the neural network model in terms of prediction accuracy and steganographic rate-distortion performance.
2

Labusov, M. V. "Short-Term Financial Time Series Analysis with Long Short-Term Memory Neural Networks." Ekonomika i Upravlenie: Problemy, Resheniya 3, no. 4 (2021): 165–77. http://dx.doi.org/10.36871/ek.up.p.r.2021.04.03.023.

Abstract:
The process of creating a long short-term memory neural network for analyzing and forecasting high-frequency financial time series is considered in the article. The research dataset is compiled first. The parameters of the long short-term memory neural network are then estimated on the training subsamples. The signs of future returns are forecast over a 90-minute horizon with the estimated neural network. In conclusion, a trading strategy is formulated.
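As a rough illustration of the kind of model this abstract describes (a sketch under assumed settings, not the author's code), the sign of the next return can be framed as binary classification over a window of past returns. The window length, layer width, and random stand-in data below are invented placeholders.

```python
import numpy as np
import tensorflow as tf

window, n_features = 60, 1  # assumed look-back window over past returns

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(next return > 0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# random stand-in for scaled high-frequency returns and their sign labels
X = np.random.randn(1000, window, n_features).astype("float32")
y = (np.random.rand(1000) > 0.5).astype("float32")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```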
3

Hochreiter, Sepp, and Jürgen Schmidhuber. "Long Short-Term Memory." Neural Computation 9, no. 8 (November 1, 1997): 1735–80. http://dx.doi.org/10.1162/neco.1997.9.8.1735.

Abstract:
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
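The gating mechanism the abstract describes is compact enough to write out directly. Below is a minimal NumPy sketch of one step of the standard LSTM cell; as an assumption of this sketch, it includes the forget gate added by Gers et al. (2000), which the original 1997 formulation lacked.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W has shape (4*H, D+H), b has shape (4*H,).
    Gate order in W and b: input, forget, candidate, output."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate
    g = np.tanh(z[2*H:3*H])    # candidate cell update
    o = sigmoid(z[3*H:4*H])    # output gate
    c = f * c_prev + i * g     # additive update: the "constant error carousel"
    h = o * np.tanh(c)
    return h, c

# toy usage: D = 3 input features, H = 2 hidden units
rng = np.random.default_rng(0)
D, H = 3, 2
W = rng.standard_normal((4 * H, D + H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(D), h, c, W, b)
```

The additive cell update is what enforces the constant error flow the abstract refers to: gradients can pass through the cell state across long lags without vanishing.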
4

Kadari, Rekia, Yu Zhang, Weinan Zhang, and Ting Liu. "CCG supertagging with bidirectional long short-term memory networks." Natural Language Engineering 24, no. 1 (September 4, 2017): 77–90. http://dx.doi.org/10.1017/s1351324917000250.

Abstract:
Neural-network-based approaches have recently produced good performance on natural language tasks such as supertagging. In the supertagging task, a supertag (lexical category) is assigned to each word in an input sequence. Combinatory Categorial Grammar supertagging is a more challenging problem than other sequence-tagging problems, such as part-of-speech (POS) tagging and named entity recognition, due to the large number of lexical categories. Simple recurrent neural networks (RNNs) have been shown to significantly outperform the previous state-of-the-art feed-forward neural networks. On the other hand, it is well known that recurrent networks fail to learn long dependencies. In this paper, we introduce a new neural network architecture based on backward and Bidirectional Long Short-Term Memory (BLSTM) networks that has the ability to memorize information over long dependencies and benefit from both past and future information. State-of-the-art methods focus on previous information, whereas BLSTM has access to information in both the previous and future directions. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short-Term Memory (LSTM) networks are more precise and successful than both unidirectional and bidirectional standard RNNs. Experimental results reveal the effectiveness of our proposed method on both in-domain and out-of-domain datasets. Experiments show improvements of about 1.2 per cent over standard RNNs.
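To make the tagging architecture concrete, a minimal bidirectional LSTM tagger might look like the sketch below; the vocabulary size, embedding width, sequence length, and 425-category tag set are illustrative assumptions rather than the paper's configuration.

```python
import tensorflow as tf

# Illustrative sizes: 10k-word vocabulary, 100-dim embeddings, 50-token
# sentences, and 425 CCG lexical categories (a commonly used cutoff).
vocab_size, embed_dim, max_len, n_supertags = 10000, 100, 50, 425

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),
    tf.keras.layers.Embedding(vocab_size, embed_dim, mask_zero=True),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(128, return_sequences=True)),  # past + future context
    # one supertag distribution per token position
    tf.keras.layers.Dense(n_supertags, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```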
5

Hoque, Mohammad Shamsul, Norziana Jamil, Nowshad Amin, Azril Azam Abdul Rahim, and Razali B. Jidin. "Forecasting number of vulnerabilities using long short-term neural memory network." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 5 (October 1, 2021): 4381. http://dx.doi.org/10.11591/ijece.v11i5.pp4381-4391.

Abstract:
Cyber-attacks are launched through the exploitation of existing vulnerabilities in software, hardware, systems and/or networks. Machine learning algorithms can be used to forecast the number of post-release vulnerabilities. Traditional neural networks work as a black box; hence it is unclear how past data points are used in inferring subsequent data points. However, the long short-term memory network (LSTM), a variant of the recurrent neural network, is able to address this limitation by introducing feedback loops into its network to retain and utilize past data points for future calculations. Moving on from the previous finding, we further enhance the results to predict the number of vulnerabilities by developing a time-series-based sequential model using a long short-term memory neural network. Specifically, this study developed a supervised machine learning model based on non-linear sequential time series forecasting with a long short-term memory neural network to predict the number of vulnerabilities for the three vendors having the highest number of vulnerabilities published in the national vulnerability database (NVD), namely Microsoft, IBM and Oracle. Our proposed model outperforms the existing models with a prediction result root mean squared error (RMSE) as low as 0.072.
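A hedged sketch of this kind of univariate sequence forecaster: slide a window over the (scaled) vulnerability-count series to build supervised pairs, fit an LSTM regressor, and report RMSE. Window size, layer width, and the stand-in data are assumptions.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window):
    # frame the series as (past `window` values) -> (next value) pairs
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None], series[window:]

series = np.random.rand(200).astype("float32")  # stand-in for scaled NVD counts
X, y = make_windows(series, window=12)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(12, 1)),
    tf.keras.layers.LSTM(50),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)
rmse = float(np.sqrt(model.evaluate(X, y, verbose=0)))  # RMSE, the paper's metric
```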
6

Xie, Qi, Gengguo Cheng, Xu Xu, and Zixuan Zhao. "Research Based on Stock Predicting Model of Neural Networks Ensemble Learning." MATEC Web of Conferences 232 (2018): 02029. http://dx.doi.org/10.1051/matecconf/201823202029.

Abstract:
Financial time series are a perennial focus of financial market analysis and research. In recent years, with the rapid development of artificial intelligence, machine learning and financial markets have become more and more closely linked. Artificial neural networks are commonly used to analyze and predict financial time series. Based on deep learning, six-layer long short-term memory neural networks were constructed. Eight long short-term memory neural networks were combined with the Bagging method from ensemble learning, and the resulting neural network ensemble prediction model was applied to the Chinese stock market. The experiment tested the Shanghai Composite Index, Shenzhen Composite Index, Shanghai Stock Exchange 50 Index, Shanghai-Shenzhen 300 Index, Medium and Small Plate Index and Gem Index over the period from January 4, 2012 to December 29, 2017. For the long short-term memory neural network ensemble learning model, accuracy is 58.5%, precision is 58.33%, recall is 73.5%, F1 value is 64.5%, and AUC value is 57.67%, which are better than those of the multilayer long short-term memory neural network model and reflect a good prediction outcome.
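The ensemble recipe (eight LSTM members combined by Bagging) can be sketched as follows; the member architecture, shapes, and data are illustrative assumptions, with only the member count taken from the abstract.

```python
import numpy as np
import tensorflow as tf

def make_member(window, n_features):
    m = tf.keras.Sequential([
        tf.keras.Input(shape=(window, n_features)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(index goes up)
    ])
    m.compile(optimizer="adam", loss="binary_crossentropy")
    return m

X = np.random.randn(500, 30, 5).astype("float32")   # stand-in feature windows
y = (np.random.rand(500) > 0.5).astype("float32")   # stand-in up/down labels

members = []
for _ in range(8):  # the paper combines eight LSTM networks
    idx = np.random.randint(0, len(X), size=len(X))  # bootstrap resample (bagging)
    m = make_member(30, 5)
    m.fit(X[idx], y[idx], epochs=2, verbose=0)
    members.append(m)

# bagged prediction: average member probabilities, then threshold
p = np.mean([m.predict(X, verbose=0) for m in members], axis=0)
y_hat = (p > 0.5).astype(int)
```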
7

Kumar, Naresh, Jatin Bindra, Rajat Sharma, and Deepali Gupta. "Air Pollution Prediction Using Recurrent Neural Network, Long Short-Term Memory and Hybrid of Convolutional Neural Network and Long Short-Term Memory Models." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4580–84. http://dx.doi.org/10.1166/jctn.2020.9283.

Abstract:
Air pollution prediction was not an easy task a few years back. With increasing computational power and the wide availability of datasets, the air pollution prediction problem has been solved to some extent. Inspired by deep learning models, three techniques for air pollution prediction are proposed in this paper. The models used include a recurrent neural network (RNN), long short-term memory (LSTM) and a hybrid combination of a convolutional neural network (CNN) and LSTM. These models are tested by comparing MSE loss on an air pollution test set from Belgium. The validation loss is 0.0045 for the RNN, 0.00441 for the LSTM and 0.0049 for the CNN-LSTM hybrid. The losses on the testing dataset for these models are 0.00088, 0.00441 and 0.0049, respectively.
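Of the three models compared, the hybrid is the least standard, so here is a hedged sketch of one common way to stack a 1-D convolution in front of an LSTM for series forecasting; filter counts and window length are assumptions, not the paper's settings.

```python
import tensorflow as tf

window, n_features = 48, 1  # assumed look-back window over pollutant readings

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),  # local patterns
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(32),          # longer-range temporal dependencies
    tf.keras.layers.Dense(1),          # next pollutant concentration
])
model.compile(optimizer="adam", loss="mse")  # MSE, the loss compared in the abstract
```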
8

Lihong, Dong, and Xie Qian. "Short-term electricity price forecast based on long short-term memory neural network." Journal of Physics: Conference Series 1453 (January 2020): 012103. http://dx.doi.org/10.1088/1742-6596/1453/1/012103.

9

Ghassaei, Sina, and Reza Ravanmehr. "Short-term Load Forecasting using Convolutional Neural Network and Long Short-term Memory." Iranian Electric Industry Journal of Quality and Productivity 10, no. 1 (April 1, 2021): 35–51. http://dx.doi.org/10.52547/ieijqp.10.1.35.

10

Wei, Xiaolu, Binbin Lei, Hongbing Ouyang, and Qiufeng Wu. "Stock Index Prices Prediction via Temporal Pattern Attention and Long-Short-Term Memory." Advances in Multimedia 2020 (December 10, 2020): 1–7. http://dx.doi.org/10.1155/2020/8831893.

Abstract:
This study attempts to predict stock index prices using multivariate time series analysis. The study’s motivation is based on the notion that datasets of stock index prices involve weak periodic patterns, long-term and short-term information, for which traditional approaches and current neural networks such as Autoregressive models and Support Vector Machine (SVM) may fail. This study applied Temporal Pattern Attention and Long-Short-Term Memory (TPA-LSTM) for prediction to overcome the issue. The results show that stock index prices prediction through the TPA-LSTM algorithm could achieve better prediction performance over traditional deep neural networks, such as recurrent neural network (RNN), convolutional neural network (CNN), and long and short-term time series network (LSTNet).

Dissertations on the topic "Long Short-Term Memory Neural Network":

1

Gers, Félix. "Long short-term memory in recurrent neural networks /." [S.l.] : [s.n.], 2001. http://library.epfl.ch/theses/?nr=2366.

2

Yangyang, Wen. "Sensor numerical prediction based on long-term and short-term memory neural network." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-39165.

Abstract:
Many sensor nodes are scattered throughout sensor networks; they are used in all aspects of life due to their small size, low power consumption, and multiple functions. With the advent of the Internet of Things, more small sensor devices will appear in our lives. Research on deep learning neural networks is generally based on large and medium-sized devices such as servers and computers; research on neural networks based on small Internet of Things devices is rarely heard of. In this study, Internet of Things devices are divided into three types (large, medium, and small) in terms of device size, running speed, and computing power. More vividly, I classify the laptop as a medium-sized device, a device with more computing power than the laptop, such as a server, as a large IoT (Internet of Things) device, and an IoT mobile device smaller than the laptop as a small IoT device. The purpose of this paper is to explore the feasibility, usefulness, and effectiveness of value-prediction research with long short-term memory neural network models on small IoT devices. In the control experiment on small and medium-sized Internet of Things devices, the following results were obtained: the error curves of the training and validation sets on small and medium-sized devices have the same downward trend, and similar accuracy and errors. In terms of time consumption, however, the small device takes about 12 times as long as the medium-sized device. Therefore, it can be concluded that LSTM (long short-term memory neural network) model value-prediction research based on small IoT devices is feasible, and the results are useful and effective. One of the main problems encountered when the LSTM model is extended to small devices is that it is time-consuming.
3

Shojaee, Ali B. S. "Bacteria Growth Modeling using Long-Short-Term-Memory Networks." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1617105038908441.

4

Bailey, Tony J. "Neuromorphic Architecture with Heterogeneously Integrated Short-Term and Long-Term Learning Paradigms." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1554217105047975.

5

van der Westhuizen, Jos. "Biological applications, visualizations, and extensions of the long short-term memory network." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/287476.

Abstract:
Sequences are ubiquitous in the domain of biology. One of the current best machine learning techniques for analysing sequences is the long short-term memory (LSTM) network. Owing to significant barriers to adoption in biology, focussed efforts are required to realize the use of LSTMs in practice. Thus, the aim of this work is to improve the state of LSTMs for biology, and we focus on biological tasks pertaining to physiological signals, peripheral neural signals, and molecules. This goal drives the three subplots in this thesis: biological applications, visualizations, and extensions. We start by demonstrating the utility of LSTMs for biological applications. On two new physiological-signal datasets, LSTMs were found to outperform hidden Markov models. LSTM-based models, implemented by other researchers, also constituted the majority of the best performing approaches on publicly available medical datasets. However, even if these models achieve the best performance on such datasets, their adoption will be limited if they fail to indicate when they are likely mistaken. Thus, we demonstrate on medical data that it is straightforward to use LSTMs in a Bayesian framework via dropout, providing model predictions with corresponding uncertainty estimates. Another dataset used to show the utility of LSTMs is a novel collection of peripheral neural signals. Manual labelling of this dataset is prohibitively expensive, and as a remedy, we propose a sequence-to-sequence model regularized by Wasserstein adversarial networks. The results indicate that the proposed model is able to infer which actions a subject performed based on its peripheral neural signals with reasonable accuracy. As these LSTMs achieve state-of-the-art performance on many biological datasets, one of the main concerns for their practical adoption is their interpretability. We explore various visualization techniques for LSTMs applied to continuous-valued medical time series and find that learning a mask to optimally delete information in the input provides useful interpretations. Furthermore, we find that the input features looked for by the LSTM align well with medical theory. For many applications, extensions of the LSTM can provide enhanced suitability. One such application is drug discovery -- another important aspect of biology. Deep learning can aid drug discovery by means of generative models, but they often produce invalid molecules due to their complex discrete structures. As a solution, we propose a version of active learning that leverages the sequential nature of the LSTM along with its Bayesian capabilities. This approach enables efficient learning of the grammar that governs the generation of discrete-valued sequences such as molecules. Efficiency is achieved by reducing the search space from one over sequences to one over the set of possible elements at each time step -- a much smaller space. Having demonstrated the suitability of LSTMs for biological applications, we seek a hardware efficient implementation. Given the success of the gated recurrent unit (GRU), which has two gates, a natural question is whether any of the LSTM gates are redundant. Research has shown that the forget gate is one of the most important gates in the LSTM. Hence, we propose a forget-gate-only version of the LSTM -- the JANET -- which outperforms both the LSTM and some of the best contemporary models on benchmark datasets, while also reducing computational cost.
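The forget-gate-only idea behind the JANET can be sketched in a few lines. The following NumPy code is a paraphrase of the concept under my own notation and initialization choices, not a line-by-line reproduction of the thesis equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def janet_step(x, h_prev, Wf, Wg, bf, bg):
    """One step of a forget-gate-only recurrent cell: the forget gate alone
    interpolates between the previous state and a candidate update."""
    xh = np.concatenate([x, h_prev])
    f = sigmoid(Wf @ xh + bf)           # forget gate
    g = np.tanh(Wg @ xh + bg)           # candidate state
    return f * h_prev + (1.0 - f) * g   # gated interpolation

rng = np.random.default_rng(1)
D, H = 4, 3
Wf = rng.standard_normal((H, D + H)) * 0.1
Wg = rng.standard_normal((H, D + H)) * 0.1
# positive forget bias encourages remembering early in training
h = np.zeros(H)
for t in range(10):
    h = janet_step(rng.standard_normal(D), h, Wf, Wg, np.ones(H), np.zeros(H))
```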
6

Paschou, Michail. "ASIC implementation of LSTM neural network algorithm." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254290.

Abstract:
LSTM neural networks have been used for speech recognition, image recognition and other artificial intelligence applications for many years. Most applications perform the LSTM algorithm and the required calculations on cloud computers. Off-line solutions include the use of FPGAs and GPUs, but the most promising solutions are ASIC accelerators designed for this purpose only. This report presents an ASIC design capable of performing multiple iterations of the LSTM algorithm on a unidirectional neural network architecture without peepholes. The proposed design provides arithmetic-level parallelism options, as blocks are instantiated based on parameters. The internal structure of the design implements pipelined, parallel or serial solutions depending on which is optimal in each case. The implications of these decisions are discussed in detail in the report. The design process is described in detail, and an evaluation of the design is presented to measure the accuracy and error of the design output. This thesis work resulted in a complete synthesizable ASIC design implementing an LSTM layer, a Fully Connected layer and a Softmax layer, which can perform classification of data based on trained weight matrices and bias vectors. The design primarily uses a 16-bit fixed-point format with 5 integer and 11 fractional bits, but increased-precision representations are used in some blocks to reduce output error. Additionally, a verification environment has been designed that is capable of performing simulations, evaluating the design output by comparing it with results produced from performing the same operations with 64-bit floating-point precision on a SystemVerilog testbench, and measuring the encountered error. The results concerning the accuracy and the error margin of the design output are presented in this report. The design went through logic and physical synthesis and successfully resulted in a functional netlist for every tested configuration. Timing, area and power measurements on the generated netlists of various configurations of the design show consistency and are reported here.
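As an aside on the number format: a 16-bit fixed-point representation with 5 integer and 11 fractional bits corresponds to scaling real values by 2^11 and storing them as signed 16-bit integers. A small illustrative sketch (my own, not from the thesis) of the quantization and its error:

```python
import numpy as np

FRAC_BITS = 11
SCALE = 1 << FRAC_BITS               # 2048; one LSB represents 1/2048
LO, HI = -(1 << 15), (1 << 15) - 1   # signed 16-bit range, about +/-16.0 here

def to_fixed(x):
    return np.clip(np.round(x * SCALE), LO, HI).astype(np.int16)

def to_float(q):
    return q.astype(np.float64) / SCALE

w = np.random.randn(1000)            # stand-in for trained weights
err = np.abs(to_float(to_fixed(w)) - w)
print(f"max quantization error: {err.max():.6f}")  # ~1/(2*SCALE) within range
```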
7

Nawaz, Sabeen. "Analysis of Transactional Data with Long Short-Term Memory Recurrent Neural Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281282.

Abstract:
An issue authorities and banks face is fraud related to payments and transactions, where huge monetary losses occur to a party or where money laundering schemes are carried out. Previous work in the field of machine learning for fraud detection has addressed the issue as a supervised learning problem. In this thesis, we propose a model which can be used in a fraud detection system with transactions and payments that are unlabeled. The proposed model is a Long Short-term Memory in an auto-encoder decoder network (LSTM-AED) which is trained and tested on transformed data. The data is transformed by reducing it to principal components and clustering it with K-means. The model is trained to reconstruct the sequence with high accuracy. Our results indicate that the LSTM-AED performs better than a random sequence-generating process in learning and reconstructing a sequence of payments. We also found that a huge loss of information occurs in the pre-processing stages.
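A minimal sketch of an LSTM auto-encoder-decoder of the kind described, assuming invented sequence lengths and layer sizes (the thesis's exact architecture is not given here):

```python
import numpy as np
import tensorflow as tf

seq_len, n_components = 20, 5  # assumed: payments reduced to 5 principal components

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len, n_components)),
    tf.keras.layers.LSTM(32),                       # encoder: compress to one vector
    tf.keras.layers.RepeatVector(seq_len),          # feed it to every decoder step
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_components)),
])
model.compile(optimizer="adam", loss="mse")

X = np.random.randn(256, seq_len, n_components).astype("float32")
model.fit(X, X, epochs=2, verbose=0)  # reconstruction target is the input itself
# a high reconstruction error on a new sequence can flag it as anomalous
```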
8

Gustafsson, Anton, and Julian Sjödal. "Energy Predictions of Multiple Buildings using Bi-directional Long short-term Memory." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-43552.

Abstract:
The process of energy consumption monitoring of a building is time-consuming. Therefore, a feasible approach using transfer learning is presented to decrease the time necessary to extract the required large dataset. The technique applies a bidirectional long short-term memory recurrent neural network using sequence-to-sequence prediction. The idea involves a training phase that extracts information and patterns of a building that is presented with a reasonably sized dataset. The validation phase uses a dataset that is not sufficient in size. This dataset was acquired through a related paper; the results can therefore be validated accordingly. The conducted experiments include four cases that involve different strategies in the training and validation phases and percentages of fine-tuning. Our proposed model generated better scores in terms of prediction performance compared to the related paper.
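The pre-train-then-fine-tune workflow the abstract outlines could be sketched as follows; every shape, the frozen/trainable split, and the variable names (X_source, X_target, etc.) are assumptions for illustration.

```python
import tensorflow as tf

window, n_features = 24, 4  # assumed: 24 time steps of 4 building signals

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1),  # next-step energy consumption
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_source, y_source, epochs=50)   # training phase: data-rich building

model.layers[0].trainable = False            # freeze the recurrent encoder
model.compile(optimizer="adam", loss="mse")  # recompile after changing trainability
# model.fit(X_target, y_target, epochs=10)   # fine-tune on the scarce target data
```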
9

Valluru, Aravind-Deshikh. "Realization of LSTM Based Cognitive Radio Network." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1538697/.

Abstract:
This thesis presents the realization of an intelligent cognitive radio network that uses a long short-term memory (LSTM) neural network for sensing and predicting the spectrum activity at each instant of time. The simulation is done using Python and GNU Radio. The implementation is done using GNU Radio and Universal Software Radio Peripherals (USRP). Simulation results show that the confidence factor of opportunistic users not causing interference to licensed users of the spectrum is 98.75%. The implementation results demonstrate the high reliability of the LSTM-based cognitive radio network.
10

Jaffe, Alexander Scott. "Long short-term memory recurrent neural networks for classification of acute hypotensive episodes." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113146.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 37-39).
An acute hypotensive episode (AHE) is a life-threatening condition during which a patient's mean arterial blood pressure drops below 60 mmHg for a period of 30 minutes. This thesis presents the development and evaluation of a series of long short-term memory recurrent neural network (LSTM RNN) models which predict whether a patient will experience an AHE based on a time series of mean arterial blood pressure (ABP). A 2-layer, 128-hidden-unit LSTM RNN trained with RMSprop and dropout regularization achieves a sensitivity of 78% and a specificity of 98%.
by Alexander Scott Jaffe.
M. Eng.
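The abstract above specifies the architecture closely enough to sketch: two stacked 128-unit LSTM layers with dropout, trained with RMSprop for binary classification. The input window length and dropout rate below are assumptions, not values from the thesis.

```python
import tensorflow as tf

window = 60  # assumed: minutes of mean ABP preceding the prediction point

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(128, return_sequences=True, dropout=0.2),
    tf.keras.layers.LSTM(128, dropout=0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(AHE)
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy",
              metrics=["accuracy"])
```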

Books on the topic "Long Short-Term Memory Neural Network":

1

Dienel, Samuel J., and David A. Lewis. Cellular Mechanisms of Psychotic Disorders. Edited by Dennis S. Charney, Eric J. Nestler, Pamela Sklar, and Joseph D. Buxbaum. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780190681425.003.0018.

Abstract:
Cognitive dysfunction in schizophrenia, including disturbances in working memory, is a core feature of the illness and the best predictor of long-term functional outcome. Working memory relies on neural network oscillations in the prefrontal cortex. Gamma-aminobutyric acid (GABA) neurons in the prefrontal cortex, which are crucial for this oscillatory activity, exhibit a number of alterations in individuals diagnosed with schizophrenia. These GABA neuron disturbances may be secondary to upstream alterations in excitatory pyramidal cells in the prefrontal cortex. Together, these findings suggest both a neural substrate for working memory impairments in schizophrenia and therapeutic targets for improving functional outcomes in this patient population.
2

Brain Theory from a Circuits and Systems Perspective: How Electrical Science Explains Neurocircuits, Neurosystems, and Qubits. Springer-Verlag New York Inc., 2013.

3

Menon, Vinod. Arithmetic in the Child and Adult Brain. Edited by Roi Cohen Kadosh and Ann Dowker. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199642342.013.041.

Abstract:
This review examines brain and cognitive processes involved in arithmetic. I take a distinctly developmental perspective because neither the cognitive nor the brain processes involved in arithmetic can be adequately understood outside the framework of how developmental processes unfold. I review four basic neurocognitive processes involved in arithmetic, highlighting (1) the role of core dorsal parietal and ventral temporal-occipital cortex systems that form basic building blocks from which number form and quantity representations are constructed in the brain; (2) procedural and working memory systems anchored in the basal ganglia and frontoparietal circuits, which create short-term representations that allow manipulation of multiple discrete quantities over several seconds; (3) episodic and semantic memory systems anchored in the medial and lateral temporal cortex that play an important role in long-term memory formation and generalization beyond individual problem attributes; and (4) prefrontal cortex control processes that guide allocation of attention resources and retrieval of facts from memory in the service of goal-directed problem solving. Next I examine arithmetic in the developing brain, first focusing on studies comparing arithmetic in children and adults, and then on studies examining development in children during critical stages of skill acquisition. I highlight neurodevelopmental models that go beyond parietal cortex regions involved in number processing, and demonstrate that brain systems and circuits in the developing child brain are clearly not the same as those seen in more mature adult brains sculpted by years of learning. The implications of these findings for a more comprehensive view of the neural basis of arithmetic in both children and adults are discussed.
4

Koch, Christof. Biophysics of Computation. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780195104912.001.0001.

Abstract:
Neural network research often builds on the fiction that neurons are simple linear threshold units, completely neglecting the highly dynamic and complex nature of synapses, dendrites, and voltage-dependent ionic currents. Biophysics of Computation: Information Processing in Single Neurons challenges this notion, using richly detailed experimental and theoretical findings from cellular biophysics to explain the repertoire of computational functions available to single neurons. The author shows how individual nerve cells can multiply, integrate, or delay synaptic inputs and how information can be encoded in the voltage across the membrane, in the intracellular calcium concentration, or in the timing of individual spikes. Key topics covered include the linear cable equation; cable theory as applied to passive dendritic trees and dendritic spines; chemical and electrical synapses and how to treat them from a computational point of view; nonlinear interactions of synaptic input in passive and active dendritic trees; the Hodgkin-Huxley model of action potential generation and propagation; phase space analysis; linking stochastic ionic channels to membrane-dependent currents; calcium and potassium currents and their role in information processing; the role of diffusion, buffering and binding of calcium, and other messenger systems in information processing and storage; short- and long-term models of synaptic plasticity; simplified models of single cells; stochastic aspects of neuronal firing; the nature of the neuronal code; and unconventional models of sub-cellular computation. Biophysics of Computation: Information Processing in Single Neurons serves as an ideal text for advanced undergraduate and graduate courses in cellular biophysics, computational neuroscience, and neural networks, and will appeal to students and professionals in neuroscience, electrical and computer engineering, and physics.
5

Nobre, Anna C. (Kia), and M.-Marsel Mesulam. Large-scale Networks for Attentional Biases. Edited by Anna C. (Kia) Nobre and Sabine Kastner. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199675111.013.035.

Abstract:
Selective attention is essential for all aspects of cognition. Using the paradigmatic case of visual spatial attention, we present a theoretical account proposing the flexible control of attention through coordinated activity across a large-scale network of brain areas. It reviews evidence supporting top-down control of visual spatial attention by a distributed network, and describes principles emerging from a network approach. Stepping beyond the paradigm of visual spatial attention, we consider attentional control mechanisms more broadly. The chapter suggests that top-down biasing mechanisms originate from multiple sources and can be of several types, carrying information about receptive-field properties such as spatial locations or features of items; but also carrying information about properties that are not easily mapped onto receptive fields, such as the meanings or timings of items. The chapter considers how selective biases can operate on multiple slates of information processing, not restricted to the immediate sensory-motor stream, but also operating within internalized, short-term and long-term memory representations. Selective attention appears to be a general property of information processing systems rather than an independent domain within our cognitive make-up.

Book chapters on the topic "Long Short-Term Memory Neural Network":

1

Xie, Zongxia, and Hao Wen. "Composite Quantile Regression Long Short-Term Memory Network." In Artificial Neural Networks and Machine Learning – ICANN 2019: Text and Time Series, 513–24. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30490-4_41.

2

Mehdipour Ghazi, Mostafa, Mads Nielsen, Akshay Pai, Marc Modat, M. Jorge Cardoso, Sébastien Ourselin, and Lauge Sørensen. "On the Initialization of Long Short-Term Memory Networks." In Neural Information Processing, 275–86. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36708-4_23.

3

Smagulova, Kamilya, and Alex Pappachen James. "Overview of Long Short-Term Memory Neural Networks." In Modeling and Optimization in Science and Technologies, 139–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14524-8_11.

4

Wu, Xing, Zhikang Du, Mingyu Zhong, Shuji Dai, and Yazhou Liu. "Chinese Lyrics Generation Using Long Short-Term Memory Neural Network." In Advances in Artificial Intelligence: From Theory to Practice, 419–27. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60045-1_43.

5

Chantamit-o-pas, Pattanapong, and Madhu Goyal. "Long Short-Term Memory Recurrent Neural Network for Stroke Prediction." In Machine Learning and Data Mining in Pattern Recognition, 312–23. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96136-1_25.

6

Prakash, N., and G. Sumaiya Farzana. "Short Term Price Forecasting of Horticultural Crops Using Long Short Term Memory Neural Network." In Learning and Analytics in Intelligent Systems, 111–18. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46943-6_12.

7

Li, Lingfeng, Yuanping Nie, Weihong Han, and Jiuming Huang. "A Multi-attention-Based Bidirectional Long Short-Term Memory Network for Relation Extraction." In Neural Information Processing, 216–27. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70139-4_22.

8

Szadkowski, Rudolf J., Jan Drchal, and Jan Faigl. "Terrain Classification with Crawling Robot Using Long Short-Term Memory Network." In Artificial Neural Networks and Machine Learning – ICANN 2018, 771–80. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01424-7_75.

9

Liu, Yunfei, Jing Li, and Yi Zhuang. "Instruction SDC Vulnerability Prediction Using Long Short-Term Memory Neural Network." In Advanced Data Mining and Applications, 140–49. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05090-0_12.

10

Mishra, Abhinav, Kshitij Tripathi, Lakshay Gupta, and Krishna Pratap Singh. "Long Short-Term Memory Recurrent Neural Network Architectures for Melody Generation." In Advances in Intelligent Systems and Computing, 41–55. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1595-4_4.


Conference papers on the topic "Long Short-Term Memory Neural Network":

1

Zhuo, Qinzheng, Qianmu Li, Han Yan, and Yong Qi. "Long short-term memory neural network for network traffic prediction." In 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE). IEEE, 2017. http://dx.doi.org/10.1109/iske.2017.8258815.

2

Tian, Yongxue, and Li Pan. "Predicting Short-Term Traffic Flow by Long Short-Term Memory Recurrent Neural Network." In 2015 IEEE International Conference on Smart City/SocialCom/SustainCom (SmartCity). IEEE, 2015. http://dx.doi.org/10.1109/smartcity.2015.63.

3

Abbas, Zainab, Ahmad Al-Shishtawy, Sarunas Girdzijauskas, and Vladimir Vlassov. "Short-Term Traffic Prediction Using Long Short-Term Memory Neural Networks." In 2018 IEEE International Congress on Big Data (BigData Congress). IEEE, 2018. http://dx.doi.org/10.1109/bigdatacongress.2018.00015.

4

Qiao, Songlin, Rencheng Sun, Guangpeng Fan, and Ji Liu. "Short-term traffic flow forecast based on parallel long short-term memory neural network." In 2017 8th IEEE International Conference on Software Engineering and Service Science (ICSESS). IEEE, 2017. http://dx.doi.org/10.1109/icsess.2017.8342908.

5

Daneshvar, Mohammad, and Hadi Veisi. "Persian phoneme recognition using long short-term memory neural network." In 2016 Eighth International Conference on Information and Knowledge Technology (IKT). IEEE, 2016. http://dx.doi.org/10.1109/ikt.2016.7777777.

6

Nadda, Wanchaloem, Waraporn Boonchieng, and Ekkarat Boonchieng. "Dengue Fever Detection using Long Short-term Memory Neural Network." In 2020 17th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, 2020. http://dx.doi.org/10.1109/ecti-con49241.2020.9158315.

7

Alsharif, Ouais, Tom Ouyang, Francoise Beaufays, Shumin Zhai, Thomas Breuel, and Johan Schalkwyk. "Long short term memory neural network for keyboard gesture decoding." In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178336.

8

Lezhenin, Iurii, Natalia Bogach, and Evgeny Pyshkin. "Urban Sound Classification using Long Short-Term Memory Neural Network." In 2019 Federated Conference on Computer Science and Information Systems. IEEE, 2019. http://dx.doi.org/10.15439/2019f185.

9

Arisoy, Ebru, and Murat Saraçlar. "Multi-stream long short-term memory neural network language model." In Interspeech 2015. ISCA: ISCA, 2015. http://dx.doi.org/10.21437/interspeech.2015-339.

10

Howard, Emma R., Bicky A. Marquez, and Bhavin J. Shastri. "Photonic Long-Short Term Memory Neural Networks with Analog Memory." In 2020 IEEE Photonics Conference (IPC). IEEE, 2020. http://dx.doi.org/10.1109/ipc47351.2020.9252216.


Reports of organizations on the topic "Long Short-Term Memory Neural Network":

1

Ly, Racine, Fousseini Traore, and Khadim Dia. Forecasting commodity prices using long-short-term memory neural networks. Washington, DC: International Food Policy Research Institute, 2021. http://dx.doi.org/10.2499/p15738coll2.134265.

2

Ankel, Victoria, Stella Pantopoulou, Matthew Weathered, Darius Lisowski, Anthonie Cilliers, and Alexander Heifetz. One-Step Ahead Prediction of Thermal Mixing Tee Sensors with Long Short Term Memory (LSTM) Neural Networks. Office of Scientific and Technical Information (OSTI), December 2020. http://dx.doi.org/10.2172/1760289.

3

Vold, Andrew. Improving Physics Based Electron Neutrino Appearance Identification with a Long Short-Term Memory Network. Office of Scientific and Technical Information (OSTI), January 2018. http://dx.doi.org/10.2172/1529330.

