
Dissertations / Theses on the topic 'Simple recurrent neural network'


Consult the top 50 dissertations / theses for your research on the topic 'Simple recurrent neural network.'


1

Parfitt, Shan Helen. "Explorations in anaphora resolution in artificial neural networks : implications for nativism." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267247.

2

Rodriguez, Paul Fabian. "Mathematical foundations of simple recurrent networks /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9935464.

3

Jacobsson, Henrik. "A Comparison of Simple Recurrent and Sequential Cascaded Networks for Formal Language Recognition." Thesis, University of Skövde, Department of Computer Science, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-391.

Abstract:
Two classes of recurrent neural network models are compared in this report: simple recurrent networks (SRNs) and sequential cascaded networks (SCNs), which are first- and second-order networks respectively. The comparison aims to describe and analyse the behaviour of the networks so that the differences between them become clear. A theoretical analysis, using techniques from dynamical systems theory (DST), shows that the second-order network has a wider range of possible dynamical behaviours than the first-order network. It also reveals that the second-order network can interpret its context with an input-dependent function in the output nodes. The experiments were based on training with backpropagation (BP) and an evolutionary algorithm (EA) on the AnBn grammar, which requires the ability to count. This analysis revealed differences between the two training regimes tested and also between the performance of the two types of networks. The EA was found to be far more reliable than BP in this domain. Another important finding from the experiments was that although the SCN had more possibilities than the SRN in how it could solve the problem, these were not exploited in the domain tested in this project.
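The first-order versus second-order distinction in this abstract can be made concrete. In a first-order SRN (Elman-style), the next hidden state is a squashed weighted sum of the current input and the previous hidden state; a second-order SCN instead multiplies input and context terms together. A minimal pure-Python sketch of the first-order update (all names and weights here are illustrative, not from the thesis):

```python
import math

def srn_step(x, h_prev, W_in, W_rec, b):
    """One step of a first-order simple recurrent (Elman) network:
    h_t = tanh(W_in @ x_t + W_rec @ h_{t-1} + b)."""
    h = []
    for i in range(len(h_prev)):
        s = b[i]
        s += sum(W_in[i][j] * x[j] for j in range(len(x)))
        s += sum(W_rec[i][j] * h_prev[j] for j in range(len(h_prev)))
        h.append(math.tanh(s))
    return h

def run_sequence(xs, h0, W_in, W_rec, b):
    """Feed a whole input sequence through the recurrence; the hidden
    state is the network's only memory of the earlier symbols."""
    h = h0
    for x in xs:
        h = srn_step(x, h, W_in, W_rec, b)
    return h
```

A second-order network would replace the two separate sums with products of the form `W[i][j][k] * x[j] * h_prev[k]`, which is what gives the SCN its input-dependent interpretation of context.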
4

Tekin, Mim Kemal. "Vehicle Path Prediction Using Recurrent Neural Network." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166134.

Abstract:
Vehicle path prediction can be used to support Advanced Driver Assistance Systems (ADAS), which cover technologies such as autonomous braking and adaptive cruise control. In this thesis, the vehicle's future path, parameterized as 5 coordinates along the path, is predicted using only visual data collected by a front vision sensor. This approach provides cheaper application opportunities without using additional sensors. The predictions are made by deep convolutional neural networks (CNNs), and the goal of the project is to use recurrent neural networks (RNNs) and to investigate the benefits of adding recurrence to the task. Two different approaches are used for the models. The first is a single-frame approach that makes predictions using only one image frame as input and predicts the future location points of the car; it serves as the baseline model. The second is a sequential approach that lets the network use historical information from previous image frames in order to predict the vehicle's future path for the current frame. With this approach, the effect of using recurrence is investigated. Moreover, uncertainty is important for model reliability: having small uncertainty in most predictions, and high uncertainty in situations unfamiliar to the model, increases the model's usefulness. In this project, the uncertainty estimation follows a method that can be applied to deep learning models and uses the same models defined by the first two approaches. Finally, the approaches are evaluated by the mean absolute error and by defining two reasonable tolerance levels for the distance between the predicted path and the ground truth path: a strict tolerance level and a more relaxed one.
When using the strict, distance-based tolerance level on test data, 36% of the predictions are accepted for the single-frame model and 48% for the sequential model, while 27% and 13% are accepted for the single-frame and sequential uncertainty models respectively. When using the relaxed tolerance level on test data, 60% of the predictions are accepted for the single-frame model and 67% for the sequential model, while 65% and 53% are accepted for the single-frame and sequential uncertainty models. Furthermore, using the stored information for each sequence, the methods are evaluated under different conditions such as day/night, road type and road cover. As a result, the sequential model outperforms the others in the majority of the evaluation results.
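The tolerance-level evaluation can be illustrated with a small sketch. The thesis does not spell out the exact acceptance rule here, so this assumes a predicted path (five points, per the abstract) is accepted when every point lies within the tolerance of its ground-truth counterpart:

```python
import math

def accepted_fraction(pred_paths, true_paths, tol):
    """Share of paths accepted under a tolerance level: a predicted
    path passes if every one of its points is within `tol` (in the
    same units as the coordinates) of the matching ground-truth point."""
    accepted = 0
    for pred, truth in zip(pred_paths, true_paths):
        if max(math.dist(p, t) for p, t in zip(pred, truth)) <= tol:
            accepted += 1
    return accepted / len(pred_paths)
```

A strict versus relaxed evaluation then differs only in the value of `tol` passed in.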
5

Wen, Tsung-Hsien. "Recurrent neural network language generation for dialogue systems." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275648.

Abstract:
Language is the principal medium for ideas, while dialogue is the most natural and effective way for humans to interact with and access information from machines. Natural language generation (NLG) is a critical component of spoken dialogue systems, and it has a significant impact on usability and perceived quality. Many commonly used NLG systems employ rules and heuristics, which tend to generate inflexible and stylised responses without the natural variation of human language. The frequent repetition of identical output forms can quickly make dialogue tedious for most real-world users. Additionally, these rules and heuristics are not scalable and hence not trivially extensible to other domains or languages. A statistical approach to language generation can learn language decisions directly from data without relying on hand-coded rules or heuristics, which brings scalability and flexibility to NLG. Statistical models also provide an opportunity to learn in-domain human colloquialisms and cross-domain model adaptations. A robust and quasi-supervised NLG model is proposed in this thesis. The model leverages a Recurrent Neural Network (RNN)-based surface realiser and a gating mechanism applied to the input semantics, and is motivated by the Long Short-Term Memory (LSTM) network. The RNN-based surface realiser and gating mechanism use a neural network to learn end-to-end language generation decisions from input dialogue act and sentence pairs; the model also integrates sentence planning and surface realisation into a single optimisation problem. This single optimisation not only bypasses the costly intermediate linguistic annotations but also generates more natural and human-like responses. Furthermore, a domain adaptation study shows that the proposed model can be readily adapted and extended to new dialogue domains via a proposed recipe.
Continuing the success of end-to-end learning, the second part of the thesis speculates on building an end-to-end dialogue system by framing it as a conditional generation problem. The proposed model encapsulates a belief tracker with a minimal state representation and a generator that takes the dialogue context to produce responses. These features suggest comprehension and fast learning. The proposed model is capable of understanding requests and accomplishing tasks after training on only a few hundred human-human dialogues. A complementary Wizard-of-Oz data collection method is also introduced to facilitate the collection of human-human conversations from online workers. The results demonstrate that the proposed model can talk to human judges naturally, without any difficulty, for a sample application domain. In addition, the results also suggest that the introduction of a stochastic latent variable can help the system model intrinsic variation in communicative intention much better.
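The gating mechanism applied to input semantics can be sketched schematically: each dialogue-act feature is attenuated by a sigmoid gate before it reaches the recurrent surface realiser. The function and variable names below are hypothetical, and this is a simplified reading of the mechanism, not the thesis implementation:

```python
import math

def gate_semantics(da_features, gate_logits):
    """Attenuate each input dialogue-act feature by a sigmoid gate
    (a value in (0, 1)) before it enters the recurrent realiser.
    A gate logit of 0 halves the feature; large logits pass it through."""
    return [f / (1.0 + math.exp(-g))
            for f, g in zip(da_features, gate_logits)]
```

In the full model the gate values would be produced by the network itself and updated across time steps, so that semantic slots are "consumed" as they are realised in the output sentence.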
6

He, Jian. "Adaptive power system stabilizer based on recurrent neural network." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0008/NQ38471.pdf.

7

Gangireddy, Siva Reddy. "Recurrent neural network language models for automatic speech recognition." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28990.

Abstract:
The goal of this thesis is to advance the use of recurrent neural network language models (RNNLMs) for large vocabulary continuous speech recognition (LVCSR). RNNLMs are currently state of the art and have been shown to consistently reduce the word error rates (WERs) of LVCSR tasks compared to other language models. In this thesis we propose various advances to RNNLMs: improved learning procedures, enhanced context, and adaptation. We learned better parameters through a novel pre-training approach and enhanced the context using prosody and syntactic features. We present a pre-training method for RNNLMs in which the output weights of a feed-forward neural network language model (NNLM) are shared with the RNNLM. This is accomplished by first fine-tuning the weights of the NNLM, which are then used to initialise the output weights of an RNNLM with the same number of hidden units. To investigate the effectiveness of the proposed pre-training method, we carried out text-based experiments on the Penn Treebank Wall Street Journal data and ASR experiments on the TED lectures data. Across the experiments, we observe small but significant improvements in perplexity (PPL) and ASR WER. Next, we present unsupervised adaptation of RNNLMs. We adapted the RNNLMs to a target domain (topic, genre or television programme) at test time using ASR transcripts from first-pass recognition. We investigated two approaches to adapting the RNNLMs. In the first approach the forward-propagating hidden activations are scaled, known as learning hidden unit contributions (LHUC). In the second approach we adapt all parameters of the RNNLM. We evaluated the adapted RNNLMs by reporting WERs on multi-genre broadcast speech data. We observe small (on average 0.1% absolute) but significant improvements in WER compared to a strong unadapted RNNLM. Finally, we present the context enhancement of RNNLMs using prosody and syntactic features.
The prosody features were computed from the acoustics of the context words, and the syntactic features were derived from the surface form of the words in the context. We trained the RNNLMs with word duration, pause duration, final phone duration, syllable duration, syllable F0, part-of-speech tag and Combinatory Categorial Grammar (CCG) supertag features. The proposed context-enhanced RNNLMs were evaluated by reporting PPL and WER on two speech recognition tasks, Switchboard and TED lectures. We observed substantial improvements in PPL (5% to 15% relative) and small but significant improvements in WER (0.1% to 0.5% absolute).
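The LHUC adaptation mentioned above can be sketched concretely. In the usual LHUC formulation, each hidden unit's activation is rescaled by a bounded, per-unit amplitude learned on the adaptation data; the exact parameterisation used in the thesis may differ:

```python
import math

def lhuc_scale(hidden, r):
    """Learning Hidden Unit Contributions: rescale each hidden
    activation by an adapted amplitude a_i = 2 * sigmoid(r_i), which
    lies in (0, 2); r_i = 0 leaves the unit unchanged."""
    return [2.0 / (1.0 + math.exp(-ri)) * hi for hi, ri in zip(hidden, r)]
```

Only the small vector `r` is learned at adaptation time, which is what makes LHUC attractive for unsupervised adaptation on noisy first-pass transcripts.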
8

Amartur, Sundar C. "Competitive recurrent neural network model for clustering of multispectral data." Case Western Reserve University School of Graduate Studies / OhioLINK, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=case1058445974.

9

Bopaiah, Jeevith. "A recurrent neural network architecture for biomedical event trigger classification." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/73.

Abstract:
A “biomedical event” is a broad term used to describe the roles and interactions between entities (such as proteins, genes and cells) in a biological system. The task of biomedical event extraction aims at identifying and extracting these events from unstructured texts. An important component in the early stage of the task is biomedical trigger classification, which involves identifying and classifying words or phrases that indicate an event. In this thesis, we present our work on biomedical trigger classification developed using the multi-level event extraction dataset. We restrict the scope of our classification to 19 biomedical event types grouped under four broad categories: Anatomical, Molecular, General and Planned. While most existing approaches are based on traditional machine learning algorithms that require extensive feature engineering, our model relies on neural networks to implicitly learn important features directly from the text. We use natural language processing techniques to transform the text into vectorized inputs that can be used in a neural network architecture. To the best of our knowledge, this is the first time neural attention strategies have been explored in the area of biomedical trigger classification. Our best results were obtained from an ensemble of 50 models, which produced a micro F-score of 79.82%, an improvement of 1.3% over the previous best score.
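The reported micro F-score pools counts across all event classes before computing precision and recall, as this small sketch (illustrative, not the thesis code) shows:

```python
def micro_f1(tp, fp, fn):
    """Micro-averaged F1: pool true positives, false positives and
    false negatives across all event classes, then compute precision,
    recall and their harmonic mean."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because the counts are pooled, frequent event types dominate the score, unlike macro averaging, which weights every class equally.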
10

Ljungehed, Jesper. "Predicting Customer Churn Using Recurrent Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210670.

Abstract:
Churn prediction is used to identify customers that are becoming less loyal and is an important tool for companies that want to stay competitive in a rapidly growing market. In retail, a dynamic definition of churn is needed to identify churners correctly. Customer Lifetime Value (CLV) is the monetary value of a customer relationship; no change in CLV for a given customer indicates a decrease in loyalty. This thesis proposes a novel approach to churn prediction. The proposed model uses a recurrent neural network to identify churners based on Customer Lifetime Value time series regression. The results show that the model performs better than random. This thesis also investigated the use of the k-means algorithm as a replacement for a rule-extraction algorithm. The k-means algorithm contributed to a more comprehensive analytical context for the churn prediction of the proposed model.
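Framing churn prediction as CLV time series regression implies a supervised windowing step roughly like the following sketch (the window and horizon choices are hypothetical, not taken from the thesis):

```python
def make_windows(clv_series, window, horizon):
    """Turn one customer's CLV series into (input window, target) pairs
    for recurrent regression: each window of past values is paired with
    the value `horizon` steps after the window ends."""
    pairs = []
    for i in range(len(clv_series) - window - horizon + 1):
        pairs.append((clv_series[i:i + window],
                      clv_series[i + window + horizon - 1]))
    return pairs
```

A flat predicted trajectory for a customer would then signal the stagnating CLV that the abstract treats as a loyalty decrease.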
11

Dimopoulos, Konstantinos Panagiotis. "Non-linear control strategies using input-state network models." Thesis, University of Reading, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340027.

12

Poormehdi Ghaemmaghami, Masoumeh. "Tracking of Humans in Video Stream Using LSTM Recurrent Neural Network." Thesis, KTH, Teoretisk datalogi, TCS, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217495.

Abstract:
In this master's thesis, the problem of tracking humans in video streams using deep learning is examined. We use spatially supervised recurrent convolutional neural networks for visual human tracking. In this method, the recurrent network uses both the history of locations and the visual features from the deep neural networks, and tracking is performed based on the detection results. We concatenate the locations of detected bounding boxes with high-level visual features produced by convolutional networks and then predict the tracking bounding box for the next frames. Because a video contains a continuous sequence of frames, we chose a method that uses information from the history of frames to achieve robust tracking in visually challenging cases such as occlusion, motion blur and fast movement. Long Short-Term Memory (LSTM) is a kind of recurrent neural network that is well suited to this purpose. Instead of the binary classification commonly used in deep learning based tracking methods, we use regression for direct prediction of the tracking locations. Our goal is to test the method on real videos recorded by a head-mounted camera, so our test videos are very challenging and contain fast movements, motion blur, occlusions, etc. Owing to the limitations of the training data set, which is spatially imbalanced, the method has difficulty tracking humans who are in the corners of the image, but in other challenging cases the proposed tracking method worked well.
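The idea of feeding the tracker a bounded history of concatenated detections and CNN features can be sketched as follows; the feature layout and history length are illustrative assumptions, not the thesis code:

```python
from collections import deque

def new_history(maxlen=6):
    """Bounded buffer of past observations for the recurrent tracker."""
    return deque(maxlen=maxlen)

def observe(history, bbox, cnn_features):
    """Concatenate the detected bounding box (x, y, w, h) with the
    frame's CNN feature vector and append it to the history an
    LSTM regressor would consume."""
    history.append(tuple(bbox) + tuple(cnn_features))
    return list(history)
```

The `deque` keeps only the most recent frames, matching the idea that the tracker conditions on a short window of past locations and appearances rather than the whole video.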
13

Gonzalez, Juan. "Spacecraft Formation Control: Adaptive PID-Extended Memory Recurrent Neural Network Controller." Thesis, California State University, Long Beach, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10978237.

Abstract:
In today's space industry, satellite formation flying has become a cost-efficient alternative solution for science, on-orbit repair and military time-critical missions. While in orbit, the satellites are exposed to the space environment and to unpredictable on-board disturbances that negatively affect the attitude control system's ability to reduce relative position and velocity error. Satellites utilizing a PID or adaptive controller are typically tuned to reduce the error induced by space environment disturbances. However, in the case of an unforeseen spacecraft disturbance, such as a fault in an IMU, the effectiveness of the PID-based attitude control system will deteriorate, and it will not be able to reduce the error to an acceptable magnitude. In order to address these shortcomings, a PID-Extended Memory RNN (EMRNN) adaptive controller is proposed. A PID-EMRNN with a short memory of multiple time steps is capable of producing a control input that improves the translational position and velocity error transient response compared to a PID. The results demonstrate the PID-EMRNN controller's ability to generate faster settling and rise times for the control signal curves. The PID-EMRNN also produced similar results across an altitude range of 400 km to 1000 km and an inclination range of 40 to 65 degrees. The proposed PID-EMRNN adaptive controller has demonstrated the capability of yielding a faster position error and control signal transient response in a satellite formation flying scenario.
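The contrast between a plain PID and a controller with a short error memory can be illustrated with a sketch. This is a conventional discrete PID whose derivative term is computed over a buffer of recent errors rather than the last two samples; the actual PID-EMRNN learns its correction with a recurrent network, so this is only an analogy, and the gains and memory length are illustrative:

```python
class PIDWithErrorMemory:
    """Discrete PID controller keeping a short buffer of past errors;
    the derivative term spans the whole buffer, smoothing its response
    to noisy disturbance measurements."""

    def __init__(self, kp, ki, kd, dt, memory=5):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.memory = memory
        self.errors = []
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        self.errors = (self.errors + [error])[-self.memory:]
        if len(self.errors) >= 2:
            span = (len(self.errors) - 1) * self.dt
            derivative = (self.errors[-1] - self.errors[0]) / span
        else:
            derivative = 0.0
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)
```

Replacing the fixed derivative-over-buffer rule with a small recurrent network trained on the error history is the step that turns this classical structure into something like the proposed adaptive controller.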
14

Moradi, Mahdi. "TIME SERIES FORECASTING USING DUAL-STAGE ATTENTION-BASED RECURRENT NEURAL NETWORK." OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2701.

Abstract:
An abstract of the research paper of Mahdi Moradi, for the Master of Science degree in Computer Science, presented on April 1, 2020, at Southern Illinois University Carbondale.
Title: Time Series Forecasting Using Dual-Stage Attention-Based Recurrent Neural Network
Major professor: Dr. Banafsheh Rekabdar
15

Wang, Yuchen. "Detection of Opioid Addicts via Attention-based bidirectional Recurrent Neural Network." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1592255095863388.

16

Hanson, Jack. "Protein Structure Prediction by Recurrent and Convolutional Deep Neural Network Architectures." Thesis, Griffith University, 2018. http://hdl.handle.net/10072/382722.

Abstract:
In this thesis, the application of convolutional and recurrent machine learning techniques to several key structural properties of proteins is explored. Chapter 2 presents the first application of an LSTM-BRNN in structural bioinformatics. The method, called SPOT-Disorder, predicts the per-residue probability of a protein being intrinsically disordered (i.e. unstructured, or flexible). Using this methodology, SPOT-Disorder achieved the highest accuracy in the literature without separating short and long disordered regions during training, as was required in previous models, and was additionally proven capable of indirectly discerning functional sites located in disordered regions. Chapter 3 extends the application of an LSTM-BRNN to a two-dimensional problem in the prediction of protein contact maps. Protein contact maps describe the intra-sequence distance between each residue pairing at a distance cutoff, providing key restraints on the possible conformations of a protein. This work, entitled SPOT-Contact, introduced the coupling of two-dimensional LSTM-BRNNs with ResNets to maximise dependency propagation, achieving the highest reported accuracies for contact map precision. Several models of varying architectures were trained and combined as an ensemble predictor in order to minimise incorrect generalisations. Chapter 4 discusses the utilisation of an ensemble of LSTM-BRNNs and ResNets to predict local protein one-dimensional structural properties. The method, called SPOT-1D, predicts a wide range of local structural descriptors, including several solvent exposure metrics, secondary structure, and real-valued backbone angles. SPOT-1D was significantly improved by the inclusion of the outputs of SPOT-Contact in the input features. Using this topology led to the best reported accuracy metrics for all predicted properties.
The protein structures constructed from the backbone angles predicted by SPOT-1D achieved the lowest average error from their native structures in the literature. Chapter 5 presents an update of SPOT-Disorder, which employs the inputs from SPOT-1D in conjunction with an ensemble of LSTM-BRNNs and Inception Residual Squeeze-and-Excitation networks to predict protein intrinsic disorder. This model confirmed the enhancement provided by utilising the coupled architectures over the LSTM-BRNN alone, whilst also introducing a new convolutional format to the bioinformatics field. The work in Chapter 6 utilises the same topology as SPOT-1D for single-sequence prediction of protein intrinsic disorder in SPOT-Disorder-Single. Single-sequence prediction describes the prediction of a protein's properties without the use of evolutionary information. While evolutionary information generally improves the performance of a computational model, it comes at the expense of a greatly increased computational and time load. Removing this from the model allows genome-scale protein analysis at a minor drop in accuracy. However, models trained without evolutionary profiles can be more accurate for proteins with limited and therefore unreliable evolutionary information.
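A protein contact map as described in Chapter 3 can be computed directly from residue coordinates; 8 Å is a commonly used distance cutoff, though the thesis's exact definition may differ:

```python
import math

def contact_map(residue_coords, cutoff=8.0):
    """Binary contact map: residues i and j (i != j) are in contact
    when their distance is below the cutoff (in the same units as the
    coordinates; 8 Angstroms is a common convention)."""
    n = len(residue_coords)
    return [[1 if i != j and math.dist(residue_coords[i],
                                       residue_coords[j]) < cutoff else 0
             for j in range(n)] for i in range(n)]
```

The map is symmetric by construction, which is why predicting it is naturally a two-dimensional problem suited to 2D-LSTM and ResNet architectures.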
17

Corell, Simon. "A Recurrent Neural Network For Battery Capacity Estimations In Electrical Vehicles." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160536.

Abstract:
This study investigates whether a recurrent long short-term memory (LSTM) based neural network can be used to estimate the battery capacity in electric cars. There is enormous interest in finding the underlying reasons why and how lithium-ion batteries age, and this study is part of that broader question. The research questions answered are how well an LSTM model estimates the battery capacity, how the LSTM model performs compared to a linear model, and which parameters are important when estimating the capacity. Other studies have covered similar topics, but only a few have been performed on a real data set from real cars driving. With a data science approach, it was discovered that the LSTM model is indeed a powerful model to use for estimating the capacity. It had better accuracy than a linear regression model, although the linear regression model still gave good results. The parameters that appeared to be important when estimating the capacity were logically related to the properties of a lithium-ion battery.
18

Ahrneteg, Jakob, and Dean Kulenovic. "Semantic Segmentation of Historical Document Images Using Recurrent Neural Networks." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18219.

Abstract:
Background. This thesis focuses on the task of historical document semantic segmentation with recurrent neural networks. Document semantic segmentation involves segmenting a page into different meaningful regions and is an important prerequisite step for automated document analysis and digitisation with optical character recognition. At the time of writing, convolutional neural network based solutions are the state of the art for analyzing document images, while the use of recurrent neural networks in document semantic segmentation has not yet been studied. Considering the nature of a recurrent neural network and the recent success of recurrent neural networks in document image binarization, it should be possible to employ a recurrent neural network for document semantic segmentation and achieve high performance. Objectives. The main objective of this thesis is to investigate whether recurrent neural networks are a viable alternative to convolutional neural networks in document semantic segmentation. By using a combination of a convolutional neural network and a recurrent neural network, another objective is to determine whether the combination can improve upon using the recurrent neural network alone. Methods. To investigate the impact of recurrent neural networks in document semantic segmentation, three different recurrent neural network architectures are implemented and trained, and their performance is evaluated with Intersection over Union. Their segmentation results are then compared to those of a convolutional neural network. By performing pre-processing on training images and multi-class labeling, prediction images are ultimately produced by the employed models. Results. The gathered performance data shows a 2.7% performance difference between the best recurrent neural network model and the convolutional neural network. Notably, this recurrent neural network model has a more consistent performance than the convolutional neural network, with comparable performance results overall. The other recurrent neural network architectures show lower performance, which is connected to the complexity of these models. Furthermore, a model combining a convolutional neural network and a recurrent neural network performs significantly better, with a 4.9% performance increase compared to using only the recurrent neural network. Conclusions. This thesis concludes that recurrent neural networks are likely a viable alternative to convolutional neural networks in document semantic segmentation, but that further investigation is required. Furthermore, combining a convolutional neural network with a recurrent neural network significantly increases the performance of a recurrent neural network model.
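The Intersection over Union metric used in the evaluation can be sketched for flattened per-pixel label maps (illustrative code, not the thesis implementation):

```python
def iou_per_class(pred_labels, true_labels, n_classes):
    """Intersection over Union for each class, computed over paired
    flattened per-pixel label maps; classes absent from both maps
    get None instead of a ratio."""
    ious = []
    for c in range(n_classes):
        inter = sum(1 for p, t in zip(pred_labels, true_labels)
                    if p == c and t == c)
        union = sum(1 for p, t in zip(pred_labels, true_labels)
                    if p == c or t == c)
        ious.append(inter / union if union else None)
    return ious
```

Averaging the per-class values gives the mean IoU commonly reported for multi-class segmentation, and percentage differences like the 2.7% and 4.9% figures above compare such averages between models.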
APA, Harvard, Vancouver, ISO, and other styles
19

Salem, Jaber. "A SIMPLE FPGA - BASED ARCHITECTURE DESIGN OF RECONFIGURABLE NEURAL NETWORK." OpenSIUC, 2013. https://opensiuc.lib.siu.edu/theses/1154.

Full text
Abstract:
In contrast with analog design, the digital design and implementation of logic circuits suffer from difficulties in terms of cost and implementation effort. Neural networks are artificial systems inspired by the brain's cognitive behavior that can learn tasks of some complexity, such as optimization problems and text and speech recognition. Since the topology of a neural network is crucial to its performance, the reconfigurability of neural network hardware is essential: reconfigurability means that several different designs can be implemented on a single architecture. This work therefore proposes an efficient architecture for implementing reconfigurable back-propagation and Hopfield neural networks. We specifically adopted the reconfigurable artificial neural network approach to show how an efficient chip can be built. Simple neural network models with appropriate training were used to behave as traditional logic functions at the bit level. To further reduce hardware, a memory-sharing method was adopted. A comparison between the proposed and traditional networks shows that the proposed network significantly reduces time delay and power consumption. Xilinx ISE is used to synthesize the design; VHDL code is used to build the architecture, which is then downloaded to FPGAs (Field Programmable Gate Arrays). FPGAs are strong tools for implementing ANNs, as one can exploit concurrency and rapidly reconfigure the weights and topology of an ANN. XPower, one of the best tools in the Xilinx suite, was used to measure the total power required by the architecture. Finally, the results showed that the proposed reconfigurable architecture reduces the consumed power by almost 43% as well as the total time delay.
The architecture is also easily scalable and can cope with several network sizes on the same hardware, which is left as future work.
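The idea of training a small network to behave as a bit-level logic function can be illustrated with a minimal sketch (a generic illustration, not taken from the thesis): a single perceptron trained with the classic perceptron rule to reproduce a 2-input AND gate.

```python
# Hypothetical illustration: a single perceptron learning a 2-input AND gate.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Classic perceptron rule: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_gate])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron rule is guaranteed to converge; replacing the training pairs swaps in any other separable logic function on the same hardware, which is the essence of the reconfigurability argument.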
APA, Harvard, Vancouver, ISO, and other styles
20

Cunanan, Kevin. "Developing a Recurrent Neural Network with High Accuracy for Binary Sentiment Analysis." Scholarship @ Claremont, 2018. http://scholarship.claremont.edu/cmc_theses/1835.

Full text
Abstract:
Sentiment analysis has taken on various machine learning approaches in order to optimize accuracy, precision, and recall. Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs), in particular, account for the context of a sentence by using previous predictions as additional input for future predictions. Our approach focused on developing an LSTM RNN that can perform binary sentiment analysis on positively and negatively labeled sentences. In collaboration with Mariam Salloum, I developed a collection of programs to classify individual sentences as either positive or negative. This paper additionally looks into machine learning, neural networks, data preprocessing, implementation, and the resulting comparisons.
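To make the LSTM gating mechanism concrete, here is a minimal sketch (all sizes and weights are illustrative assumptions, not the thesis's model) of a single LSTM step applied across a toy "sentence" of random embeddings; the cell state `c` is what carries context from earlier words to later ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    # One LSTM time step: gates are slices of a single affine transform.
    z = W @ x + U @ h + b              # shape (4*H,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))       # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))    # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:])               # candidate cell update
    c = f * c + i * g                  # cell state carries long-range context
    h = o * np.tanh(c)                 # hidden state fed to the next word
    return h, c

D, H = 8, 4                            # embedding size, hidden size (toy values)
W = rng.normal(scale=0.1, size=(4*H, D))
U = rng.normal(scale=0.1, size=(4*H, H))
b = np.zeros(4*H)

h, c = np.zeros(H), np.zeros(H)
sentence = rng.normal(size=(5, D))     # five hypothetical word embeddings
for x in sentence:
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

In a sentiment classifier the final hidden state `h` would be fed to a sigmoid output unit to produce the positive/negative label.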
APA, Harvard, Vancouver, ISO, and other styles
21

Mastrogiuseppe, Francesca. "From dynamics to computations in recurrent neural networks." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE048/document.

Full text
Abstract:
The mammalian cortex consists of large and intricate networks of spiking neurons. The task of these complex recurrent assemblies is to encode and process with high precision the sensory information flowing in from the external environment. Perhaps surprisingly, electrophysiological recordings from behaving animals have revealed a high degree of irregularity in cortical activity. The patterns of spikes and the average firing rates change dramatically across trials, even when the experimental conditions and the encoded sensory stimuli are carefully kept fixed. One current hypothesis suggests that a substantial fraction of that variability emerges intrinsically from the recurrent circuitry, as has been observed in network models of strongly interconnected units.
In particular, a classical study [Sompolinsky et al., 1988] showed that networks of randomly coupled rate units can exhibit a transition from a fixed point, where the network is silent, to chaotic activity, where firing rates fluctuate in time and across units. That analysis left a large number of questions unanswered: can fluctuating activity be observed in realistic cortical architectures? How does variability depend on biophysical parameters and time scales? How can reliable information transmission and manipulation be implemented with such a noisy code? In this thesis, we study the spontaneous dynamics and the computational properties of realistic models of large neural circuits which intrinsically produce highly variable and heterogeneous activity. The mathematical tools of our analysis are inherited from dynamical systems and random matrix theory, combined with the mean-field statistical approaches developed for the study of disordered physical systems. In the first part of the dissertation, we study how strong rate irregularities can emerge in random networks of rate units that obey some of the biophysical constraints real cortical neurons are subject to. In the second and third parts, we investigate how variability is characterized in partially structured models which can support simple computations such as pattern generation and decision making. To this aim, inspired by recent advances in network training techniques, we address how random connectivity and low-dimensional structure interact in the non-linear network dynamics. The network models we derive naturally capture the ubiquitous experimental observation that population dynamics is low-dimensional, while neural representations are irregular, high-dimensional, and mixed.
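The fixed-point-to-chaos transition described above can be reproduced with a short simulation (a toy sketch of the standard setup: rate dynamics dx/dt = -x + g·J·tanh(x) with couplings drawn i.i.d. with variance 1/N; the network size and gain values are illustrative). Below the critical gain g = 1 the activity decays to the silent fixed point; above it, rates keep fluctuating.

```python
import numpy as np

def simulate(g, N=100, T=50.0, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    J = rng.normal(size=(N, N)) / np.sqrt(N)       # random coupling, variance 1/N
    x = rng.normal(size=N)                         # random initial condition
    for _ in range(int(T / dt)):
        x = x + dt * (-x + g * (J @ np.tanh(x)))   # Euler step of the rate dynamics
    return np.linalg.norm(x) / np.sqrt(N)          # RMS activity per unit

print(simulate(g=0.5) < 1e-3)   # subcritical: network settles to silence
print(simulate(g=1.5) > 0.1)    # supercritical: sustained fluctuating activity
```

The circular law puts the spectral radius of J near 1, so the linearization -I + gJ loses stability precisely at g = 1, which is the transition analyzed in the Sompolinsky et al. mean-field theory.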
APA, Harvard, Vancouver, ISO, and other styles
22

Hong, Frank Shihong. "Structural knowledge in simple recurrent network?" 1999. https://scholarworks.umass.edu/theses/2348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Chung, Xiang-Hong, and 鍾相宏. "A Forecasting Method with Multivariable Analysis for Prevention of Dengue Outbreaks Based on Simple Recurrent Neural Network Techniques." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/92555340063226151968.

Full text
Abstract:
Master's thesis, National Kaohsiung University of Applied Sciences, Department of Electrical Engineering (academic year 105). Dengue fever is a disease both familiar and feared in countries located in tropical and subtropical regions, especially as climate warming and the growth of international trade and traffic expand the breeding grounds and territory of its mosquito vector day by day. In Taiwan, there were serious outbreaks of dengue fever in the south in 2014 and 2015; the number of dengue cases was unprecedented and resulted in many deaths. Although the epidemic eased in 2016 and no serious outbreak has occurred since, a watchful attitude is still needed for the future. We therefore discuss a multi-category classifier based on neural network techniques and simulate the prediction of dengue outbreaks by classifying new data with the trained classification models. We collected and unified historical data related to dengue outbreaks from 2007 through 2016, including infected cases, environmental factors, climate factors, and event factors. Using the official epidemic-level definition together with a Simple Recurrent Neural Network (SimpleRNN) classifier, the system learns to forecast the class of dengue outbreaks monthly. Although the predicted results do not match the real outbreak levels perfectly, the method can still warn of an epidemic one month early and remind the relevant units to begin prevention measures for the predicted outbreak level. Finally, we discuss the experimental results to gain experience for improving the prediction methods of related research in the future.
APA, Harvard, Vancouver, ISO, and other styles
24

Lin, Ming Jang, and 林明璋. "Research on Dynamic Recurrent Neural Network." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/70522525556782624102.

Full text
Abstract:
Master's thesis, National Chengchi University, Institute of Applied Mathematics (academic year 82). Our task in this paper is to discuss recurrent neural networks. We construct a single-layer neural network and apply three different learning rules to simulate a circular trajectory and a figure eight. We also present a proof of convergence.
APA, Harvard, Vancouver, ISO, and other styles
25

CHEN, HUNG-PEI, and 陳虹霈. "Integrating Convolutional Neural Network and Recurrent Neural Network for Automatic Text Classification." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/4jqh8z.

Full text
Abstract:
Master's thesis, Soochow University, Department of Mathematics (academic year 108). With the rapid development of big data research, the demand for processing textual information is increasing, and text classification remains a hot research topic in natural language processing. The traditional text mining process often uses the bag-of-words model, which discards the order of the words in a sentence and is mainly concerned with their frequency of occurrence; TF-IDF (term frequency-inverse document frequency) is one of the feature extraction techniques commonly used in text mining and classification. We instead combine a convolutional neural network and a recurrent neural network so that both the semantics and the order of the words in a sentence are considered for text classification. Using the 20 Newsgroups collection as the test dataset, the resulting model achieves an accuracy of 86.3% on the test set, an improvement of about 3% over the traditional model.
APA, Harvard, Vancouver, ISO, and other styles
26

Kühn, Simone [Verfasser]. "Simulation of mental models with recurrent neural networks / vorgelegt von Simone Kühn." 2006. http://d-nb.info/98116384X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Yang, Neng-Jie, and 楊能傑. "An Optimal Recurrent Fuzzy Neural Network Controller." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/22893053061456487124.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Institute of Electrical Engineering (academic year 90). In this thesis, an optimal recurrent fuzzy neural network controller is designed by means of an adaptive genetic algorithm. The recurrent fuzzy neural network has recurrent connections representing memory elements and uses a generalized dynamic back-propagation algorithm to adjust the fuzzy parameters on-line. Usually the learning rate and the initial parameter values are chosen randomly or by experience, which is labor-intensive and inefficient; an adaptive genetic algorithm is used instead to optimize them. The adaptive genetic algorithm adjusts the probabilities of crossover and mutation according to the fitness values, and can therefore avoid falling into local optima and speed up convergence. The optimal recurrent fuzzy neural network controller is applied to the simulation of a second-order linear system, a nonlinear system, and a highly nonlinear system with instantaneous loads. The simulation results show that the learning rate, like the other fuzzy parameters, is an important factor in the optimal design. With the optimal design, every simulation achieves the lowest sum of squared errors, and the design process is carried out automatically by computer programs.
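A well-known scheme for fitness-dependent crossover and mutation probabilities is that of Srinivas and Patnaik (1994); the abstract does not state the thesis's exact formulas, so the sketch below is only an illustrative assumption of how such adaptation can look: individuals at or above average fitness are disturbed in proportion to their distance from the best one, while below-average individuals receive the full rates.

```python
def adaptive_rates(f, f_avg, f_max, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Adapt crossover (pc) and mutation (pm) probabilities to fitness,
    in the spirit of Srinivas & Patnaik (1994): good individuals are
    disturbed less, poor individuals are disturbed more."""
    if f_max == f_avg:                 # degenerate population: use full rates
        return k1, k2
    if f >= f_avg:
        pc = k1 * (f_max - f) / (f_max - f_avg)
        pm = k2 * (f_max - f) / (f_max - f_avg)
    else:
        pc, pm = k3, k4
    return pc, pm

print(adaptive_rates(f=10, f_avg=5, f_max=10))  # best individual: (0.0, 0.0)
print(adaptive_rates(f=3,  f_avg=5, f_max=10))  # poor individual: (1.0, 1.0) scaled by k3, k4
```

Because the best individual gets zero disturbance, good solutions are preserved, while low-fitness individuals are explored aggressively, which is what helps the search escape local optima.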
APA, Harvard, Vancouver, ISO, and other styles
28

LIN, CHENG-YANG, and 林政陽. "Recurrent Neural Network-based Microphone Howling Suppression." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/hd839v.

Full text
Abstract:
Master's thesis, National Taipei University of Technology, Department of Electronic Engineering (academic year 107). When singing with a karaoke system, the microphone is often too close to the loudspeaker and the amplifier gain too high, causing positive feedback and howling that is unpleasant for both singer and listener. To suppress microphone howling, a frequency shift is commonly used to interrupt the resonance, or a band-stop filter is applied as a remedy after the fact, but both may damage sound quality. We therefore adopt an adaptive feedback cancellation algorithm: using the loudspeaker input as the reference signal, the feedback signal that may be picked up at different signal-to-noise ratios is estimated automatically, and the signal gain is cancelled at the source before howling occurs. Based on this idea, this thesis implements a howling cancellation algorithm using the normalized least mean square (NLMS) method and, to account for the nonlinear distortion of the amplification system, proposes an advanced algorithm based on a recurrent neural network (RNN). The experiments test time-domain and frequency-domain processing with either NLMS or the RNN, four combinations in total, and compare convergence speed, computational demand, and howling suppression under different timbres and different room responses. The experimental results show that (1) convergence is faster in the time domain, (2) frequency-domain processing is more stable, and (3) the time-domain RNN achieves the best cancellation but at an excessive computational cost.
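The NLMS building block the thesis starts from can be sketched in a few lines (the feedback path, step size, and signal lengths below are illustrative assumptions): an adaptive FIR filter driven by the loudspeaker signal learns the acoustic path, and its output is subtracted from the microphone signal.

```python
import numpy as np

rng = np.random.default_rng(1)
true_path = np.array([0.5, -0.3, 0.2])   # hypothetical loudspeaker-to-mic feedback path
n_taps = len(true_path)

x = rng.normal(size=4000)                # loudspeaker (reference) signal
w = np.zeros(n_taps)                     # adaptive estimate of the path
mu, eps = 0.5, 1e-8                      # step size and regularizer

for n in range(n_taps, len(x)):
    x_vec = x[n - n_taps:n][::-1]        # most recent samples first
    d = true_path @ x_vec                # microphone pick-up of the feedback
    e = d - w @ x_vec                    # residual after cancellation
    w += mu * e * x_vec / (eps + x_vec @ x_vec)   # NLMS update

print(np.round(w, 3))  # ≈ [ 0.5 -0.3  0.2]
```

Normalizing the step by the input energy is what makes NLMS convergence insensitive to the loudspeaker signal level, which matters here because music and speech levels vary widely.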
APA, Harvard, Vancouver, ISO, and other styles
29

Abdolzadeh, Vida. "Efficient Implementation of Recurrent Neural Network Accelerators." Tesi di dottorato, 2020. http://www.fedoa.unina.it/13225/1/Abdolzadeh_Vida_32.pdf.

Full text
Abstract:
In this dissertation, we propose an accelerator for the implementation of the Long Short-Term Memory layer in Recurrent Neural Networks. We analyze the effect of quantization on the accuracy of the network and derive an architecture that improves the throughput and latency of the accelerator. The proposed technique requires only one training process, hence reducing the design time. We present the implementation results of the proposed accelerator; its performance compares favorably with other solutions presented in the literature. The goal of this thesis is to choose which circuit is better in terms of precision, area, and timing. In addition, to verify that the chosen circuit works correctly for the activation functions, it is converted with Vivado HLS using C and then integrated into an LSTM layer. A speech recognition application has been used to test the system, and the results are compared with those computed with the same layer in Matlab to obtain the accuracy and to decide whether the precision of the non-linear functions is sufficient.
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Hui-Hua, and 王慧華. "Adaptive Learning Rates in Diagonal Recurrent Neural Network." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/50105668211095009187.

Full text
Abstract:
Master's thesis, Tatung Institute of Technology, Department of Mechanical Engineering (academic year 84). In this paper, the ideal best adaptive learning rates are derived for a diagonal recurrent neural network. The adaptive learning rates are chosen to satisfy error-convergence requirements; these convergence requirements are discussed and then modified for a practical control system. Finally, simulation results are shown for a control system based on a diagonal recurrent neural network with the modified adaptive learning rates.
APA, Harvard, Vancouver, ISO, and other styles
31

Liao, Yuan-Fu, and 廖元甫. "Isolated Mandarin Speech Recognition Using Recurrent Neural Network." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/68290588901248152864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Thirion, Jan Willem Frederik. "Recurrent neural network-enhanced HMM speech recognition systems." Diss., 2002. http://hdl.handle.net/2263/29149.

Full text
Abstract:
Please read the abstract in the section 00front of this document. Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2006. Electrical, Electronic and Computer Engineering. Unrestricted.
APA, Harvard, Vancouver, ISO, and other styles
33

Tsai, Yao-Cheng, and 蔡曜丞. "Acoustic Echo Cancellation Based on Recurrent Neural Network." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/jgk3ea.

Full text
Abstract:
Master's thesis, National Central University, Department of Communication Engineering (academic year 107). Acoustic echo cancellation remains a common problem in speech and signal processing, with application scenarios such as teleconferencing, hands-free handsets, and mobile communications. In the past, adaptive filters were used for acoustic echo cancellation; today, deep learning can be applied to its more complex cases. The method proposed in this work treats acoustic echo cancellation as a speech separation problem instead of estimating the acoustic echo with a traditional adaptive filter, and trains the model with recurrent neural network architectures. Since recurrent neural networks are good at modeling time-varying functions, they are well suited to the acoustic echo cancellation problem. We train a bidirectional long short-term memory network and a bidirectional gated recurrent unit, extracting features from single-talk and double-talk speech. Weights are adjusted to control the ratio between double-talk and single-talk speech, and an ideal ratio mask is estimated to separate the signals and thereby remove the echo. The experimental results show that the method performs well in echo cancellation.
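The ideal ratio mask has several definitions in the literature; a minimal sketch with one common magnitude-domain definition (the toy numbers below are made up) shows how multiplying the mixture spectrogram by the mask recovers the near-end speech:

```python
import numpy as np

def ideal_ratio_mask(near_mag, echo_mag):
    # IRM in each time-frequency bin: the fraction of magnitude
    # belonging to the near-end talker (one common definition).
    return near_mag / (near_mag + echo_mag + 1e-8)

near = np.array([[1.0, 0.0], [0.5, 2.0]])   # toy near-end magnitudes
echo = np.array([[0.0, 1.0], [0.5, 0.0]])   # toy echo magnitudes
mask = ideal_ratio_mask(near, echo)
mix  = near + echo                           # toy mixture magnitude
print(np.round(mask * mix, 3))               # ≈ the near-end magnitudes
```

In the trained system the network only sees the mixture (plus the far-end reference) and must predict this mask; the IRM computed from the ground-truth components serves as the regression target.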
APA, Harvard, Vancouver, ISO, and other styles
34

Hu, Hsiao-Chun, and 胡筱君. "Recurrent Neural Network based Collaborative Filtering Recommender System." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ytva33.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering (academic year 107). With the rapid development of e-commerce, collaborative filtering recommender systems have been widely applied to major network platforms. Accurately predicting customers' preferences through a recommender system can solve the problem of information overload for users and reinforce their dependence on the platform. Since recommender systems based on collaborative filtering can recommend products that are abstract or difficult to describe in words, research related to collaborative filtering has attracted more and more attention. In this paper, we propose a deep learning framework for collaborative filtering recommendation. A recurrent neural network forms the most important part of this framework, which enables our model to consider the timestamp of each implicit feedback from each user; this ability significantly improves performance when making personalized item recommendations. In addition, we propose a training data format for the recurrent neural network that makes our recommender system the first recurrent neural network model able to consider both positive and negative implicit feedback instances during training. Through experiments on two real-world datasets, MovieLens-1m and Pinterest, we verify that our model finishes training in a shorter time and achieves better recommendation performance than current deep learning based collaborative filtering models.
APA, Harvard, Vancouver, ISO, and other styles
35

Chiu, Yi-Feng, and 邱一峰. "STUDY ON SELF-CONSTRUCTING FUZZY NEURAL NETWORK CONTROLLER USING RECURRENT NEURAL NETWORK LEARNING STRATEGY." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/38808034711756082416.

Full text
Abstract:
Master's thesis, Tatung University, Department of Electrical Engineering (academic year 101). In this thesis, a self-constructing fuzzy neural network (SCFNN) controller using a recurrent neural network (RNN) learning strategy is proposed. In the back-propagation (BP) algorithm of the SCFNN controller, the Jacobian of the system cannot be calculated exactly, so the RNN learning strategy is proposed to replace the error term of the SCFNN controller. After training, the RNN fully captures the relation between the control signal and the nonlinear response of the plant. Moreover, the structure-learning and parameter-learning phases are performed concurrently and on-line. The SCFNN controller is designed to achieve tracking control of an electronic throttle. The proposed controller has two learning processes: structure learning, based on partitioning the input space, and parameter learning, based on the supervised gradient-descent method using the BP algorithm. The Mahalanobis distance (M-distance) is employed as the criterion for deciding whether a Gaussian function should be generated or eliminated. Finally, simulation results for the electronic throttle valve demonstrate the performance and effectiveness of the proposed controller.
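A structure-learning criterion of this kind can be sketched as follows (the Gaussian widths, threshold, and two-dimensional input are illustrative assumptions, not the thesis's parameters): a new Gaussian unit is generated only when the incoming sample is far, in the Mahalanobis sense, from every existing center.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def needs_new_rule(x, centers, cov_inv, threshold=2.0):
    # Generate a new Gaussian unit only if the input is far (in the
    # Mahalanobis sense) from every existing center.
    if not centers:
        return True
    return min(mahalanobis(x, c, cov_inv) for c in centers) > threshold

cov_inv = np.linalg.inv(np.diag([0.25, 0.25]))  # widths of the Gaussians
centers = [np.array([0.0, 0.0])]
print(needs_new_rule(np.array([0.1, 0.1]), centers, cov_inv))  # False: covered
print(needs_new_rule(np.array([2.0, 2.0]), centers, cov_inv))  # True: new rule
```

Using the Mahalanobis distance rather than the Euclidean one means the "coverage" of each rule scales with the widths of its membership functions, so wide rules absorb more of the input space before a new one is spawned.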
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Chung-Hao, and 王仲豪. "STUDY ON SELF-CONSTRUCTING FUZZY NEURAL NETWORK CONTROLLER USING RECURRENT WAVELET NEURAL NETWORK LEARNING STRATEGY." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/66373384738532600320.

Full text
Abstract:
Master's thesis, Tatung University, Department of Electrical Engineering (academic year 102). In this thesis, a self-constructing fuzzy neural network (SCFNN) controller using a recurrent wavelet neural network (RWNN) learning strategy is proposed. The SCFNN has been proven over the years to model the relationship between the input and output of nonlinear dynamic systems; nevertheless, this control method still suffers from slow training. Because the wavelet transform is formed through the dilation and translation of a mother wavelet, the RWNN can resolve signals in both time and scale and is well suited to describing nonlinear phenomena, so incorporating the adaptive RWNN learning strategy improves the learning capability of the SCFNN controller. The proposed controller has two learning phases: structure learning and parameter learning. In the former, the Mahalanobis distance method is used as the basis for deciding whether a Gaussian function is generated or eliminated; the latter updates the parameters with the gradient-descent method. Both learning phases are executed concurrently and in real time. In this study, an electronic throttle system is used as the nonlinear dynamic plant to achieve throttle-angle control, and simulations show that the proposed control method has good system identification capability and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
37

Chang, Chun-Hung, and 張俊弘. "Pricing Euro Currency Options—Comparison of Back-Propagation Neural Network Modeland Recurrent Neural Network Model." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/08966045306928572228.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Institute of Business Administration (academic year 92). During the past four decades, options have become one of the most popular derivative products in the financial market, and the accuracy of option pricing has been an interesting topic since Black and Scholes' model in 1973. The target of this investigation is the Euro currency option. The study uses two artificial neural network models (a back-propagation neural network and a recurrent neural network) together with four volatility variables (historical volatility, implied volatility, GARCH volatility, and no volatility) in order to compare the pricing performance of every combination, to analyze the valuation abilities of the two models and the applicability of the volatility variables, and to verify whether volatility is a key input under the learning mechanism of the artificial neural network models. The empirical results show that both neural network models have limited ability to produce accurate valuations over long forecast periods. After shortening the forecast periods, the implied volatility variable produced the smallest error in both models, while the no-volatility variant resulted in the largest error of the four. Regarding the other two volatility variables, GARCH volatility is second only to implied volatility under the back-propagation neural network model, but historical volatility performs better than GARCH volatility under the recurrent neural network model. In summary, this work suggests that the choice of volatility has a substantial impact, so using an appropriate volatility seems more important than the choice of artificial neural network model.
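For reference, the closed-form benchmark such neural pricers are usually measured against is the Black-Scholes formula for a European call, C = S·N(d1) − K·e^(−rT)·N(d2); a minimal sketch (the numerical inputs are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K,
    maturity T (years), risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))  # 10.45
```

The only unobservable input is sigma, which is precisely why the thesis compares historical, implied, and GARCH volatility estimates as candidate network inputs.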
APA, Harvard, Vancouver, ISO, and other styles
38

Li, Jyun-Hong, and 李俊宏. "Object Mask and Boundary Guided Recurrent Convolution Neural Network." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/cz2j2t.

Full text
Abstract:
Master's thesis, National Central University, Department of Computer Science and Information Engineering (academic year 104). Convolutional neural networks (CNNs) have outstanding recognition performance: they not only improve whole-image classification but also advance local recognition tasks. The fully convolutional network (FCN) likewise improves semantic image segmentation, significantly raising accuracy compared with the traditional approach of region proposals combined with a support vector machine. In this paper, we combine two networks to improve accuracy: one produces a mask, and the other classifies the label of each pixel. Our first proposal changes the joint images used by the domain transform in DT-EdgeNet [19]. The joint images of DT-EdgeNet are edges, and these include edges of objects that do not belong to the training classes, so we conjecture that the result of [19] after the domain transform may be influenced by those edges. Our mask net produces score maps for background, object, and boundary that exclude objects outside the training classes, thereby reducing the influence of non-class objects; it can also produce a mask to refine spatial information. Our second proposal concatenates features at different pixel strides of OBG-FCN [18]; adding this concatenation layer to the trained net enhances the accuracy at object boundaries. In the end, we tested the proposed architecture on PASCAL VOC 2012 and obtained a mean IoU 6.6% higher than the baseline.
APA, Harvard, Vancouver, ISO, and other styles
39

Huang, Bo-Yuan, and 黃柏元. "The Composite Design of Recurrent Neural Network H∞ - Compensator." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/35654951335458184154.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Systems and Naval Mechatronic Engineering (academic year 93). In this study, a composite design of a Recurrent Neural Network (RNN) H∞ compensator is proposed for tracking a desired input. The composite control system is composed of an H∞ compensator, as proposed by Hwang [3] and Doyle [6], and a back-propagation RNN compensator. To make the controlled system robust, the H∞ control law is relatively conservative in the solution process; to speed up the convergence of tracking errors and match the prescribed performance, a recurrent neural network with a self-learning algorithm is used to improve the performance of the H∞ compensator. The back-propagation algorithm in the proposed RNN-H∞ compensator is applied to minimize the time spent calculating the predicted parameters. Computer simulation results show that the desired performance can easily be achieved with the proposed RNN-H∞ compensator in the presence of disturbances.
APA, Harvard, Vancouver, ISO, and other styles
40

Hau-Lung, Huang, and 黃浩倫. "Real Time Learning Recurrent Neural Network for Flow Estimation." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/90765984108789147121.

Full text
Abstract:
Master's thesis, National Taiwan University, Institute of Agricultural Engineering (academic year 87). This research presents an alternative Artificial Neural Network (ANN) approach to estimating streamflow. The Recurrent Neural Network (RNN) architecture we use provides a representation of dynamic internal feedback loops in the system to store information for later use. The Real-Time Recurrent Learning (RTRL) algorithm is implemented to enhance learning efficiency; its main feature is that it does not need many historical examples for training. Combining the RNN and RTRL to model the watershed rainfall-runoff process will complement traditional techniques in streamflow estimation.
APA, Harvard, Vancouver, ISO, and other styles
41

Peng, Chung-chi, and 彭中麒. "Recurrent Neural Network Control for a Synchronous Reluctance Motor." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/21986022062786916763.

Full text
Abstract:
Master's thesis, National Yunlin University of Science and Technology, Department of Electrical Engineering (academic year 101). This thesis develops a synchronous reluctance motor (SynRM) drive system based on a digital signal processor (dSPACE DS1104). Elman neural network and modified Elman neural network controllers are proposed for the SynRM under parameter variations and external disturbances. The recurrent neural network (RNN) and the Elman neural network (ENN) are compared; the ENN converges faster owing to its special recurrent structure. On-line parameter learning of the neural network uses the back-propagation (BP) algorithm, and a discrete-type Lyapunov function is used to guarantee convergence of the output error. Finally, experimental results demonstrate the effectiveness of the proposed control algorithms.
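The difference between the standard and modified Elman networks mentioned above can be sketched in a few lines (all sizes, weights, and the self-feedback gain are illustrative assumptions): the modified variant adds a self-connection to the context units so past hidden states decay instead of being overwritten each step.

```python
import numpy as np

rng = np.random.default_rng(2)

# Dimensions for a toy Elman network (hypothetical sizes).
n_in, n_hid, n_out = 3, 5, 1
Wx = rng.normal(scale=0.3, size=(n_hid, n_in))    # input   -> hidden
Wc = rng.normal(scale=0.3, size=(n_hid, n_hid))   # context -> hidden
Wo = rng.normal(scale=0.3, size=(n_out, n_hid))   # hidden  -> output
alpha = 0.5   # self-feedback gain of the modified Elman context units

context = np.zeros(n_hid)
for t in range(4):
    x = rng.normal(size=n_in)
    hidden = np.tanh(Wx @ x + Wc @ context)
    y = Wo @ hidden
    # Standard Elman: context <- hidden.  Modified Elman adds a
    # self-connection so older states fade out gradually.
    context = hidden + alpha * context
print(y.shape)  # (1,)
```

In the drive-control setting, `x` would carry the speed command and measured speed error, and `y` the torque or current command; the context memory is what lets the controller compensate dynamics that a feedforward network cannot.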
APA, Harvard, Vancouver, ISO, and other styles
42

Lu, Tsai-Wei, and 盧采威. "Tikhonov regularization for deep recurrent neural network acoustic modeling." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/70636533678066549649.

Full text
Abstract:
Master's thesis<br>National Chiao Tung University<br>Institute of Communications Engineering<br>102<br>Deep learning has been widely demonstrated to achieve high performance in many classification tasks, and deep neural networks are now a new trend in automatic speech recognition. In this thesis, we deal with the issue of model regularization in deep recurrent neural networks and develop deep acoustic models for speech recognition in noisy environments. Our idea is to compensate for variations in the input speech data in the restricted Boltzmann machine (RBM), which is applied as a pre-training stage for feature learning and acoustic modeling. We implement Tikhonov regularization in the pre-training procedure to build invariance properties into the acoustic neural network model. Regularization based on weight decay is further combined with Tikhonov regularization to increase the mixing rate of the alternating Gibbs Markov chain, so that contrastive divergence training tends to approximate maximum likelihood learning. In addition, the backpropagation through time (BPTT) algorithm is developed in a modified truncated minibatch training scheme for the recurrent neural network. This algorithm is applied not only to the recurrent weights but also to the weights between the previous layer and the recurrent layer. In the experiments, we carry out the proposed methods using the open-source Kaldi toolkit. The experimental results on the Resource Management (RM) and Aurora4 speech corpora show that the ideas of hybrid regularization and BPTT training do improve the performance of the deep neural network acoustic model for robust speech recognition.
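As a point of reference for the regularizers combined above: in its textbook form, the Tikhonov penalty adds lambda times the squared weight norm to the fit, and for a linear model a penalty on the input-output Jacobian reduces to exactly this, which is why it pairs naturally with weight decay. The sketch below shows the closed-form penalized fit on toy data; it is the classic ridge form, not the thesis's RBM pre-training procedure.

```python
import numpy as np

# Classic Tikhonov (ridge) fit: minimize ||Xw - y||^2 + lam * ||w||^2.
rng = np.random.default_rng(0)
n, d = 40, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d); w_true[0] = 1.0
y = X @ w_true + 0.5 * rng.normal(size=n)   # noisy observations

def fit(lam):
    # closed-form Tikhonov-regularized solution
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_plain = fit(0.0)
w_reg = fit(5.0)
# the penalty shrinks the weights, trading variance for bias
assert np.linalg.norm(w_reg) < np.linalg.norm(w_plain)
```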
APA, Harvard, Vancouver, ISO, and other styles
43

CHEN, JYUN-HE, and 陳均禾. "System Identification and Classification Using Elman Recurrent Neural Network." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/825p2n.

Full text
Abstract:
Master's thesis<br>National Yunlin University of Science and Technology<br>Department of Electrical Engineering<br>107<br>In recent years, the fast development of Artificial Intelligence has driven technological progress. Of its three major technologies, Machine Learning, Deep Learning, and Natural Language Processing, Machine Learning is the largest part: through artificial neural networks implemented in software, computers can emulate learning abilities like those of the human brain. In this thesis, to study the learning behavior of artificial neural networks on classification and nonlinear system identification problems, an Elman neural network with a self-feedback factor is used. Six algorithms, namely RTRL, GA, PSO, BBO, IWO, and a hybrid IWO/BBO method, are utilized to learn the weights of the Elman neural network. To explore the effectiveness of the algorithms and network architectures, four classification problems are used: the Breast Cancer, Parkinsons, SPECT Heart, and Lung Cancer data sets. Three nonlinear system identification problems are used: a nonlinear plant, the Henon system, and the Mackey-Glass time series. Finally, the MSE, STD, and classification rate are used to evaluate the classification experiments, while the MSE, STD, and NDEI are used to compare and analyze the system identification problems.
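Most of the six algorithms named above are gradient-free searches over the network's weight vector. A minimal PSO sketch gives the flavor: here the "network" is a single tanh neuron with two weights, invented for illustration, and the fitness is the MSE against data generated from known weights.

```python
import numpy as np

# Minimal particle swarm optimization over a tiny model's weight vector,
# standing in for learning Elman-network weights without gradients.
rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 30)
target = np.tanh(0.8 * x) + 0.3        # data from known weights (0.8, 0.3)

def mse(w):                            # fitness: mean squared error
    return np.mean((np.tanh(w[0] * x) + w[1] - target) ** 2)

n_particles, dim = 15, 2
pos = rng.uniform(-2, 2, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    # standard velocity rule: inertia + pull toward personal and global bests
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f               # update personal bests
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
```

The swarm recovers weights close to the generating ones; GA, BBO, and IWO replace the velocity rule with their own variation operators but search the same weight space.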
APA, Harvard, Vancouver, ISO, and other styles
44

CHEN, SHEN-CHI, and 陳順麒. "On the Recurrent Neural Network Based Intrusion Detection System." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/75tb39.

Full text
Abstract:
Master's thesis<br>Feng Chia University<br>Department of Information Engineering<br>107<br>With the advancement of modern science and technology, applications of the Internet of Things are developing ever faster. The smart grid is one example: it provides full communication, monitoring, and control of the components in a power system in order to meet the increasing demand for reliable energy. In such systems, many components can be monitored and controlled remotely; as a result, they can be vulnerable to malicious cyber-attacks if exploitable loopholes exist. In the power system, disturbances caused by cyber-attacks are mixed with those caused by natural events, so it is crucial for intrusion detection systems in the smart grid to classify the types of disturbances and pinpoint attacks with high accuracy. The amount of information in a smart grid is much larger than before, and the computation over this big data grows accordingly. Many analysis techniques have been proposed to extract useful information from these data, and deep learning is one of them: it can "learn" a model from a large set of training data and classify unknown events in subsequent data. In this thesis, we apply the recurrent neural network (RNN) algorithm, as well as two of its variants, to train models for intrusion detection in the smart grid. Our experimental results show that the RNN can achieve high accuracy and precision on a set of real data collected from an experimental power system network.
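The two figures of merit the abstract reports, accuracy and precision, are easy to conflate; for attack detection, precision answers "of the events flagged as attacks, how many really were attacks?". A tiny sketch with made-up labels:

```python
# Accuracy vs. precision for an attack-vs-natural-event classifier.
# The labels below are invented for illustration only.
y_true = ["attack", "natural", "attack", "attack", "natural", "natural"]
y_pred = ["attack", "natural", "natural", "attack", "natural", "attack"]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

tp = sum(t == p == "attack" for t, p in zip(y_true, y_pred))
fp = sum(t == "natural" and p == "attack" for t, p in zip(y_true, y_pred))
precision = tp / (tp + fp)   # of the predicted attacks, how many were real
```

High precision matters here because each flagged disturbance may trigger a costly manual investigation of the grid.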
APA, Harvard, Vancouver, ISO, and other styles
45

Hsieh, Tsung-Che, and 謝宗哲. "Recurrent Neural Network with Attention Mechanism for Language Model." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/76y7wc.

Full text
Abstract:
Master's thesis<br>National Taichung University of Science and Technology<br>Department of Information Management<br>106<br>The rapid growth of the Internet drives the growth of textual data, from which people extract the information they need to solve problems. Textual data may contain latent information such as the opinions of the crowd, opinions about a product, or market-relevant information; the underlying problem of how to extract features from text must therefore be solved. A model that extracts text features using neural network methods is called a neural network language model (NNLM). Its features are based on the n-gram model concept, i.e., co-occurrence relationships between words. Word vectors are fundamental, since sentence and document vectors still depend on the relationships between words, so this study focuses on word vectors. This study assumes that each word carries both "its meaning within the sentence" and "its grammatical position", and uses a recurrent neural network (RNN) with an attention mechanism to build a language model. Experiments on the Penn Treebank (PTB), WikiText-2 (WT2), and NLPCC2017 text datasets show that the proposed models achieve better performance in terms of perplexity (PPL).
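The attention mechanism over an RNN's hidden states can be sketched in a few lines of dot-product attention; the shapes and the random states below are illustrative assumptions, not the thesis's exact model.

```python
import numpy as np

# Dot-product attention over RNN hidden states: a query scores each time
# step, softmax turns scores into weights, and the context vector is the
# weighted sum of the hidden states.
rng = np.random.default_rng(3)
T, d = 6, 8                      # sequence length, hidden size
H = rng.normal(size=(T, d))      # hidden states h_1..h_T from an RNN
q = rng.normal(size=d)           # query (e.g. the current decoder state)

scores = H @ q                   # one relevance score per time step
weights = np.exp(scores - scores.max())
weights /= weights.sum()         # softmax: positive weights summing to 1
context = weights @ H            # attention-weighted summary of the sequence
```

The language model then predicts the next word from `context` (often concatenated with the current hidden state), letting it re-read any earlier position instead of relying only on the last state.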
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Chi-Jui, and 黃麒瑞. "Motor Fault Detection by Using Recurrent Neural Network Autoencoder." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/5dsset.

Full text
Abstract:
Master's thesis<br>National Chiao Tung University<br>Department of Mechanical Engineering<br>107<br>This research proposes a two-layer analysis architecture combining machine learning and deep learning to predict motor failure modes, using data obtained from a self-built motor testing platform. The first-layer model integrates a Recurrent Neural Network (RNN) with an Autoencoder (AE) to analyze the data and perform the corresponding dimension reduction. The data are fed into the model sequentially, and three different recurrent cells, the basic RNN, Long Short-Term Memory, and Gated Recurrent Unit, each combined with the AE, are compared. Once the cell type is determined, experiments with various hyperparameters are carried out to find the most suitable settings and optimize the model. The second layer adopts Artificial Neural Network, Support Vector Machine, Random Forest, and XGBoost algorithms to classify the dimension-reduced data into the corresponding fault categories. In addition, Principal Component Analysis and Linear Discriminant Analysis perform a second dimension reduction, so that the different fault types can be visualized on a plane. On the test data, the fifteen-category fault detection model using a single-layer ANN reaches 99% accuracy, and after the second dimension reduction through LDA the different fault types can be clearly distinguished in a single picture. This indicates that the data reduced by the first-layer model still retain the information of the high-dimensional data, and that the second-layer model provides excellent prediction performance. Together, these results demonstrate that the proposed architecture is well suited to time-series data analysis.
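The first-layer idea, compressing signals through an autoencoder so that reconstruction error and the low-dimensional code carry the fault information, can be sketched with a linear autoencoder fitted by SVD; this stands in for the thesis's RNN/LSTM/GRU autoencoders, and the signals below are invented.

```python
import numpy as np

# A rank-k linear autoencoder (the top-k principal directions) compresses
# "normal" signals; reconstruction error reveals how far a new sample is
# from what was learned.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 50)
# training set: sinusoids of one frequency with random phases
normal = np.array([np.sin(2 * np.pi * (5 * t + ph)) for ph in rng.random(100)])

k = 4
mean = normal.mean(0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)

def encode(x):                      # dimension reduction to a k-dim code
    return (x - mean) @ Vt[:k].T

def decode(z):                      # reconstruction from the code
    return z @ Vt[:k] + mean

def recon_error(x):
    return np.mean((x - decode(encode(x))) ** 2)

healthy = np.sin(2 * np.pi * (5 * t + 0.123))
faulty = healthy + 0.5 * rng.normal(size=t.size)   # noisy "fault" signal
```

A healthy signal reconstructs almost perfectly while the faulty one does not, so the error (or the code itself, as in the thesis) separates the classes for the second-layer classifier.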
APA, Harvard, Vancouver, ISO, and other styles
47

Duan, Chi-Huai, and 段智懷. "A Comparison of Feedforward Neural Network and Recurrent Neural Network for small region Typhoon-Rainfall forecasting." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/22318543281738835946.

Full text
Abstract:
Master's thesis<br>Feng Chia University<br>Graduate Institute of Hydraulic Engineering<br>93<br>Typhoons are the most severe natural disasters in the Taiwan region, and the damage they cause is mainly due to typhoon rainfall. Typhoon rainfall is shaped by typhoon and hydrological factors that interact in complicated, nonlinear ways, which makes forecasting difficult; so far, neither empirical formulas nor numerical models can describe typhoon rainfall completely. This study uses a feedforward Back-Propagation Network (BPN) and a Recurrent Neural Network (RNN) to build typhoon rainfall forecasting models, exploiting the ability of neural networks to handle nonlinear relationships and to memorize complicated typhoon structure. Traditionally, a BPN forecasts rainfall with good precision, but a new model must be retrained whenever new rainfall information arrives, which wastes time and is inconvenient in use. For this reason, we use an online learning method to build an RNN model that updates its weights and reflects changes confidently and quickly when new rainfall information arrives. Verification on nine typhoon events shows that the RNN model is more robust and efficient than the BPN model.
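The online-learning contrast drawn above, one gradient step per new sample instead of retraining on the whole history, can be sketched with a simple LMS update on a toy linear predictor; the data and learning rate are invented for illustration.

```python
import numpy as np

# Online learning: when a new (x, y) pair arrives, nudge the weights by
# one gradient step on (y - w.x)^2 -- no stored history, no retraining.
def online_update(w, x, y, eta=0.1):
    e = y - w @ x
    return w + eta * e * x

rng = np.random.default_rng(5)
w_true = np.array([0.7, -0.3])
w = np.zeros(2)
for _ in range(300):                 # streaming samples, none are stored
    x = rng.normal(size=2)
    y = w_true @ x
    w = online_update(w, x, y)
```

After the stream, the weights have converged to the generating ones; an RNN trained this way keeps absorbing each new rainfall observation at the cost of a single update.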
APA, Harvard, Vancouver, ISO, and other styles
48

Rossi, Alberto. "Siamese and Recurrent neural networks for Medical Image Processing." Doctoral thesis, 2021. http://hdl.handle.net/2158/1238384.

Full text
Abstract:
In recent years computer vision applications have been pervaded by deep convolutional neural networks (CNNs). These networks allow practitioners to achieve state-of-the-art performance, at least in the segmentation and classification of images and in object localization, but in each case the results obtained are directly correlated with the size of the training set, the quality of the annotations, the network depth, and the power of modern GPUs. The same rules apply to medical image analysis, although in this case collecting tagged images is harder than ever, due to the scarcity of data (because of privacy policies and acquisition difficulties) and to the need for experts in the field to make annotations. Very recently, scientific interest in the study and application of CNNs to medical imaging has grown significantly, opening up challenging new tasks but also raising fundamental issues that are still open. Is there a way to use deep networks for image retrieval in a database, to compare and analyze a new image? Are CNNs robust enough to be trusted by doctors? How can small institutions with limited funds manage the expensive equipment, such as modern GPUs, needed to train very deep neural networks? This thesis investigates many of these issues, adopting two deep learning architectures, namely siamese networks and recurrent neural networks. We start by using siamese networks to build a Content-Based Image Retrieval system for prostate MRI, providing radiologists with a tool for comparing multi-parametric MRIs in order to facilitate a new diagnosis. Moreover, an investigation is proposed on the use of a composite-loss classifier for prostate MRI, based on siamese networks, to increase robustness to random noise and adversarial attacks, yielding more reliable results. Finally, a new method for intra-procedural registration of prostate MRIs based on siamese networks was developed.
The use of recurrent neural networks is then explored for skin lesion classification and for age estimation based on brain MRI. In particular, a newly devised recurrent architecture, called C-FRPN, is employed to classify natural images of nevi and melanomas, achieving good performance with a reduced computational load. A similar conclusion can be drawn for brain MRI, where 3D images can be sliced and processed by recurrent architectures in an efficient yet reliable way.
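The siamese idea used throughout the thesis is that two inputs pass through the same embedding function and a contrastive loss shapes the distances: matching pairs are pulled together, non-matching pairs pushed at least a margin apart. A minimal sketch, with a fixed toy embedding standing in for the shared network:

```python
import numpy as np

# Contrastive loss for a siamese pair: both inputs go through the SAME
# embedding (shared weights); similar pairs are pulled together,
# dissimilar pairs pushed at least `margin` apart.
def embed(x):
    return np.tanh(x)                    # toy stand-in for the shared network

def contrastive_loss(x1, x2, same, margin=1.0):
    d = np.linalg.norm(embed(x1) - embed(x2))
    if same:
        return d ** 2                    # pull similar pairs together
    return max(0.0, margin - d) ** 2     # push dissimilar pairs apart

a = np.array([0.5, 0.5])
b = np.array([0.6, 0.4])                 # close to a
c = np.array([-2.0, 2.0])                # far from a
```

Once trained this way, retrieval reduces to nearest-neighbor search in the embedding space, which is what makes the architecture suitable for the Content-Based Image Retrieval system described above.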
APA, Harvard, Vancouver, ISO, and other styles
49

Chang, Chih-Ming, and 張志明. "Control of Web-fed Machine Using Recurrent Neural Network Controller." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/07172837269301260133.

Full text
Abstract:
Master's thesis<br>National Cheng Kung University<br>Department of Aeronautics and Astronautics<br>88<br>Many products in daily life are made on web-fed machines, such as audio tape recorders, VHS recorders, tape-making machines, and paper-making machines. In these systems, product quality is very sensitive to web tension: too much tension overstretches the web, leading to deformation and even breakage, while in the conveying process too little tension makes the web sag and complicates subsequent manufacturing. Conveying speed is another key factor in product quality: too high a speed causes large variations in tension and makes control difficult, while too low a speed leads to poor manufacturing efficiency. In this thesis, a model of the web-fed machine is constructed so that sliding mode control (SMC) can be employed to control the tension and speed of the moving web. A recurrent neural network is used as an estimator of disturbances and uncertainties, so that the operating range becomes more flexible and system performance is improved. The results show that the proposed method is feasible for web-fed machine control, and that both the manufacturing efficiency and the product quality of the web-fed machine can be improved.
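The SMC backbone of the scheme above can be sketched in one dimension: a sliding variable built from the tracking error drives a switching control whose gain exceeds the disturbance bound. The plant, gains, and disturbance below are invented; the thesis's RNN estimator refines exactly this step by estimating the disturbance online instead of over-bounding it.

```python
import numpy as np

# 1-D sliding mode control sketch: s = tracking error, u = k * sign(s),
# with gain k larger than the bound on the unknown disturbance.
def run(k, steps=400, dt=0.01):
    x, ref = 0.0, 1.0
    for i in range(steps):
        d = 0.5 * np.sin(0.05 * i)         # bounded unknown disturbance
        s = ref - x                        # sliding variable
        u = k * np.sign(s)                 # switching control
        x += dt * (u + d)                  # simple integrator plant
    return abs(ref - x)

# with gain above the disturbance bound, the error converges
assert run(k=2.0) < 0.05
```

The switching term rejects the disturbance but causes chattering; replacing the worst-case gain with an online disturbance estimate, as the thesis does with its RNN, is a standard way to soften that trade-off.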
APA, Harvard, Vancouver, ISO, and other styles
50

"Recurrent neural network for optimization with application to computer vision." Chinese University of Hong Kong, 1993. http://library.cuhk.edu.hk/record=b5887839.

Full text
Abstract:
by Cheung Kwok-wai.<br>Thesis (M.Phil.)--Chinese University of Hong Kong, 1993.<br>Includes bibliographical references (leaves [146-154]).<br>Contents: Chapter 1, Introduction: programmed computing vs. neurocomputing; development of neural networks (feedforward and feedback models); state of the art in applying recurrent neural networks to computer vision; objective of the research; plan of the thesis.<br>Chapter 2, Background: a short history of Hopfield-like neural networks; the Hopfield network model (neuron transfer function, updating sequence); the Hopfield energy function and network convergence properties; the generalized Hopfield network (network order, associated energy function and convergence property, hardware implementation considerations).<br>Chapter 3, Recurrent neural networks for optimization: mapping to the neural network formulation; network stability versus self-reinforcement (the quadratic problem and the Hopfield network, the higher-order case and a reshaping strategy, a numerical example); the local-minimum limitation and existing solutions in the literature (simulated annealing, mean field annealing, adaptively changing neural networks, the correcting-current method); conclusions.<br>Chapter 4, A novel neural network for global optimization, the tunneling network: the tunneling algorithm and tunneling phase; a neural network with tunneling capability (network specifications, the tunneling function for the Hopfield network and the corresponding updating rule); tunneling network stability and global convergence (Markov chain models for the Hopfield and tunneling networks, classification of the Hopfield Markov chain, convergence towards the global minimum); variation of pole strength and its effect (energy profile analysis, size of the attractive basin and required pole strength, a new type of pole that eases the implementation problem); simulation results and performance comparison (optimal paths obtained, convergence rate, decomposition of the tunneling network); suggested hardware implementation; conclusions.<br>Chapter 5, Recurrent neural networks for Gaussian filtering: introduction (the silicon retina, an active resistor network for Gaussian filtering of images, motivations for using a recurrent neural network, differences between the active resistor network and recurrent neural network models); problem formulation in one and two dimensions; simulation results (spatial impulse responses and filtering properties of the 1-D and 2-D networks); conclusions.<br>Chapter 6, Recurrent neural networks for boundary detection: introduction; problem and neural network formulation; simulation results (feasibility study and performance comparison, smoothing and boundary detection, convergence improvement by network decomposition, hardware implementation considerations); conclusions.<br>Chapter 7, Conclusions and future research.<br>Appendices: boundary-connection assignment and connection-weight formulas (with a proof of the symmetric property) for the 2-D recurrent neural network for Gaussian filtering; details of the reshaping strategy.
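The Hopfield dynamics this thesis builds on have one defining property: with symmetric weights and asynchronous updates, every state change can only lower the energy E(s) = -1/2 sᵀWs, so the network settles into a local minimum, which is exactly the limitation the tunneling network is designed to escape. A toy sketch (random symmetric weights, not the thesis's problem instances):

```python
import numpy as np

# Asynchronous Hopfield updates monotonically decrease the energy
# E(s) = -0.5 * s^T W s when W is symmetric with zero diagonal.
rng = np.random.default_rng(7)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                          # symmetric weights
np.fill_diagonal(W, 0.0)                   # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=n)        # random bipolar start state
energies = [energy(s)]
for _ in range(5):                         # a few asynchronous sweeps
    for i in range(n):
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0   # align unit with its field
        energies.append(energy(s))
```

The recorded energies never increase, so the state is trapped by the first minimum it reaches; the thesis's tunneling phase perturbs the converged state to search for a lower basin.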
APA, Harvard, Vancouver, ISO, and other styles