
Journal articles on the topic 'Backpropagation through time (BPTT)'


Consult the top 50 journal articles for your research on the topic 'Backpropagation through time (BPTT).'


1

Moonlight, Lady Silk, Bambang Riyanto Trilaksono, Bambang Bagus Harianto, and Fiqqih Faizah. "Implementation of recurrent neural network for the forecasting of USD buy rate against IDR." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 4 (2023): 4567–81. https://doi.org/10.11591/ijece.v13i4.pp4567-4581.

Abstract:
This study implements a recurrent neural network (RNN) by comparing two RNN network structures, namely Elman and Jordan, using the backpropagation through time (BPTT) algorithm in the training and forecasting process in foreign exchange forecasting cases. The activation functions used are the linear transfer function, the tan-sigmoid transfer function (Tansig), and the log-sigmoid transfer function (Logsig), which are applied to the hidden and output layers. The application of the activation functions shows that the log-sigmoid transfer function is the most appropriate activation function for the hidden layer, while the linear transfer function is the most appropriate activation function for the output layer. Based on the results of training and forecasting the USD against IDR currency, the Elman BPTT method is better than the Jordan BPTT method, with the best iteration being the 4000th iteration for both. The lowest root mean square error (RMSE) values for training and next-day forecasting produced by Elman BPTT were 0.073477 and 122.15, while the Jordan backpropagation RNN method yielded 0.130317 and 222.96.
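For orientation, the setup described above (an Elman network with a linear output layer, trained with BPTT for one-day-ahead forecasting) can be sketched as follows. This is a hedged illustration, not the authors' implementation: it uses PyTorch, a synthetic placeholder series instead of the USD/IDR data, and nn.RNN's tanh hidden activation rather than the log-sigmoid the study found best.

```python
# Minimal sketch of an Elman-style RNN trained with BPTT for one-step-ahead forecasting.
# Hypothetical stand-in for the paper's setup; autograd backpropagates the loss
# through the unrolled sequence, which is exactly BPTT.
import torch
import torch.nn as nn

torch.manual_seed(0)
series = torch.cumsum(torch.randn(500), dim=0)           # placeholder for a USD/IDR series
series = (series - series.mean()) / series.std()         # normalise

window = 20
x = torch.stack([series[i:i + window] for i in range(len(series) - window - 1)])
y = torch.stack([series[i + window] for i in range(len(series) - window - 1)])
x = x.unsqueeze(-1)                                       # (batch, time, features)

class ElmanForecaster(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(1, hidden, nonlinearity="tanh", batch_first=True)
        self.out = nn.Linear(hidden, 1)                   # linear output layer

    def forward(self, x):
        h_seq, _ = self.rnn(x)
        return self.out(h_seq[:, -1, :]).squeeze(-1)      # predict the next value

model = ElmanForecaster()
opt = torch.optim.SGD(model.parameters(), lr=0.05)
for epoch in range(200):                                  # "iterations" in the abstract's sense
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()                                       # gradients flow back through time
    opt.step()
print(f"final training RMSE: {loss.sqrt().item():.4f}")
```

A Jordan variant would differ only in feeding the previous output, rather than the previous hidden state, back into the recurrence.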
2

Moonlight, Lady Silk, Bambang Riyanto Trilaksono, Bambang Bagus Harianto, and Fiqqih Faizah. "Implementation of recurrent neural network for the forecasting of USD buy rate against IDR." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 4 (2023): 4567. http://dx.doi.org/10.11591/ijece.v13i4.pp4567-4581.

Abstract:
This study implements a recurrent neural network (RNN) by comparing two RNN network structures, namely Elman and Jordan, using the backpropagation through time (BPTT) algorithm in the training and forecasting process in foreign exchange forecasting cases. The activation functions used are the linear transfer function, the tan-sigmoid transfer function (Tansig), and the log-sigmoid transfer function (Logsig), which are applied to the hidden and output layers. The application of the activation functions shows that the log-sigmoid transfer function is the most appropriate activation function for the hidden layer, while the linear transfer function is the most appropriate activation function for the output layer. Based on the results of training and forecasting the USD against IDR currency, the Elman BPTT method is better than the Jordan BPTT method, with the best iteration being the 4000th iteration for both. The lowest root mean square error (RMSE) values for training and next-day forecasting produced by Elman BPTT were 0.073477 and 122.15, while the Jordan backpropagation RNN method yielded 0.130317 and 222.96.
3

Palangpour, Parviz, Ganesh K. Venayagamoorthy, and Kevin J. Duffy. "Prediction of Elephant Movement in a Game Reserve Using Neural Networks." New Mathematics and Natural Computation 5, no. 2 (2009): 421–39. http://dx.doi.org/10.1142/s1793005709001404.

Abstract:
A large number of South Africa's elephants can be found on small wildlife reserves. The large nutritional demands and destructive foraging behavior of elephants can threaten rare species of vegetation. If conservation management is to protect threatened species of vegetation, knowing how long elephants will stay in one area of a reserve as well as which area they will move to next could be useful. The goal of this study was to train a recurrent neural network to predict an elephant herd's next position in the Pongola Game Reserve. Accurate predictions would provide a useful tool in assessing future impact of elephant populations on different areas of the reserve. Particle swarm optimization (PSO), PSO initialized backpropagation (PSO-BP) and PSO initialized backpropagation through time (PSO-BPTT) algorithms are used to adapt the recurrent neural network's weights. The effectiveness of PSO, PSO-BP and PSO-BPTT for training a recurrent neural network for elephant migration prediction is compared and PSO-BPTT produces the most accurate predictions at the expense of more computational cost.
4

Suksmono, Andriyan Bayu, and Akira Hirose. "Beamforming of Ultra-Wideband Pulses by a Complex-Valued Spatio-Temporal Multilayer Neural Network." International Journal of Neural Systems 15, no. 1–2 (2005): 85–91. http://dx.doi.org/10.1142/s0129065705000128.

Abstract:
We present a neuro-beamformer of ultra-wideband (UWB) pulses employing a complex-valued spatio-temporal multilayer neural network, where complex-valued backpropagation through time (CV-BPTT) is used as the learning algorithm. The system performance is evaluated with a UWB monocycle pulse. Simulation results in suppressing multiple UWB interferers and in steering to multiple desired UWB pulses demonstrate the applicability of the proposed system.
5

Tomonaga, Sutashu, Haruo Mizutani, and Kenji Doya. "Training Recurrent Neural Networks with Inherent Missing Data for Wearable Device Applications (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 29512–13. https://doi.org/10.1609/aaai.v39i28.35307.

Abstract:
Wearable devices are transforming healthcare by providing continuous, real-time physiological data for monitoring and analysis. However, data often suffer from noise and significant missing values due to operational constraints and user compliance. Traditional approaches address these issues through data imputation during pre-processing, introducing biases and inaccuracies. We propose a novel method enabling Recurrent Neural Networks (RNNs) to inherently handle missing data without imputation. By implementing teacher-forcing during Backpropagation Through Time (BPTT) when data are available and switching to autonomous mode otherwise, our approach leverages RNNs' dynamics to model physiological signals accurately. We demonstrate our method's effectiveness using the Lorenz 63 system as a surrogate dataset, achieving robust reconstructions with 80% missing data.
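The switching rule summarized above lends itself to a short sketch. The following PyTorch loop is an illustration under assumed names (AutoRegressiveRNN, a hypothetical 0/1 observation mask), not the authors' code: observed steps are teacher-forced, missing steps are filled with the model's own previous prediction, and a single backward pass performs BPTT through both modes.

```python
# Illustrative BPTT training step with teacher forcing on observed samples and
# autonomous (closed-loop) prediction on missing samples. All names are hypothetical.
import torch
import torch.nn as nn

class AutoRegressiveRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.cell = nn.RNNCell(1, hidden)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, signal, observed_mask):
        # signal: (T,) values (arbitrary where missing); observed_mask: (T,) of 0/1
        h = torch.zeros(1, self.cell.hidden_size)
        pred = torch.zeros(1, 1)
        preds = []
        for t in range(signal.shape[0]):
            if observed_mask[t] > 0:                      # teacher forcing: feed the measurement
                x = signal[t].view(1, 1)
            else:                                         # autonomous mode: feed own prediction
                x = pred
            h = self.cell(x, h)
            pred = self.readout(h)
            preds.append(pred)
        return torch.cat(preds).squeeze(-1)

model = AutoRegressiveRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
signal = torch.sin(torch.linspace(0, 12.0, 200))          # placeholder physiological signal
mask = (torch.rand(200) > 0.8).float()                    # ~80% of steps treated as missing
out = model(signal, mask)
target = torch.roll(signal, -1)                           # predict the next sample
loss = ((out - target) ** 2 * mask).sum() / mask.sum()    # score only the observed targets
loss.backward()                                           # BPTT through both operating modes
opt.step()
```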
6

Pratheeksha, P., B. M. Pranav, and Nasreen Azra. "Memory Optimization Techniques in Neural Networks: A Review." International Journal of Engineering and Advanced Technology (IJEAT) 10, no. 6 (2021): 44–48. https://doi.org/10.35940/ijeat.F2991.0810621.

Abstract:
Deep neural networks have been continuously evolving towards larger and more complex models to solve challenging problems in the field of AI. The primary bottleneck that restricts new network architectures is memory consumption. Running or training DNNs relies heavily on hardware (CPUs, GPUs, or FPGAs) that is either inadequate in terms of memory or hard to extend, which makes it difficult to scale. In this paper, we review some of the latest memory footprint reduction techniques, which enable faster training and lower model complexity. They additionally allow accuracy to be improved by increasing the batch size and developing wider and deeper neural networks with the same set of hardware resources. The paper emphasizes memory optimization methods specific to CNN and RNN training.
7

Alzubi, Abdallah, David Lin, Johan Reimann, and Fadi Alsaleem. "G-CTRNN: A Trainable Low-Power Continuous-Time Neural Network for Human Activity Recognition in Healthcare Applications." Applied Sciences 15, no. 13 (2025): 7508. https://doi.org/10.3390/app15137508.

Abstract:
Continuous-time Recurrent Neural Networks (CTRNNs) are well-suited for modeling temporal dynamics in low-power neuromorphic and analog computing systems, making them promising candidates for edge-based human activity recognition (HAR) in healthcare. However, training CTRNNs remains challenging due to their continuous-time nature and the need to respect physical hardware constraints. In this work, we propose G-CTRNN, a novel gradient-based training framework for analog-friendly CTRNNs designed for embedded healthcare applications. Our method extends Backpropagation Through Time (BPTT) to continuous domains using TensorFlow’s automatic differentiation, while enforcing constraints on time constants and synaptic weights to ensure hardware compatibility. We validate G-CTRNN on the WISDM human activity dataset, which simulates realistic wearable sensor data for healthcare monitoring. Compared to conventional RNNs, G-CTRNN achieves superior classification accuracy with fewer parameters and greater stability—enabling continuous, real-time HAR on low-power platforms such as MEMS computing networks. The proposed framework provides a pathway toward on-device AI for remote patient monitoring, elderly care, and personalized healthcare in resource-constrained environments.
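As a rough idea of what a trainable continuous-time recurrent cell looks like, the sketch below Euler-discretizes the standard CTRNN equation and keeps the per-unit time constants positive as a stand-in for the hardware constraints mentioned above; it is an assumption-laden PyTorch illustration, not the G-CTRNN framework itself.

```python
# Hedged sketch of a CTRNN layer: Euler step of  tau * dh/dt = -h + tanh(W_in x + W_rec h),
# with time constants kept positive. Training end-to-end lets autograd perform BPTT
# through the unrolled dynamics. Not the authors' code.
import torch
import torch.nn as nn

class CTRNN(nn.Module):
    def __init__(self, n_in, n_hidden, n_out, dt=0.05):
        super().__init__()
        self.dt = dt
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Linear(n_hidden, n_hidden, bias=False)
        self.readout = nn.Linear(n_hidden, n_out)
        self.log_tau = nn.Parameter(torch.zeros(n_hidden))    # tau = softplus(log_tau) + dt > 0

    def forward(self, x):                                      # x: (batch, time, n_in)
        batch, steps, _ = x.shape
        h = torch.zeros(batch, self.w_rec.in_features)
        tau = torch.nn.functional.softplus(self.log_tau) + self.dt
        for t in range(steps):
            target = torch.tanh(self.w_in(x[:, t]) + self.w_rec(h))
            h = h + (self.dt / tau) * (-h + target)            # Euler step of the ODE
        return self.readout(h)                                 # classify from the final state

# Example: 3-axis accelerometer windows (placeholder for WISDM-style data) -> 6 activities
model = CTRNN(n_in=3, n_hidden=32, n_out=6)
x = torch.randn(8, 100, 3)
loss = nn.functional.cross_entropy(model(x), torch.randint(0, 6, (8,)))
loss.backward()                                                # BPTT through the Euler unrolling
```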
8

Doyle, H. R., and B. Parmanto. "Recurrent Neural Networks for Predicting Outcomes after Liver Transplantation: Representing Temporal Sequence of Clinical Observations." Methods of Information in Medicine 40, no. 05 (2001): 386–91. http://dx.doi.org/10.1055/s-0038-1634197.

Abstract:
Objectives: This paper investigates a version of a recurrent neural network with the backpropagation through time (BPTT) algorithm for predicting liver transplant graft failure based on a time series sequence of clinical observations. The objective is to improve upon the current approaches to liver transplant outcome prediction by developing a more complete model that takes into account not only the preoperative risk assessment, but also the early postoperative history. Methods: A 6-fold cross-validation procedure was used to measure the performance of the networks. The data set was divided into a learning set and a test set by maintaining the same proportion of positive and negative cases as in the original set. The effects of network complexity on overfitting were investigated by constructing two types of networks with different numbers of hidden units. For each type of network, 10 individual networks were trained on the learning set and used to form a committee. The performance of the networks was measured exhaustively with respect to both the entire training and test sets. Results: The networks were capable of learning the time series problem and achieved good performance of 90% correct classification on the learning set and 78% on the test set. The prediction accuracy increases as more information becomes progressively available after the operation, with a daily improvement of 10% on the learning set and 5% on the test set. Conclusions: Recurrent neural networks trained with the BPTT algorithm are capable of learning to represent the temporal behavior of the time series prediction task. This model is an improvement upon the current model, which does not take into account postoperative temporal information.
9

Zhang, Jianhua, and Yongyue Wang. "Coordinated Frequency Control for Electric Vehicles and a Thermal Power Unit via an Improved Recurrent Neural Network." Energies 18, no. 3 (2025): 533. https://doi.org/10.3390/en18030533.

Abstract:
With the advancement of intelligent power generation and consumption technologies, an increasing number of renewable energy sources (RESs), smart loads, and electric vehicles (EVs) are being integrated into smart grids. This paper proposes a coordinated frequency control strategy for hybrid power systems with RESs, smart loads, EVs, and a thermal power unit (TPU), in which EVs and the TPU participate in short-term frequency regulation (FR) jointly. All EVs provide FR auxiliary services as controllable loads; specifically, the EV aggregations operate in charging mode when participating in FR. The proposed coordinated frequency control strategy is implemented by an improved recurrent neural network (IRNN), which combines a recurrent neural network with a functional-link layer. The weights and biases of the IRNN are trained by an improved backpropagation through time (BPTT) algorithm, in which a chaotic competitive swarm optimizer (CCSO) is proposed to optimize the learning rates. Finally, the simulation results verify the superiority of the coordinated frequency control strategy.
10

Skaruz, Jarosław. "Database security: combining neural networks and classification approach." Studia Informatica, no. 23 (December 22, 2020): 95–115. http://dx.doi.org/10.34739/si.2019.23.06.

Abstract:
In this paper we present a new approach based on the application of neural networks to detect SQL attacks. SQL attacks are attacks that exploit SQL statements in order to be carried out. The problem of detecting this class of attacks is transformed into a time series prediction problem. SQL queries are used as a source of events in a protected environment. To differentiate between normal SQL queries and those sent by an attacker, we divide SQL statements into tokens and pass them to our detection system, which predicts the next token, taking into account previously seen tokens. In the learning phase, tokens are passed to a recurrent neural network (RNN) trained by the backpropagation through time (BPTT) algorithm. Then, two coefficients of a rule used to interpret the RNN output are evaluated. In the testing phase, the RNN with the rule is examined against attacks and legal data to find out how the evaluated rule affects the efficiency of detecting attacks. All experiments were conducted on a Jordan network. Experimental results show the relationship between the rule and the length of SQL queries.
11

Yang, Xiaomei, Jinfei Wang, Xingrui Huang, Yang Wang, and Xianyong Xiao. "Forced Oscillation Detection via a Hybrid Network of a Spiking Recurrent Neural Network and LSTM." Sensors 25, no. 8 (2025): 2607. https://doi.org/10.3390/s25082607.

Abstract:
The detection of forced oscillations, especially distinguishing them from natural oscillations, has emerged as a major concern in power system stability monitoring. Deep learning (DL) holds significant potential for detecting forced oscillations correctly. However, existing artificial neural networks (ANNs) face challenges when employed in edge devices for timely detection due to their inherent complex computations and high power consumption. This paper proposes a novel hybrid network that integrates a spiking recurrent neural network (SRNN) with long short-term memory (LSTM). The SRNN achieves computational and energy efficiency, while the integration with LSTM is conducive to effectively capturing temporal dependencies in time-series oscillation data. The proposed hybrid network is trained using the backpropagation-through-time (BPTT) optimization algorithm, with adjustments made to address the discontinuous gradient in the SRNN. We evaluate our proposed model on both simulated and real-world oscillation datasets. Overall, the experimental results demonstrate that the proposed model can achieve higher accuracy and superior performance in distinguishing forced oscillations from natural oscillations, even in the presence of strong noise, compared to pure LSTM and other SRNN-related models.
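The "adjustments made to address the discontinuous gradient" are typically realized with a surrogate gradient. The sketch below shows one common variant (a fast-sigmoid surrogate inside a custom autograd function) purely as an illustration; the paper's exact adjustment may differ.

```python
# Common surrogate-gradient trick for training spiking units with BPTT:
# the forward pass uses a hard threshold, the backward pass substitutes a smooth derivative.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane > 0.0).float()                  # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * membrane.abs()) ** 2   # fast-sigmoid derivative
        return grad_output * surrogate

spike = SurrogateSpike.apply

# Tiny demo: a leaky-integrate-and-fire layer unrolled over time on placeholder data.
w = torch.randn(16, 8, requires_grad=True)
v = torch.zeros(4, 8)                                    # membrane potentials
spikes_out = 0.0
inputs = torch.rand(20, 4, 16)                           # (time, batch, features)
for t in range(inputs.shape[0]):
    v = 0.9 * v + inputs[t] @ w                          # leaky integration
    s = spike(v - 1.0)                                   # threshold at 1.0
    v = v * (1.0 - s)                                    # reset where a spike occurred
    spikes_out = spikes_out + s
loss = ((spikes_out / inputs.shape[0]) - 0.2).pow(2).mean()   # push firing rate toward 0.2
loss.backward()                                          # BPTT with the surrogate in the backward pass
```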
12

Pan, Shulin, Ke Yan, Haiqiang Lan, José Badal, and Ziyu Qin. "A Sparse Spike Deconvolution Algorithm Based on a Recurrent Neural Network and the Iterative Shrinkage-Thresholding Algorithm." Energies 13, no. 12 (2020): 3074. http://dx.doi.org/10.3390/en13123074.

Abstract:
Conventional sparse spike deconvolution algorithms that are based on the iterative shrinkage-thresholding algorithm (ISTA) are widely used. The aim of this type of algorithm is to obtain accurate seismic wavelets; when this is not fulfilled, the processing stops being optimum. Using a recurrent neural network (RNN) as a deep learning method and applying backpropagation to ISTA, we have developed an RNN-like ISTA as an alternative sparse spike deconvolution algorithm. The algorithm is tested with both synthetic and real seismic data. The algorithm first builds a training dataset from existing well-log and seismic data and then extracts wavelets from those seismic data for further processing. Based on the extracted wavelets, the new method uses ISTA to calculate the reflection coefficients. Next, inspired by the backpropagation through time (BPTT) algorithm, backward error correction is performed on the wavelets using the errors between the calculated reflection coefficients and the reflection coefficients corresponding to the training dataset. Finally, after performing backward correction over multiple iterations, a set of acceptable seismic wavelets is obtained, which is then used to deduce the sequence of reflection coefficients of the real data. The new algorithm improves the accuracy of the deconvolution results by reducing the effect of the wrong seismic wavelets given by conventional ISTA. In this study, we describe the mechanism and the derivation of the proposed algorithm, and verify its effectiveness through experiments using theoretical and real data.
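Because the method above builds on ISTA, a bare-bones ISTA deconvolution loop may help situate it. The NumPy sketch below assumes a known toy wavelet and a synthetic trace, and deliberately omits the paper's BPTT-inspired backward correction of the wavelets.

```python
# Bare-bones ISTA for sparse-spike deconvolution: estimate reflectivity r so that
# (wavelet * r) approximates the trace y, with an L1 sparsity penalty. Synthetic example only.
import numpy as np

def ista_deconvolve(y, wavelet, lam=0.05, n_iter=300):
    n = len(y)
    # Convolution matrix: column k holds the wavelet shifted to sample k, trimmed to length n.
    W = np.stack([np.convolve(np.eye(n)[k], wavelet)[:n] for k in range(n)], axis=1)
    step = 1.0 / np.linalg.norm(W, 2) ** 2           # 1 / Lipschitz constant of the gradient
    r = np.zeros(n)
    for _ in range(n_iter):
        z = r - step * W.T @ (W @ r - y)             # gradient step on the data misfit
        r = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold
    return r

rng = np.random.default_rng(0)
wavelet = np.array([0.2, 0.6, 1.0, 0.6, 0.2]) * np.array([1, 1, 1, -1, -1])  # toy wavelet
true_r = np.zeros(120)
true_r[[20, 55, 90]] = [1.0, -0.7, 0.5]               # three sparse reflectors
trace = np.convolve(true_r, wavelet)[:120] + 0.01 * rng.standard_normal(120)
estimate = ista_deconvolve(trace, wavelet)
```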
13

Mashoshin, O. F., H. Huseynov, and A. S. Zasukhin. "Methodology for diagnosing the technical condition of aviation gas turbine engines using recurrent neural networks (RNN) and long short-term memory networks (LSTM)." Civil Aviation High Technologies 27, no. 6 (2025): 21–41. https://doi.org/10.26467/2079-0619-2024-27-6-21-41.

Abstract:
This study presents a method for diagnosing the technical condition of aviation gas turbine engines (GTE) using recurrent neural networks (RNN) and long short-term memory networks (LSTM). The primary focus is on comparing the effectiveness of these models for forecasting key operating parameters of GTEs, such as vibrations, turbine-inlet temperatures, and low- and high-pressure rotor speeds. The research involved thorough data cleaning and normalization, including handling missing values, normalization using Min-Max Scaling, outlier removal, data decorrelation, and time series smoothing. The RNN and LSTM models were trained using the backpropagation through time (BPTT) algorithm to accurately forecast GTE operating parameters. The results show that both models demonstrate high forecasting accuracy, but the RNN models perform better for most parameters. For the vibration parameters (VIB_N1FNT1, VIB_N1FNT2, VIB_N2FNT1, and VIB_N2FNT2), RNN models achieved lower RMSE and MAE values, confirming their higher accuracy. For the temperature parameters (EGT1 and EGT2), RNN models also showed higher accuracy. Meanwhile, LSTM models achieved better results for some rotor speed parameters (N21 and N22). The findings emphasize the necessity of choosing the appropriate model based on the nature of the data and the specifics of the parameters to be forecast. Future research may focus on developing hybrid approaches that combine the advantages of both models to achieve optimal results in diagnosing the technical condition of GTEs.
14

Liu, Guangkui, Xu Yang, Xisheng Yang, et al. "Typical Damage Prediction and Reliability Analysis of Superheater Tubes in Power Station Boilers Based on Multisource Data Analysis." Energies 15, no. 3 (2022): 1005. http://dx.doi.org/10.3390/en15031005.

Abstract:
The superheater and re-heater piping components in supercritical thermal power units are prone to creep and fatigue failure fracture after extensive use due to the high-pressure and high-temperature environment. Therefore, safety assessment for superheaters and re-heaters in such an environment is critical. However, actual service operation data are frequently insufficient, resulting in low accuracy of the safety assessment. To address the susceptibility of superheater and re-heater piping components to creep, fatigue failure fracture, and creep–fatigue coupling rupture, and to support safety assessment, remaining-life prediction, and reliability analysis despite the lack of actual service operation data, this paper combines multisource heterogeneous data generated during the actual service of power plants with deep learning technology. Three real-time operating-condition variables (temperature, pressure, and stress amplitude) recorded during equipment operation are predicted by training a long short-term memory (LSTM) neural network, a deep learning architecture suited to processing time-series data; a backpropagation through time (BPTT) algorithm is used to optimize the model, and the results are compared with the actual physical model. Damage assessment and life prediction of the final superheater tubes of power station boilers are carried out. The Weibull distribution model is used to obtain the trend of cumulative failure risk and to assess and predict the safety condition of the overall system of pressurized components of power station boilers.
15

Uyulan, Caglar, Türker Tekin Ergüzel, and Nevzat Tarhan. "Entropy-based feature extraction technique in conjunction with wavelet packet transform for multi-mental task classification." Biomedical Engineering / Biomedizinische Technik 64, no. 5 (2019): 529–42. http://dx.doi.org/10.1515/bmt-2018-0105.

Abstract:
Event-related mental task information collected from electroencephalography (EEG) signals, which are functionally related to different brain areas, possesses complex and non-stationary signal features. It is essential to be able to classify mental task information for use in brain-computer interface (BCI) applications. This paper proposes a wavelet packet transform (WPT) technique merged with a specific entropy biomarker as a feature extraction tool to classify six mental tasks. First, the data were collected from a healthy control group, and the multi-signal information comprising six mental tasks was decomposed into a number of subspaces spread over a wide frequency spectrum by projecting six different wavelet basis functions. Later, the decomposed subspaces were subjected to three entropy-type statistical measure functions to extract the feature vectors for each mental task, which were fed into a backpropagation time-recurrent neural network (BPTT-RNN) model. Cross-validated classification results demonstrated that the model could classify with 85% accuracy through a discrete Meyer basis function coupled with a Renyi entropy biomarker. The classifier model was finally tested in the Simulink platform to demonstrate the Fourier series representation of periodic signals by tracking the harmonic pattern. In order to boost the model performance, an ant colony optimization (ACO)-based feature selection method was employed. The overall accuracy increased to 88.98%. The results underlined that the WPT combined with an entropy uncertainty measure is both effective and versatile in discriminating the features of the signal localized in a time-frequency domain.
16

Beaufays, Françoise, and Eric A. Wan. "Relating Real-Time Backpropagation and Backpropagation-Through-Time: An Application of Flow Graph Interreciprocity." Neural Computation 6, no. 2 (1994): 296–306. http://dx.doi.org/10.1162/neco.1994.6.2.296.

Abstract:
We show that signal flow graph theory provides a simple way to relate two popular algorithms used for adapting dynamic neural networks, real-time backpropagation and backpropagation-through-time. Starting with the flow graph for real-time backpropagation, we use a simple transposition to produce a second graph. The new graph is shown to be interreciprocal with the original and to correspond to the backpropagation-through-time algorithm. Interreciprocity provides a theoretical argument to verify that both flow graphs implement the same overall weight update.
17

Manneschi, Luca, and Eleni Vasilaki. "An alternative to backpropagation through time." Nature Machine Intelligence 2, no. 3 (2020): 155–56. http://dx.doi.org/10.1038/s42256-020-0162-9.

18

Lillicrap, Timothy P., and Adam Santoro. "Backpropagation through time and the brain." Current Opinion in Neurobiology 55 (April 2019): 82–89. http://dx.doi.org/10.1016/j.conb.2019.01.011.

19

Zini, Julia El, Yara Rizk, and Mariette Awad. "An Optimized Parallel Implementation of Non-Iteratively Trained Recurrent Neural Networks." Journal of Artificial Intelligence and Soft Computing Research 11, no. 1 (2021): 33–50. http://dx.doi.org/10.2478/jaiscr-2021-0003.

Abstract:
Recurrent neural networks (RNN) have been successfully applied to various sequential decision-making tasks, natural language processing applications, and time-series predictions. Such networks are usually trained through back-propagation through time (BPTT), which is prohibitively expensive, especially when the length of the time dependencies and the number of hidden neurons increase. To reduce the training time, extreme learning machines (ELMs) have been recently applied to RNN training, reaching a 99% speedup on some applications. Due to its non-iterative nature, ELM training, when parallelized, has the potential to reach higher speedups than BPTT. In this work, we present Opt-PR-ELM, an optimized parallel RNN training algorithm based on ELM that takes advantage of the GPU shared memory and of parallel QR factorization algorithms to efficiently reach optimal solutions. The theoretical analysis of the proposed algorithm is presented on six RNN architectures, including LSTM and GRU, and its performance is empirically tested on ten time-series prediction applications. Opt-PR-ELM is shown to reach up to 461 times speedup over its sequential counterpart and to require up to 20x less time to train than parallel BPTT. Such high speedups over new generation CPUs are extremely crucial in real-time applications and IoT environments.
20

Hermans, Michiel, Joni Dambre, and Peter Bienstman. "Optoelectronic Systems Trained With Backpropagation Through Time." IEEE Transactions on Neural Networks and Learning Systems 26, no. 7 (2015): 1545–50. http://dx.doi.org/10.1109/tnnls.2014.2344002.

21

Williams, Ronald J., and Jing Peng. "An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories." Neural Computation 2, no. 4 (1990): 490–501. http://dx.doi.org/10.1162/neco.1990.2.4.490.

Abstract:
A novel variant of the familiar backpropagation-through-time approach to training recurrent networks is described. This algorithm is intended to be used on arbitrary recurrent networks that run continually without ever being reset to an initial state, and it is specifically designed for computationally efficient computer implementation. This algorithm can be viewed as a cross between epochwise backpropagation through time, which is not appropriate for continually running networks, and the widely used on-line gradient approximation technique of truncated backpropagation through time.
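For readers who have not used the truncated variant mentioned above, here is a minimal, generic PyTorch sketch of truncated BPTT for a continually running network; it illustrates the general idea (carry the state forward, cut the gradient every k steps) rather than the specific hybrid algorithm proposed in the paper.

```python
# Generic truncated BPTT: the hidden state is carried across segments so the network
# "runs continually", but the computational graph is detached every k steps, so each
# update backpropagates through at most k time steps.
import torch
import torch.nn as nn

k = 25                                                   # truncation length
rnn = nn.RNN(input_size=4, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 1)
params = list(rnn.parameters()) + list(readout.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

stream_x = torch.randn(1, 10_000, 4)                     # placeholder for an endless input stream
stream_y = torch.randn(1, 10_000, 1)
h = torch.zeros(1, 1, 32)

for start in range(0, stream_x.shape[1] - k, k):
    x = stream_x[:, start:start + k]
    y = stream_y[:, start:start + k]
    out, h = rnn(x, h)
    loss = nn.functional.mse_loss(readout(out), y)
    opt.zero_grad()
    loss.backward()                                      # gradients flow back only within this segment
    opt.step()
    h = h.detach()                                       # keep the state, drop its gradient history
```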
22

Koprinkova-Hristova, Petia. "Backpropagation Through Time Training of a Neuro-Fuzzy Controller." International Journal of Neural Systems 20, no. 5 (2010): 421–28. http://dx.doi.org/10.1142/s0129065710002504.

Abstract:
The paper considers gradient training of a fuzzy logic controller (FLC) presented in the form of a neural network structure. The proposed neuro-fuzzy structure allows the linguistic meaning of the fuzzy rule base to be kept. Its main adjustable parameters are the shape-determining parameters of the fuzzy values of the linguistic variables, as well as those of the parameterized T-norm used as the intersection operator. The backpropagation through time method was applied to train the neuro-FLC for a highly non-linear plant (a biotechnological process). The obtained results are discussed with respect to the rationality of the adjustable parameters. Conclusions are also drawn with respect to the appropriate intersection operations.
23

Stroeve, Sybert. "An analysis of learning control by backpropagation through time." Neural Networks 11, no. 4 (1998): 709–21. http://dx.doi.org/10.1016/s0893-6080(98)00011-2.

24

Finkbeiner, Jan, Thomas Gmeinder, Mark Pupilli, Alexander Titterton, and Emre Neftci. "Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 11996–2005. http://dx.doi.org/10.1609/aaai.v38i11.29087.

Abstract:
Current AI training infrastructure is dominated by single instruction multiple data (SIMD) and systolic array architectures, such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs), that excel at accelerating parallel workloads and dense vector-matrix multiplications. Potentially more efficient neural network models utilizing sparsity and recurrence cannot leverage the full power of SIMD processors and are thus at a severe disadvantage compared to today's prominent parallel architectures like Transformers and CNNs, thereby hindering the path towards more sustainable AI. To overcome this limitation, we explore sparse and recurrent model training on a massively parallel multiple instruction multiple data (MIMD) architecture with distributed local memory. We implement a training routine based on backpropagation through time (BPTT) for the brain-inspired class of Spiking Neural Networks (SNNs) that feature binary sparse activations. We observe a massive advantage in using sparse activation tensors with a MIMD processor, the Intelligence Processing Unit (IPU), compared to GPUs. On training workloads, our results demonstrate 5-10x throughput gains compared to A100 GPUs and up to 38x gains for higher levels of activation sparsity, without a significant slowdown in training convergence or reduction in final model performance. Furthermore, our results show highly promising trends for both single and multi IPU configurations as we scale up to larger model sizes. Our work paves the way towards more efficient, non-standard models via AI training hardware beyond GPUs, and competitive large scale SNN models.
25

Hoang, Thi Phuong. "Research on Techniques to Enhance DDoS Attack Prevention Using Cumulative Sum and Backpropagation Algorithms." Vinh University Journal of Science 53, no. 4A (2024): 69–78. https://doi.org/10.56824/vujs.2024a091a.

Abstract:
This paper focuses on enhancing DDoS attack prevention capabilities through the combination of the Cumulative Sum (CUSUM) algorithm and the Backpropagation method, aiming to detect attack indicators early and accurately. The CUSUM algorithm is used to monitor and analyze network traffic over time, identifying unusual fluctuations in traffic without requiring prior knowledge of attack types. Meanwhile, the Backpropagation method is applied to optimize neural networks, enabling the system to learn from previous traffic data and distinguish clearly between legitimate traffic and attack traffic. Compared to previous research methods, this combined approach offers several significant advantages. First, CUSUM provides high-accuracy attack detection, allowing the system to respond promptly. Second, Backpropagation enables the system to improve automatically over time, reducing false alarm rates and enhancing prevention effectiveness. Finally, the feasibility and effectiveness of the solution are demonstrated through real-world experiments, showing improved detection rates and faster response times compared to traditional methods.
Keywords: Network attack; CUSUM algorithm; Backpropagation algorithm; Anti-spoofing; DDoS attack
26

Nuseirat, A. M. Al-Fahed, and R. Abu-Zitar. "Hybrid Trajectory Planning Using Reinforcement and Backpropagation Through Time Techniques." Cybernetics and Systems 34, no. 8 (2003): 747–65. http://dx.doi.org/10.1080/716100275.

27

Ozbay, Serkan. "Modified Backpropagation Algorithm with Multiplicative Calculus in Neural Networks." Elektronika ir Elektrotechnika 29, no. 3 (2023): 55–61. http://dx.doi.org/10.5755/j02.eie.34105.

Abstract:
Backpropagation is one of the most widely used algorithms for training feedforward deep neural networks. The algorithm requires a differentiable activation function, and it computes the gradient proceeding backwards through the feedforward deep neural network from the last layer through to the first layer. In order to calculate the gradient at a specific layer, the gradients of all layers are combined via the chain rule of calculus. One of the biggest disadvantages of backpropagation is that it requires a large amount of training time. To overcome this issue, this paper proposes a modified backpropagation algorithm with multiplicative calculus. Multiplicative calculus provides an alternative to the classical calculus and defines new kinds of derivative and integral forms in multiplicative form rather than addition and subtraction forms. The performance analyses are discussed in various case studies, and the results are compared with the classical backpropagation algorithm. It is found that the proposed modified backpropagation algorithm converges to the solution in less time and thus provides fast training in the given case studies. It is also shown that the proposed algorithm avoids the local minima problem.
28

Werbos, P. J. "Backpropagation through time: what it does and how to do it." Proceedings of the IEEE 78, no. 10 (1990): 1550–60. http://dx.doi.org/10.1109/5.58337.

29

Vihikan, Wayan Oger, I. Ketut Gede Darma Putra, and I. Putu Arya Dharmaadi. "Foreign Tourist Arrivals Forecasting Using Recurrent Neural Network Backpropagation through Time." TELKOMNIKA (Telecommunication Computing Electronics and Control) 15, no. 3 (2017): 1257. http://dx.doi.org/10.12928/telkomnika.v15i3.5993.

30

Bersini, H., and V. Gorrini. "A simplification of the backpropagation-through-time algorithm for optimal neurocontrol." IEEE Transactions on Neural Networks 8, no. 2 (1997): 437–41. http://dx.doi.org/10.1109/72.557698.

31

Perfetti, R. "Virtual template expansion in cellular neural networks using backpropagation through time." Electronics Letters 33, no. 4 (1997): 307. http://dx.doi.org/10.1049/el:19970204.

32

Vihikan, Wayan Oger, I. Ketut Gede Darma Putra, and I. Putu Arya Dharmaadi. "Foreign Tourist Arrivals Forecasting Using Recurrent Neural Network Backpropagation through Time." TELKOMNIKA (Telecommunication Computing Electronics and Control) 15, no. 3 (2017): 1257–64. https://doi.org/10.12928/TELKOMNIKA.v15i3.5993.

Abstract:
Bali, as an icon of tourism in Indonesia, has been visited by many foreign tourists. Thus, Bali is one of the provinces that contribute huge foreign exchange earnings to Indonesia. However, this potential could be threatened by the effectuation of the ASEAN Economic Community, as it causes stricter competition among ASEAN countries, including in the tourism field. To resolve this issue, the Balinese government needs to forecast foreign tourist arrivals to Bali in order to help it strategize its tourism plan. However, it does not have an appropriate method to do this. To overcome this problem, this study contributes a forecasting method using a Recurrent Neural Network trained with Backpropagation Through Time. We also compare this method with the Single Moving Average method. The results showed that the proposed method outperformed the Single Moving Average in the 10 countries tested, with 80%, 70%, and 70% better MSE results for 1-, 3-, and 6-month-ahead forecasts, respectively.
33

Wan, Eric A., and Françoise Beaufays. "Diagrammatic Derivation of Gradient Algorithms for Neural Networks." Neural Computation 8, no. 1 (1996): 182–201. http://dx.doi.org/10.1162/neco.1996.8.1.182.

Abstract:
Deriving gradient algorithms for time-dependent neural network structures typically requires numerous chain rule expansions, diligent bookkeeping, and careful manipulation of terms. In this paper, we show how to derive such algorithms via a set of simple block diagram manipulation rules. The approach provides a common framework to derive popular algorithms including backpropagation and backpropagation-through-time without a single chain rule expansion. Additional examples are provided for a variety of complicated architectures to illustrate both the generality and the simplicity of the approach.
34

Esmersoy, Cengiz, and Douglas Miller. "Backprojection versus backpropagation in multidimensional linearized inversion." GEOPHYSICS 54, no. 7 (1989): 921–26. http://dx.doi.org/10.1190/1.1442722.

Abstract:
Seismic migration can be viewed as either backprojection (diffraction‐stack) or backpropagation (wave‐field extrapolation) (e.g., Gazdag and Sguazzero, 1984). Migration by backprojection was the view supporting the first digital methods—the diffraction and common tangent stacks of what is now called classical or statistical migration (Lindsey and Hermann, 1970; Rockwell, 1971; Schneider, 1971; Johnson and French, 1982). In this approach, each data point is associated with an isochron surface passing through the scattering object. Data values are then interpreted as projections of reflectivity over the associated isochrons. Dually, each image point is associated with a reflection‐time surface passing through the data traces. The migrated image at that point is obtained as a weighted stack of data lying on the reflection‐time surface (Rockwell, 1971; Schneider, 1971). This amounts to a weighted backprojection in which each data point contributes to image points lying on its associated isochron.
35

Wisesty, Untari Novia, Febryanti Sthevanie, and Rita Rismala. "Momentum Backpropagation Optimization for Cancer Detection Based on DNA Microarray Data." International Journal of Artificial Intelligence Research 4, no. 2 (2021): 127. http://dx.doi.org/10.29099/ijair.v4i2.188.

Abstract:
Early detection of cancer can increase the success of treatment in patients with cancer. In the latest research, cancer can be detected through DNA Microarrays. Someone who suffers from cancer will experience changes in the value of certain gene expression. In previous studies, the Genetic Algorithm as a feature selection method and the Momentum Backpropagation algorithm as a classification method provide a fairly high classification performance, but the Momentum Backpropagation algorithm still has a low convergence rate because the learning rate used is still static. The low convergence rate makes the training process need more time to converge. Therefore, in this research an optimization of the Momentum Backpropagation algorithm is done by adding an adaptive learning rate scheme. The proposed scheme is proven to reduce the number of epochs needed in the training process from 390 epochs to 76 epochs compared to the Momentum Backpropagation algorithm. The proposed scheme can gain high accuracy of 90.51% for Colon Tumor data, and 100% for Leukemia, Lung Cancer, and Ovarian Cancer data.
36

Kusnadi, Adhi, Ivranza Zuhdi Pane, and Fenina Adline Twince Tobing. "Enhancing facial recognition accuracy through feature extractions and artificial neural networks." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 2 (2025): 1056. https://doi.org/10.11591/ijai.v14.i2.pp1056-1066.

Abstract:
Facial recognition is a biometric system used to identify individuals through faces. Although this technology has many advantages, it still faces several challenges. One of the main challenges is that the level of accuracy has yet to reach its maximum potential. This research aims to improve facial recognition performance by applying the discrete cosine transform (DCT) and Gaussian mixture model (GMM), which are then trained with backward propagation of errors (backpropagation) and convolutional neural networks (CNN). The research results show low DCT and GMM feature extraction accuracy with backpropagation of 4.88%. However, the combination of DCT, GMM, and CNN feature extraction produces an accuracy of up to 98.2% and a training time of 360 seconds on the Olivetti Research Laboratory (ORL) dataset, an accuracy of 98.9% and a training time of 1210 seconds on the Yale dataset, and 100% accuracy and training time 1749 seconds on the Japanese female facial expression (JAFFE) dataset. This improvement is due to the combination of DCT, GMM, and CNN's ability to remove noise and study images accurately. This research is expected to significantly contribute to overcoming accuracy challenges and increasing the flexibility of facial recognition systems in various practical situations, as well as the potential to improve security and reliability in security and biometrics.
37

Kusnadi, Adhi, Ivranza Zuhdi Pane, and Fenina Adline Twince Tobing. "Enhancing facial recognition accuracy through feature extractions and artificial neural networks." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 2 (2025): 1056–66. https://doi.org/10.11591/ijai.v14.i2.pp1056-1066.

Abstract:
Facial recognition is a biometric system used to identify individuals through faces. Although this technology has many advantages, it still faces several challenges. One of the main challenges is that the level of accuracy has yet to reach its maximum potential. This research aims to improve facial recognition performance by applying the discrete cosine transform (DCT) and Gaussian mixture model (GMM), which are then trained with backward propagation of errors (backpropagation) and convolutional neural networks (CNN). The research results show low DCT and GMM feature extraction accuracy with backpropagation of 4.88%. However, the combination of DCT, GMM, and CNN feature extraction produces an accuracy of up to 98.2% and a training time of 360 seconds on the Olivetti Research Laboratory (ORL) dataset, an accuracy of 98.9% and a training time of 1210 seconds on the Yale dataset, and 100% accuracy and training time 1749 seconds on the Japanese female facial expression (JAFFE) dataset. This improvement is due to the combination of DCT, GMM, and CNN's ability to remove noise and study images accurately. This research is expected to significantly contribute to overcoming accuracy challenges and increasing the flexibility of facial recognition systems in various practical situations, as well as the potential to improve security and reliability in security and biometrics.
38

Fairbank, Michael, Eduardo Alonso, and Danil Prokhorov. "An Equivalence Between Adaptive Dynamic Programming With a Critic and Backpropagation Through Time." IEEE Transactions on Neural Networks and Learning Systems 24, no. 12 (2013): 2088–100. http://dx.doi.org/10.1109/tnnls.2013.2271778.

39

Beigy, Hamid, and Mohammad R. Meybodi. "Backpropagation Algorithm Adaptation Parameters Using Learning Automata." International Journal of Neural Systems 11, no. 3 (2001): 219–28. http://dx.doi.org/10.1142/s0129065701000655.

Abstract:
Despite the many successful applications of backpropagation for training multi-layer neural networks, it has many drawbacks. For complex problems it may require a long time to train the networks, and it may not train at all. Long training times can be the result of non-optimal parameters, and it is not easy to choose appropriate values of the parameters for a particular problem. In this paper, by interconnecting fixed-structure learning automata (FSLA) with feedforward neural networks, we apply a learning automata (LA) scheme for adjusting these parameters based on observation of the random response of the neural networks. The main motivation for using learning automata as an adaptation algorithm is their capability for global optimization when dealing with a multi-modal surface. The feasibility of the proposed method is shown through simulations on three learning problems: exclusive-or, the encoding problem, and digit recognition. The simulation results show that adapting these parameters using this method not only increases the convergence rate of learning but also increases the likelihood of escaping from local minima.
40

Samal, Prasanta, K. Sunil, Imran Jamadar, and R. Srinidhi. "AI-Enhanced Fault Diagnosis in Rolling Element Bearings: A Comprehensive Vibration Analysis Approach." FME Transactions 52, no. 3 (2024): 450–60. http://dx.doi.org/10.5937/fme2403450s.

Abstract:
This research presents a comprehensive approach for bearing fault diagnosis using artificial intelligence (AI), particularly through the application of artificial neural networks (ANNs). By integrating these networks into vibration analysis, the approach aims to meet the critical need for prompt fault detection. The methodology comprises three key steps: vibration signal acquisition, feature extraction, and fault classification. Experiments were conducted to acquire vibration signals for the test bearings on a machinery fault simulator. Six time-domain features were extracted using MATLAB, creating a comprehensive dataset for training the ANN models with three algorithms: Levenberg-Marquardt backpropagation (LMBP), scaled conjugate gradient backpropagation (SCGBP), and Bayesian regularization backpropagation (BRBP). The BRBP algorithm achieved the highest correct classification rate (97.2%), followed by LMBP (90%) and SCGBP (83.6%). To evaluate their efficacy in bearing fault classification, these three networks were simulated, revealing that BRBP could predict all four classes of bearings with zero errors.
41

Nurdiawan, Odi, Fathurrohman Fathurrohman, and Ahmad Faqih. "Optimisasi Model Backpropagation untuk Meningkatkan Deteksi Kejang Epilepsi pada Sinyal Electroencephalogram." INFORMATION SYSTEM FOR EDUCATORS AND PROFESSIONALS : Journal of Information System 9, no. 2 (2024): 151. https://doi.org/10.51211/isbi.v9i2.3187.

Abstract:
Epilepsy is a chronic neurological disorder characterized by recurrent seizures caused by abnormal electrical activity in the brain. Fast and accurate seizure detection is crucial to support medical intervention and improve patients' quality of life. Currently, Electroencephalogram (EEG) signals are widely used to diagnose epilepsy as they record brain electrical activity in real-time. However, manual analysis of EEG signals requires time and precision, necessitating a more effective automated solution. This study aims to optimize the Backpropagation model for detecting epileptic seizures using EEG data. The research involved collaboration between Telkom University, Sumber Waras Hospital, and the University of Bonn. The EEG data collected was processed through Discrete Cosine Transform (DCT) to extract important features before being used to train the artificial neural network (ANN) model. The model was trained and tested using varying numbers of epochs to measure its accuracy. The results show that the Backpropagation model achieved optimal accuracy of 91.15% at 100 epochs and increased to 93.05% at 200 epochs. Although accuracy improved with more epochs, the longer computational time posed a risk of overfitting. This research demonstrates that the Backpropagation algorithm can be optimized to detect epileptic seizures accurately and efficiently. The implication for Sumber Waras Hospital is that this model can be implemented in EEG monitoring systems to detect seizures in real-time, supporting faster medical intervention and reducing reliance on manual analysis. Thus, this study contributes to providing a more efficient diagnostic solution and enhancing healthcare services for epilepsy patients.
42

Novianta, Muhammad Andang, Syafrudin, Budi Warsito, and Siti Rachmawati. "Monitoring river water quality through predictive modeling using artificial neural networks backpropagation." AIMS Environmental Science 11, no. 4 (2024): 649–64. http://dx.doi.org/10.3934/environsci.2024032.

Abstract:
Predicting river water quality in the Special Region of Yogyakarta (DIY) is crucial. In this research, we modeled a river water quality prediction system using the artificial neural network (ANN) backpropagation method. Backpropagation is one of the developments of the multilayer perceptron (MLP) network, which can reduce the level of prediction error by adjusting the weights based on the difference between the output and the desired target. Water quality parameters included biochemical oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS), dissolved oxygen (DO), total phosphate, fecal coliforms, and total coliforms. The research object was the upstream, downstream, and middle parts of the Oya River. The data source was secondary data from the DIY Environment and Forestry Service. Data were in the form of time series data for 2013–2023. Descriptive data results showed that the water quality of the Oya River in 2020–2023 was better than in previous years. However, increasing community and industrial activities can reduce water quality. This was concluded based on the prediction results of the ANN backpropagation method with a hidden layer number of 4. The prediction results for period 3 in 2023 and period 1 in 2024 are that 1) the concentrations of BOD, fecal coliforms, and total coliforms will increase and exceed quality standards, 2) COD and TSS concentrations will increase but will still be below quality standards, and 3) DO and total phosphate concentrations will remain constant and still on the threshold of quality standards. The possibility of several water quality parameters increasing above the quality standards remains, so the potential for contamination of the Oya River is still high. Therefore, early prevention of river water pollution is necessary.
43

Gao, Yanbo, Chuankun Li, Shuai Li, Xun Cai, Mao Ye, and Hui Yuan. "Variable Rate Independently Recurrent Neural Network (IndRNN) for Action Recognition." Applied Sciences 12, no. 7 (2022): 3281. http://dx.doi.org/10.3390/app12073281.

Abstract:
Recurrent neural networks (RNNs) have been widely used to solve sequence problems due to their capability of modeling temporal dependency. Despite the rich varieties of RNN models proposed in the literature, the problem of different sampling rates or performing speeds in sequence tasks has not been explicitly considered in the network and the corresponding training and testing processes. This paper addresses the problem of different sampling rates or performing speeds in the skeleton-based action recognition with RNNs. Specifically, the recently proposed independently recurrent neural network (IndRNN) is used as the RNN network due to its well-behaved and easily regulated gradient backpropagation through time. Samples are extracted with variable sampling rates and thus of different lengths, then processed by IndRNN with different time steps. In order to accommodate the differences in terms of gradients introduced by the backpropagation through time under variable time steps, a learning rate adjustment method is further proposed in the paper. Different learning rate adjustment factors are obtained for different layers by analyzing the gradient behavior under IndRNN. Experiments on skeleton-based action recognition are conducted to verify its effectiveness, and the results show that the proposed variable rate IndRNN network can significantly improve the performance over the RNN models under the conventional training strategies.
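For context, the IndRNN recurrence the abstract builds on can be sketched in a few lines; the PyTorch layer below (with an assumed recurrent-weight clamp tied to a nominal maximum sequence length) is an illustration, not the authors' variable-rate training code.

```python
# Sketch of an IndRNN layer: each hidden unit has its own scalar recurrent weight,
# h_t = relu(W x_t + u * h_{t-1} + b), which keeps gradient flow through time easy
# to regulate. Unrolling the loop and calling backward() performs BPTT.
import torch
import torch.nn as nn

class IndRNNLayer(nn.Module):
    def __init__(self, n_in, n_hidden, max_len=200):
        super().__init__()
        self.inp = nn.Linear(n_in, n_hidden)
        # Keeping |u| <= 2**(1/max_len) bounds the recurrent gain over max_len steps.
        self.u_bound = 2.0 ** (1.0 / max_len)
        self.u = nn.Parameter(torch.empty(n_hidden).uniform_(-self.u_bound, self.u_bound))

    def forward(self, x):                                 # x: (batch, time, n_in)
        u = self.u.clamp(-self.u_bound, self.u_bound)     # recurrent weights stay bounded
        h = torch.zeros(x.shape[0], u.shape[0])
        outputs = []
        for t in range(x.shape[1]):
            h = torch.relu(self.inp(x[:, t]) + u * h)     # elementwise recurrence
            outputs.append(h)
        return torch.stack(outputs, dim=1)                # (batch, time, n_hidden)

# Variable sampling rates simply show up as different numbers of time steps per clip.
layer = IndRNNLayer(n_in=75, n_hidden=128)
clip = torch.randn(4, 60, 75)                             # e.g. 60 frames of 25 joints x 3 coords
features = layer(clip)[:, -1]                             # last-step features for a classifier
```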
44

Simak, Vojtech, Jan Andel, Dusan Nemec, and Juraj Kekelak. "Online Calibration of Inertial Sensors Based on Error Backpropagation." Sensors 24, no. 23 (2024): 7525. http://dx.doi.org/10.3390/s24237525.

Abstract:
Global satellite navigation systems (GNSSs) are the most-used technology for the localization of vehicles in the outdoor environment, but in the case of a densely built-up area or during passage through a tunnel, the satellite signal is not available or has poor quality. Inertial navigation systems (INSs) allow localization dead reckoning, but they have an integration error that grows over time. Inexpensive inertial measurement units (IMUs) are subject to thermal-dependent error and must be calibrated almost continuously. This article proposes a novel method of online (continuous) calibration of inertial sensors with the aid of the data from the GNSS receiver during the vehicle’s route. We performed data fusion using an extended Kalman filter (EKF) and calibrated the input sensors through error backpropagation. The algorithm thus calibrates the INS sensors while the GNSS receiver signal is good, and after a GNSS failure, for example in tunnels, the position is predicted only by low-cost inertial sensors. Such an approach significantly improved the localization precision in comparison with offline calibrated inertial localization with the same sensors.
45

Mazumdar, J., and R. G. Harley. "Recurrent Neural Networks Trained With Backpropagation Through Time Algorithm to Estimate Nonlinear Load Harmonic Currents." IEEE Transactions on Industrial Electronics 55, no. 9 (2008): 3484–91. http://dx.doi.org/10.1109/tie.2008.925315.

46

Chung, F. L., and T. Lee. "A Node Pruning Algorithm for Backpropagation Networks." International Journal of Neural Systems 3, no. 3 (1992): 301–14. http://dx.doi.org/10.1142/s0129065792000231.

Abstract:
Backpropagation (BP) networks are a class of artificial neural network model that has been widely used in many areas of interest. One difficulty in adopting this model is the need to predetermine a suitable network size, particularly, the number of hidden nodes. A common approach is to start with an oversized network and then the size is reduced by eliminating unnecessary nodes and links. In this paper, starting from the viewpoint of pattern classification theory, a study on the characteristics of hidden nodes in oversized networks is reported. The study shows that four categories of excessive hidden nodes can be found in an oversized network. A new node pruning algorithm to attain appropriate size BP networks is then proposed by detecting and removing those excessive nodes. Moreover, the algorithm is extended to cater for the larger class of problems, i.e. real-to-real mapping. Unlike previous works, the concept of “excessiveness” advocated here has strong indications whether a node can be removed without impairing the performance of original network and hence the proposed algorithm is useful in obtaining a network that is optimized with both the network size and performance. The effectiveness of the proposed algorithm has been demonstrated through the N-bit parity problems and the experiment in predicting the chaotic time series.
APA, Harvard, Vancouver, ISO, and other styles
47

Muhammad Ariff Haikal Ridzwan, Nor Hanisah Baharudin, Tunku Muhammad Nizar Tunku Mansur, Rosnazri Ali, and Mohd Syahril Noor Shah. "Hybrid Conjugate Gradient Backpropagation of GCPV based DSTATCOM for Power Conditioning." Journal of Advanced Research in Applied Sciences and Engineering Technology 46, no. 2 (2024): 64–80. http://dx.doi.org/10.37934/araset.46.2.6480.

Full text
Abstract:
This paper studies the performance of a hybrid conjugate gradient backpropagation (HCGBP) grid-connected solar photovoltaic (GCPV) based DSTATCOM. It proposes a hybrid control algorithm combining instantaneous reactive power theory with a conjugate gradient backpropagation neural network for a grid-connected solar PV (GCPV) based DSTATCOM in a three-phase three-wire system. The fundamental weighted value of the active power components of the load currents, which is necessary for estimating the reference source currents, is extracted using a conjugate gradient backpropagation control algorithm. The proposed control algorithm reduces the total harmonic distortion (THD) of the line current to as low as 1.32%. It is shown that HCGBP has better efficiency and a faster response and is easy to implement. The steady-state performance of the three-phase GCPV-DSTATCOM under non-linear load has been analysed through simulation and hardware-in-the-loop (HIL) simulation based on a real-time DSP system using a Texas Instruments C2000 32-bit microcontroller in MATLAB/Simulink. Furthermore, the simulation results show that the THD of the line current at the PCC is reduced to less than 8%, in accordance with IEEE Standard 519-2014.
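As an illustrative sketch only, the snippet below shows how a reference source current can be formed from in-phase unit templates of the PCC voltages once the fundamental active-power weight of the load currents has been estimated (in the paper that estimation is done by the conjugate gradient backpropagation network). The function name, the single active-power term, and the PV-weight handling are assumptions; loss compensation and the full control loop are omitted.

```python
import numpy as np

def reference_source_currents(v_abc, w_active, w_pv):
    """Compute three-phase reference source currents for a shunt compensator
    from in-phase unit templates of the PCC voltages.

    v_abc    : (3,) instantaneous phase voltages at the PCC.
    w_active : fundamental active-power weight of the load currents
               (assumed to come from the trained estimator).
    w_pv     : equivalent current weight contributed by the PV array."""
    v_amp = np.sqrt(2.0 / 3.0 * np.sum(v_abc ** 2))  # amplitude of the balanced PCC voltage
    u_p = v_abc / v_amp                              # in-phase unit templates
    w_net = w_active - w_pv                          # grid supplies load demand minus PV share
    return w_net * u_p
```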
APA, Harvard, Vancouver, ISO, and other styles
48

Farooq, Muhammad Umar, Abdul Mannan Zafar, Warda Raheem, Muhammad Irfan Jalees, and Ashraf Aly Hassan. "Assessment of Algorithm Performance on Predicting Total Dissolved Solids Using Artificial Neural Network and Multiple Linear Regression for the Groundwater Data." Water 14, no. 13 (2022): 2002. http://dx.doi.org/10.3390/w14132002.

Full text
Abstract:
Estimating groundwater quality parameters through conventional laboratory measurements is time-consuming for megacities. There is a need to develop models that can help decision-makers make policies for sustainable groundwater reserves. The current study compared the efficiency of multiple linear regression (MLR) and artificial neural network (ANN) models in predicting the groundwater parameter total dissolved solids (TDS) for three sub-divisions in Lahore, Pakistan. The data for this study were collected every quarter of a year for six years. ANN was applied to investigate the feasibility of feedforward backpropagation neural networks with three training functions: T-BR (Bayesian regularization backpropagation), T-LM (Levenberg–Marquardt backpropagation), and T-SCG (scaled conjugate gradient backpropagation). Two activation functions, Logsig and Tanh, were used to analyze the performance of the training functions. The input parameters pH, electrical conductivity (EC), calcium (Ca²⁺), magnesium (Mg²⁺), chloride (Cl⁻), and sulfate (SO₄²⁻) were used to predict TDS as the output parameter. The TDS values computed by ANN and MLR were in close agreement with the respective measured values. Comparative analysis showed that for the City sub-division the TDS root mean square error (RMSE) and Pearson’s coefficient of correlation (r) were 2.9% and 0.981 for ANN and 4.5% and 0.978 for MLR, respectively. Similarly, for the Farrukhabad sub-division, RMSE and r were 4.9% and 0.952 for ANN and 5.5% and 0.941 for MLR. For the Shahadra sub-division, RMSE was 10.8% and r was 0.869 for ANN, while RMSE was 11.3% and r was 0.860 for MLR. The results show that the ANN model produced smaller errors than MLR. Therefore, ANN can be employed successfully as a groundwater quality prediction tool for TDS assessment.
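For orientation, a minimal Python sketch of the modelling setup described above is given here, using scikit-learn’s MLPRegressor as a stand-in for the MATLAB training functions (Bayesian regularization and Levenberg–Marquardt are not available in scikit-learn, so L-BFGS is used instead); the variable names, network size, and train/test split are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

def fit_tds_model(X, y):
    """X columns: pH, EC, Ca2+, Mg2+, Cl-, SO4^2-; y: measured TDS.
    Returns the fitted model plus RMSE and Pearson's r on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="lbfgs", max_iter=2000, random_state=0),
    )
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    r = np.corrcoef(y_te, pred)[0, 1]   # Pearson's correlation coefficient
    return model, rmse, r
```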
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Yanshuang, Caixia Tian, Baohua Guo, Meixia Wang, Zhezhe Zhang, and Kgaugelo Morobeni. "Multi-Factor Highway Freight Volume Prediction Based on Backpropagation Neural Network." Applied Sciences 14, no. 13 (2024): 5948. http://dx.doi.org/10.3390/app14135948.

Full text
Abstract:
Traditional single-factor time series prediction can no longer meet practical forecasting needs, and the influence of multiple variables on the prediction results must be considered comprehensively. We therefore use MATLAB R2022a to perform multi-factor prediction of highway freight volume. Based on historical data on China’s highway freight volume, a BP neural network prediction model is established, and the model is coded and computed in the MATLAB software environment. Through repeated training on the data, the predicted values are obtained. The results show that the prediction accuracy of the multi-factor BP neural network model is very high. An example analysis of China’s highway freight volume shows that the original data are fitted accurately, demonstrating the validity of the BP-neural-network-based highway freight volume prediction model. Predictions of freight volume can inform investment in infrastructure construction, promoting the development of the transportation industry and socioeconomic progress.
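As a generic illustration of the repeated backpropagation training the abstract refers to (the paper’s own MATLAB model and architecture are not restated here), one gradient step for a single-hidden-layer network mapping several influencing factors to freight volume might look as follows; all names and the tanh/MSE choices are assumptions.

```python
import numpy as np

def bp_train_step(X, y, W1, b1, W2, b2, lr=0.01):
    """One backpropagation step for a one-hidden-layer regression network.

    X : (n, n_factors) normalised input factors; y : (n, 1) freight volume.
    W1: (n_factors, h), b1: (h,), W2: (h, 1), b2: (1,)."""
    # Forward pass
    H = np.tanh(X @ W1 + b1)          # hidden layer
    Y_hat = H @ W2 + b2               # linear output
    # Backward pass (mean squared error loss)
    n = X.shape[0]
    dY = (Y_hat - y) / n
    dW2 = H.T @ dY
    db2 = dY.sum(axis=0)
    dH = (dY @ W2.T) * (1.0 - H ** 2)  # tanh derivative
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)
    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2
```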
APA, Harvard, Vancouver, ISO, and other styles
50

García Acevedo, Franklin, Juan Rojas Serrano, Alejandro Vásquez Vega, Diego Parra Peñaranda, and Erney Castro Becerra. "Estimating missing data in historic series of global radiation through neural network algorithms." Sistemas y Telemática 14, no. 37 (2016): 9–22. http://dx.doi.org/10.18046/syt.v14i37.2239.

Full text
Abstract:
When processing time series of meteorological data, the records are often incomplete over some time intervals; the issue is commonly addressed using the autoregressive integrated moving average (ARIMA) method or regression analysis (interpolation), both of which have limitations under particular conditions. This paper presents the results of an investigation aimed at solving the problem using neural networks. An analysis of a global radiation time series obtained at Francisco de Paula Santander University (UFPS) is presented, based on the data recorded by the weather station attached to the Department of Fluids and Thermals. The ten-year study series comprises 125,658 records of temperature, radiation, and energy, with 9.98% of the data missing; these records were duly cleaned and completed by a neural network using backpropagation algorithms in the mathematical software MATLAB.
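A minimal sketch of the gap-filling workflow, assuming the series is held in a pandas DataFrame with columns 'temperature', 'energy', and 'radiation' (the column names and the scikit-learn stand-in for the MATLAB backpropagation training are assumptions):

```python
import pandas as pd  # df below is assumed to be a pandas DataFrame
from sklearn.neural_network import MLPRegressor

def fill_missing_radiation(df):
    """Train a backpropagation-style MLP on complete rows and use it to
    predict the missing radiation values from the co-recorded variables."""
    complete = df.dropna(subset=["radiation"])
    missing = df[df["radiation"].isna()]
    if missing.empty:
        return df.copy()
    X_cols = ["temperature", "energy"]
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    model.fit(complete[X_cols].values, complete["radiation"].values)
    out = df.copy()
    out.loc[missing.index, "radiation"] = model.predict(missing[X_cols].values)
    return out
```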
APA, Harvard, Vancouver, ISO, and other styles