A selection of scholarly literature on the topic "Recurrent neural networks BLSTM"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Recurrent neural networks BLSTM".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the online abstract of the work, if the corresponding parameters are available in its metadata.

Journal articles on the topic "Recurrent neural networks BLSTM":

1

Guo, Yanbu, Bingyi Wang, Weihua Li, and Bei Yang. "Protein secondary structure prediction improved by recurrent neural networks integrated with two-dimensional convolutional neural networks." Journal of Bioinformatics and Computational Biology 16, no. 05 (October 2018): 1850021. http://dx.doi.org/10.1142/s021972001850021x.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Protein secondary structure prediction (PSSP) is an important research field in bioinformatics. The representation of protein sequence features can be treated as a matrix, which includes the amino-acid residue (time-step) dimension and the feature vector dimension. Common approaches to predicting secondary structure focus only on the amino-acid residue dimension. However, the feature vector dimension may also contain useful information for PSSP. To integrate the information on both dimensions of the matrix, we propose a hybrid deep learning framework, the two-dimensional convolutional bidirectional recurrent neural network (2C-BRNN), for improving the accuracy of 8-class secondary structure prediction. The proposed hybrid framework extracts the discriminative local interactions between amino-acid residues with two-dimensional convolutional neural networks (2DCNNs), and then captures long-range interactions between amino-acid residues with bidirectional gated recurrent units (BGRUs) or bidirectional long short-term memory (BLSTM). Specifically, our proposed 2C-BRNN framework consists of four models: 2DConv-BGRUs, 2DCNN-BGRUs, 2DConv-BLSTM and 2DCNN-BLSTM. Among these four models, the 2DConv- models contain only two-dimensional (2D) convolution operations, whereas the 2DCNN- models contain both 2D convolutional and pooling operations. Experiments are conducted on four public datasets. The experimental results show that our proposed 2DConv-BLSTM model performs significantly better than the benchmark models. Furthermore, the experiments also demonstrate that the proposed models can extract more meaningful features from the matrix representation of proteins, and that the feature vector dimension is also useful for PSSP. The code and datasets of our proposed methods are available at https://github.com/guoyanb/JBCB2018/.
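A minimal sketch of the 2DConv-BLSTM idea described above, assuming illustrative dimensions rather than the authors' released configuration: 2D convolutions scan the (residue, feature) matrix jointly, the feature axis is then flattened per residue, and a BLSTM with a per-residue softmax emits the 8-class labels.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

seq_len, n_feats, n_classes = 700, 51, 8  # hypothetical dimensions

inputs = layers.Input(shape=(seq_len, n_feats, 1))
# 2D convolutions see the residue and the feature-vector dimension jointly
x = layers.Conv2D(32, (7, 7), padding="same", activation="relu")(inputs)
x = layers.Conv2D(32, (7, 7), padding="same", activation="relu")(x)
# flatten the feature axis so each residue keeps a single vector
x = layers.Reshape((seq_len, n_feats * 32))(x)
# the BLSTM captures long-range interactions between residues
x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
outputs = layers.TimeDistributed(layers.Dense(n_classes, activation="softmax"))(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```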
2

Zhong, Cheng, Zhonglian Jiang, Xiumin Chu, and Lei Liu. "Inland Ship Trajectory Restoration by Recurrent Neural Network." Journal of Navigation 72, no. 06 (May 17, 2019): 1359–77. http://dx.doi.org/10.1017/s0373463319000316.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The quality of Automatic Identification System (AIS) data is of fundamental importance for maritime situational awareness and navigation risk assessment. To improve operational efficiency, a deep learning method based on Bi-directional Long Short-Term Memory Recurrent Neural Networks (BLSTM-RNNs) is proposed and applied in AIS trajectory data restoration. Case studies have been conducted in two distinct reaches of the Yangtze River and the capability of the proposed method has been evaluated. Comparisons have been made between the BLSTM-RNNs-based method and the linear method and classic Artificial Neural Networks. Satisfactory results have been obtained by all methods in straight waterways while the BLSTM-RNNs-based method is superior in meandering waterways. Owing to the bi-directional prediction nature of the proposed method, ship trajectory restoration is favourable for complicated geometry and multiple missing points cases. The residual error of the proposed model is computed through Euclidean distance which decreases to an order of 10 m. It is considered that the present study could provide an alternative method for improving AIS data quality, thus ensuring its completeness and reliability.
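A minimal sketch of how such a BLSTM restoration model could be set up, not the paper's implementation; the window length and the (lat, lon, speed, course) feature layout are assumptions:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

window, n_feats = 10, 4  # 10 surrounding AIS fixes, 4 kinematic features each

model = models.Sequential([
    layers.Input(shape=(window, n_feats)),
    # reads the fixes before and after the gap in both directions
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(32, activation="relu"),
    layers.Dense(2),  # restored (lat, lon) of the missing point
])
model.compile(optimizer="adam", loss="mse")

# toy data: X holds windows around a gap, y the true missing position
X = np.random.rand(256, window, n_feats).astype("float32")
y = np.random.rand(256, 2).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```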
3

Kadari, Rekia, Yu Zhang, Weinan Zhang, and Ting Liu. "CCG supertagging with bidirectional long short-term memory networks." Natural Language Engineering 24, no. 1 (September 4, 2017): 77–90. http://dx.doi.org/10.1017/s1351324917000250.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Neural network-based approaches have recently produced good performance on natural language tasks such as supertagging. In the supertagging task, a supertag (lexical category) is assigned to each word in an input sequence. Combinatory Categorial Grammar supertagging is a more challenging problem than other sequence-tagging problems, such as part-of-speech (POS) tagging and named entity recognition, due to the large number of lexical categories. Specifically, the simple Recurrent Neural Network (RNN) has been shown to significantly outperform the previous state-of-the-art feed-forward neural networks. On the other hand, it is well known that recurrent networks fail to learn long dependencies. In this paper, we introduce a new neural network architecture based on backward and Bidirectional Long Short-Term Memory (BLSTM) networks, which can memorize information over long dependencies and benefit from both past and future information. State-of-the-art methods focus on previous information, whereas BLSTM has access to information in both the previous and future directions. Our main findings are that bidirectional networks outperform unidirectional ones, and that Long Short-Term Memory (LSTM) networks are more precise and successful than both unidirectional and bidirectional standard RNNs. Experimental results reveal the effectiveness of our proposed method on both in-domain and out-of-domain datasets, showing improvements of about 1.2 per cent over the standard RNN.
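For illustration, a BLSTM supertagger of the general kind studied here can be sketched as follows (a hedged sketch, not the authors' code; the vocabulary size, tagset size, and layer widths are placeholders):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, n_supertags, max_len = 50_000, 425, 60  # illustrative sizes

model = models.Sequential([
    layers.Input(shape=(max_len,), dtype="int32"),
    layers.Embedding(vocab_size, 128, mask_zero=True),
    # every position sees both its left and right context
    layers.Bidirectional(layers.LSTM(256, return_sequences=True)),
    # one softmax over the lexical categories per word
    layers.TimeDistributed(layers.Dense(n_supertags, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```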
4

Shchetinin, E. Yu. "EMOTIONS RECOGNITION IN HUMAN SPEECH USING DEEP NEURAL NETWORKS." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 199 (January 2021): 44–51. http://dx.doi.org/10.14489/vkit.2021.01.pp.044-051.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and the recognition of emotions in speech (RER) is the most demanded part of them. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. In this paper, computer studies of the RAVDESS database containing emotional human speech are carried out. RAVDESS is a data set containing 7356 files. The recordings contain the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions divided into male and female) for a total of 1440 samples (speech only). To train machine learning algorithms and deep neural networks to recognize emotions, the existing audio recordings must be pre-processed in such a way as to extract the main characteristic features of certain emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and the characteristics of the frequency spectrum of the audio recordings. Computer studies of various neural network models for emotion recognition are carried out on the data described above. In addition, machine learning algorithms were used for comparative analysis. Thus, the following models were trained during the experiments: logistic regression (LR), a classifier based on the support vector machine (SVM), decision tree (DT), random forest (RF), gradient boosting over trees (XGBoost), the convolutional neural network CNN, the recurrent neural network RNN (ResNet18), and an ensemble of convolutional and recurrent networks, Stacked CNN-RNN. The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms used. Of the neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
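The feature-extraction step described above can be sketched with librosa; the file path follows the RAVDESS naming convention but is hypothetical, and the parameter choices are assumptions:

```python
import numpy as np
import librosa

def extract_features(path, n_mfcc=40):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)        # (12, frames)
    # average each coefficient over time to get one fixed-length vector
    return np.concatenate([mfcc.mean(axis=1), chroma.mean(axis=1)])

# hypothetical RAVDESS file path
features = extract_features("ravdess/Actor_01/03-01-05-01-01-01-01.wav")
print(features.shape)  # (52,)
```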
5

Dutta, Aparajita, Kusum Kumari Singh, and Ashish Anand. "SpliceViNCI: Visualizing the splicing of non-canonical introns through recurrent neural networks." Journal of Bioinformatics and Computational Biology 19, no. 04 (June 4, 2021): 2150014. http://dx.doi.org/10.1142/s0219720021500141.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Most current computational models for splice junction prediction are based on the identification of canonical splice junctions. However, it is observed that junctions lacking the consensus dimers GT and AG also undergo splicing. Identification of such splice junctions, called non-canonical splice junctions, is also essential for a comprehensive understanding of the splicing phenomenon. This work focuses on the identification of non-canonical splice junctions through the application of a bidirectional long short-term memory (BLSTM) network. Furthermore, we apply a back-propagation-based (integrated gradients) and a perturbation-based (occlusion) visualization technique to extract the non-canonical splicing features learned by the model. The features obtained are validated against existing knowledge from the literature. Integrated gradients extract features that comprise contiguous nucleotides, whereas occlusion extracts features that are individual nucleotides distributed across the sequence.
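The perturbation-based (occlusion) technique can be illustrated with the following sketch, under the assumption of a trained Keras-style model whose predict method maps a one-hot nucleotide sequence of shape (1, length, 4) to a junction probability:

```python
import numpy as np

def occlusion_importance(model, onehot_seq):
    """Score each position by the probability drop when it is masked."""
    base = float(model.predict(onehot_seq, verbose=0)[0, 0])
    scores = np.zeros(onehot_seq.shape[1])
    for i in range(onehot_seq.shape[1]):
        occluded = onehot_seq.copy()
        occluded[0, i, :] = 0.25  # replace the nucleotide with a uniform distribution
        scores[i] = base - float(model.predict(occluded, verbose=0)[0, 0])
    return scores  # large positive values mark nucleotides the model relies on
```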
6

Zhang, Ansi, Honglei Wang, Shaobo Li, Yuxin Cui, Zhonghao Liu, Guanci Yang, and Jianjun Hu. "Transfer Learning with Deep Recurrent Neural Networks for Remaining Useful Life Estimation." Applied Sciences 8, no. 12 (November 28, 2018): 2416. http://dx.doi.org/10.3390/app8122416.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Prognostics, such as remaining useful life (RUL) prediction, is a crucial task in condition-based maintenance. A major challenge in data-driven prognostics is the difficulty of obtaining a sufficient number of samples of failure progression. However, for traditional machine learning methods and deep neural networks, enough training data is a prerequisite for training good prediction models. In this work, we propose a transfer learning algorithm based on Bi-directional Long Short-Term Memory (BLSTM) recurrent neural networks for RUL estimation, in which the models can first be trained on different but related datasets and then fine-tuned on the target dataset. Extensive experimental results show that transfer learning can in general improve the prediction models on datasets with a small number of samples. One exception is that when transferring from multi-type operating conditions to single operating conditions, transfer learning led to a worse result.
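A minimal sketch of this transfer scheme (layer sizes and the freezing policy are assumptions, not the paper's exact setup): pre-train a BLSTM RUL regressor on the related source dataset, then freeze the recurrent feature extractor and fine-tune the head on the small target dataset.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_rul_model(window=30, n_sensors=24):  # illustrative dimensions
    return models.Sequential([
        layers.Input(shape=(window, n_sensors)),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.Bidirectional(layers.LSTM(32)),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),  # remaining useful life
    ])

model = build_rul_model()
model.compile(optimizer="adam", loss="mse")
# model.fit(X_source, y_source, ...)   # pre-train on the related dataset (placeholders)

for layer in model.layers[:-2]:        # freeze the BLSTM feature extractor
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# model.fit(X_target, y_target, ...)   # fine-tune on the small target set (placeholders)
```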
7

Li, Yue, Xutao Wang, and Pengjian Xu. "Chinese Text Classification Model Based on Deep Learning." Future Internet 10, no. 11 (November 20, 2018): 113. http://dx.doi.org/10.3390/fi10110113.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Text classification is of importance in natural language processing, as the massive amount of text information containing huge value needs to be classified into different categories for further use. In order to better classify text, our paper tries to build a deep learning model which achieves better classification results on Chinese text than those of other researchers' models. After comparing different methods, long short-term memory (LSTM) and convolutional neural network (CNN) methods were selected as the deep learning methods to classify Chinese text. LSTM is a special kind of recurrent neural network (RNN), which is capable of processing serialized information through its recurrent structure. By contrast, CNN has shown its ability to extract features from visual imagery. Therefore, two layers of LSTM and one layer of CNN were integrated into our new model: the BLSTM-C model (BLSTM stands for bi-directional long short-term memory, while C stands for CNN). The BLSTM was responsible for obtaining a sequence output based on past and future contexts, which was then fed to the convolutional layer for feature extraction. In our experiments, the proposed BLSTM-C model was evaluated in several ways. In the results, the model exhibited remarkable performance in text classification, especially on Chinese texts.
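A minimal sketch of the BLSTM-C layout (hyperparameters are illustrative assumptions): bidirectional LSTM layers produce a context-aware sequence, a 1D convolution extracts local features from it, and pooling feeds the classifier.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, max_len, n_classes = 50_000, 200, 10  # placeholders

model = models.Sequential([
    layers.Input(shape=(max_len,), dtype="int32"),
    layers.Embedding(vocab_size, 128),
    # two recurrent layers give each position past and future context
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    # the convolutional layer extracts local features from the BLSTM output
    layers.Conv1D(128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```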
8

Xuan, Wenjing, Ning Liu, Neng Huang, Yaohang Li, and Jianxin Wang. "CLPred: a sequence-based protein crystallization predictor using BLSTM neural network." Bioinformatics 36, Supplement_2 (December 2020): i709–i717. http://dx.doi.org/10.1093/bioinformatics/btaa791.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Motivation: Determining the structures of proteins is a critical step in understanding their biological functions. The crystallography-based X-ray diffraction technique is the main method for experimental protein structure determination. However, the underlying crystallization process, which requires multiple time-consuming and costly experimental steps, has a high attrition rate. To overcome this issue, a series of in silico methods have been developed with the primary aim of selecting the protein sequences that are promising candidates for crystallization. However, the predictive performance of the current methods is modest. Results: We propose a deep learning model, called CLPred, which uses a bidirectional recurrent neural network with long short-term memory (BLSTM) to capture the long-range interaction patterns between k-mers of amino acids to predict protein crystallizability. Using sequence information only, CLPred outperforms the existing deep-learning predictors and the vast majority of sequence-based diffraction-quality crystal predictors on three independent test sets. The results highlight the effectiveness of BLSTM in capturing non-local, long-range inter-peptide interaction patterns to distinguish proteins that can result in diffraction-quality crystals from those that cannot. CLPred steadily improves over the previous window-based neural networks and is able to predict crystallization propensity with high accuracy. CLPred can also be improved significantly if it incorporates additional features from pre-extracted evolutionary, structural and physicochemical characteristics. The correctness of CLPred predictions is further validated by the case studies of Sox transcription factor family member proteins and Zika virus non-structural proteins. Availability and implementation: https://github.com/xuanwenjing/CLPred.
9

Brocki, Łukasz, and Krzysztof Marasek. "Deep Belief Neural Networks and Bidirectional Long-Short Term Memory Hybrid for Speech Recognition." Archives of Acoustics 40, no. 2 (June 1, 2015): 191–95. http://dx.doi.org/10.1515/aoa-2015-0021.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper describes a Deep Belief Neural Network (DBNN) and Bidirectional Long-Short Term Memory (LSTM) hybrid used as an acoustic model for speech recognition. It has been demonstrated by many independent researchers that DBNNs exhibit superior performance to other known machine learning frameworks in terms of speech recognition accuracy; their superiority comes from the fact that these are deep learning networks. However, a trained DBNN is simply a feed-forward network with no internal memory, unlike Recurrent Neural Networks (RNNs), which are Turing complete and do possess internal memory, thus allowing them to make use of longer context. In this paper, an experiment is performed to make a hybrid of a DBNN with an advanced bidirectional RNN used to process its output. Results show that using the new DBNN-BLSTM hybrid as the acoustic model for Large Vocabulary Continuous Speech Recognition (LVCSR) increases word recognition accuracy. However, the new model has many parameters, and in some cases it may suffer performance issues in real-time applications.
10

Varshney, Abhishek, Samit Kumar Ghosh, Sibasankar Padhy, Rajesh Kumar Tripathy, and U. Rajendra Acharya. "Automated Classification of Mental Arithmetic Tasks Using Recurrent Neural Network and Entropy Features Obtained from Multi-Channel EEG Signals." Electronics 10, no. 9 (May 2, 2021): 1079. http://dx.doi.org/10.3390/electronics10091079.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The automated classification of cognitive workload tasks based on the analysis of multi-channel EEG signals is vital for human–computer interface (HCI) applications. In this paper, we propose a computerized approach for categorizing mental-arithmetic-based cognitive workload tasks using multi-channel electroencephalogram (EEG) signals. The approach evaluates various entropy features, such as approximate entropy, sample entropy, permutation entropy, dispersion entropy, and slope entropy, from each channel of the EEG signal. These features were fed to various recurrent neural network (RNN) models, such as long short-term memory (LSTM), bidirectional LSTM (BLSTM), and gated recurrent unit (GRU), for the automated classification of mental-arithmetic-based cognitive workload tasks. Two cognitive workload classification strategies (bad mental arithmetic calculation (BMAC) vs. good mental arithmetic calculation (GMAC); and before mental arithmetic calculation (BFMAC) vs. during mental arithmetic calculation (DMAC)) are considered in this work. The approach was evaluated using the publicly available mental arithmetic task-based EEG database. The results reveal that our proposed approach obtained classification accuracy values of 99.81%, 99.43%, and 99.81%, using the LSTM, BLSTM, and GRU-based RNN classifiers, respectively, for the BMAC vs. GMAC cognitive workload classification strategy using all entropy features and a 10-fold cross-validation (CV) technique. The slope entropy features combined with each RNN-based model obtained higher classification accuracy compared with the other entropy features for the classification of the BMAC vs. GMAC task. We obtained average classification accuracy values of 99.39%, 99.44%, and 99.63% for the classification of the BFMAC vs. DMAC tasks, using the LSTM, BLSTM, and GRU classifiers with all entropy features and a hold-out CV scheme. Our developed automated mental arithmetic task system is ready to be tested with more databases for real-world applications.
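As an illustration of one of the listed features, the following is a standard formulation of permutation entropy for a single EEG channel (a sketch; the order and delay parameters are typical choices, not necessarily the paper's):

```python
import numpy as np
from math import factorial
from itertools import permutations

def permutation_entropy(x, order=3, delay=1):
    """Shannon entropy of ordinal patterns, normalized to [0, 1]."""
    patterns = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - (order - 1) * delay):
        window = x[i : i + order * delay : delay]
        patterns[tuple(np.argsort(window))] += 1  # rank order of the window
    counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(factorial(order)))

eeg_channel = np.random.randn(5000)  # stand-in for one EEG channel
print(permutation_entropy(eeg_channel, order=3, delay=1))
```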

Dissertations on the topic "Recurrent neural networks BLSTM":

1

Etienne, Caroline. "Apprentissage profond appliqué à la reconnaissance des émotions dans la voix." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS517.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis deals with the application of artificial intelligence to the automatic classification of audio sequences according to the emotional state of the customer during a commercial phone call. The goal is to improve on existing data preprocessing and machine learning models, and to suggest a model that is as efficient as possible on the reference IEMOCAP audio dataset. We draw from previous work on deep neural networks for automatic speech recognition, and extend it to the speech emotion recognition task. We are therefore interested in end-to-end neural architectures that perform the classification task, including an autonomous extraction of acoustic features from the audio signal. Traditionally, the audio signal is preprocessed using paralinguistic features, as part of an expert approach. We choose a naive approach for data preprocessing that does not rely on specialized paralinguistic knowledge, and compare it with the expert approach. In this approach, the raw audio signal is transformed into a time-frequency spectrogram by using a short-term Fourier transform. In order to apply a neural network to a prediction task, a number of aspects need to be considered. On the one hand, the best possible hyperparameters must be identified. On the other hand, biases present in the database should be minimized (non-discrimination), for example by adding data and taking into account the characteristics of the chosen dataset. We study these aspects in order to develop an end-to-end neural architecture that combines convolutional layers, specialized in the modeling of visual information, with recurrent layers, specialized in the modeling of temporal information. We propose a deep supervised learning model, competitive with the current state of the art when trained on the IEMOCAP dataset, justifying its use for the rest of the experiments. This classification model consists of a four-layer convolutional neural network and a bidirectional long short-term memory recurrent neural network (BLSTM). Our model is evaluated on two English audio databases proposed by the scientific community: IEMOCAP and MSP-IMPROV. A first contribution is to show that, with a deep neural network, we obtain high performance on IEMOCAP, and that the results are promising on MSP-IMPROV. Another contribution of this thesis is a comparative study of the output values of the layers of the convolutional module and the recurrent module according to the data preprocessing method used: spectrograms (naive approach) or paralinguistic indices (expert approach). We analyze the data according to their emotion class using the Euclidean distance, a deterministic proximity measure. We try to understand the characteristics of the emotional information extracted autonomously by the network. The idea is to contribute to research focused on the understanding of deep neural networks used in speech emotion recognition and to bring more transparency and explainability to these systems, whose decision-making mechanism is still largely misunderstood.
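The naive preprocessing described in the abstract, turning raw audio into a time-frequency spectrogram with a short-term Fourier transform, can be sketched as follows (the file path and STFT parameters are illustrative assumptions):

```python
import numpy as np
import librosa

y, sr = librosa.load("iemocap_utterance.wav", sr=16000)  # hypothetical file
stft = librosa.stft(y, n_fft=512, hop_length=128)        # complex STFT
spectrogram = np.abs(stft)                               # magnitude
log_spec = librosa.amplitude_to_db(spectrogram, ref=np.max)
# log_spec has shape (1 + n_fft/2, frames) and can be fed to a CNN-BLSTM stack
print(log_spec.shape)
```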
2

Morillot, Olivier. "Reconnaissance de textes manuscrits par modèles de Markov cachés et réseaux de neurones récurrents : application à l'écriture latine et arabe." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0002.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Handwriting recognition is an essential component of document analysis. One of the popular trends is to go from isolated word recognition to word sequence recognition. Our work aims to propose a text-line recognition system without explicit word segmentation. In order to build an efficient model, we intervene at different levels of the recognition system. First of all, we introduce two new preprocessing techniques: a cleaning and a local baseline correction for text lines. Then, a language model is built and optimized for handwritten mails. Afterwards, we propose two state-of-the-art recognition systems based on contextual HMMs (Hidden Markov Models) and BLSTM (Bi-directional Long Short-Term Memory) recurrent neural networks. We optimize our systems in order to give a comparison of these two approaches. Our systems are evaluated on Arabic and Latin cursive handwriting and have been submitted to two international handwriting recognition competitions. At last, we introduce a strategy for the recognition of some out-of-vocabulary character strings, as a prospect for future work.
3

Żbikowski, Rafal Waclaw. "Recurrent neural networks: some control aspects." Connect to electronic version, 1994. http://hdl.handle.net/1905/180.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering." Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrodinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorporation of the SWE into the field of neural networks provides a framework which is so called the quantum recurrent neural network (QRNN). A filter based on this approach is categorized as an intelligent filter, as the underlying formulation is based on the analogy to a real neuron. In a QRNN filter, the interaction between the observed signal and the wave dynamics is governed by the SWE. A key issue, therefore, is achieving a solution of the SWE that ensures the stability of the numerical scheme. Another important aspect in designing this filter is the way the wave function transforms the observed signal through the network. This research has shown that there are two different ways (a normal wave and a calm wave, Chapter 5) this transformation can be achieved, and these wave packets play a critical role in the evolution of the pdf. In this context, this thesis has investigated the following issues: the existing filtering approach in the evolution of the pdf, the architecture of the QRNN, the method of solving the SWE, the numerical stability of the solution, and the propagation of the waves in the well. The methods developed in this thesis have been tested with relevant simulations. The filter has also been tested with some benchmark chaotic series along with applications to real-world situations. Suggestions are made for the scope of further developments.
5

Zbikowski, Rafal Waclaw. "Recurrent neural networks: some control aspects." Thesis, University of Glasgow, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390233.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Jacobsson, Henrik. "Rule extraction from recurrent neural networks." Thesis, University of Sheffield, 2006. http://etheses.whiterose.ac.uk/6081/.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Bonato, Tommaso. "Time Series Predictions With Recurrent Neural Networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The main goal of this thesis is to study how machine learning algorithms, and in particular LSTM (Long Short Term Memory) neural networks, can be used to predict the future values of a regular time series such as, for example, the sine and cosine functions. A time series is defined as a sequence of observations s_t ordered in time. We also try to apply the same principles to predict the values of a time series built from the sales data of a cosmetic product over a period of three years. Before getting to the practical part of this thesis, it is necessary to introduce some fundamental concepts that are needed to develop the architecture and the code of our model. Both in the theoretical introduction and in the practical part, the attention is focused on the use of RNNs (Recurrent Neural Networks), since they are the neural networks best suited to this kind of problem. A particular type of RNN, called Long Short Term Memory (LSTM), is the main subject of study of this thesis, and one of its variants, called Gated Recurrent Unit (GRU), is also presented and used. In conclusion, this thesis confirms that LSTM and GRU are the best type of neural network for time series prediction. In the last part, we analyze the differences between using a CPU and a GPU during the training phase of the neural network.
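A minimal sketch of the kind of experiment the thesis describes, training an LSTM to forecast the next value of a sine wave from a sliding window of past values (window size and layer sizes are illustrative):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

t = np.arange(0, 100, 0.1)
series = np.sin(t).astype("float32")

window = 20  # each sample is a window of 20 past values
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]  # target: the next value after each window

model = models.Sequential([
    layers.Input(shape=(window, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[-1:], verbose=0))  # next-step forecast
```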
8

Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Deep Learning algorithms have been remarkably successful in applications such as Automatic Speech Recognition and Machine Translation. Thus, these kinds of applications are ubiquitous in our lives and are found in a plethora of devices. These algorithms are composed of Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which have a large number of parameters and require a large amount of computations. Hence, the evaluation of DNNs is challenging due to their large memory and power requirements. RNNs are employed to solve sequence to sequence problems such as Machine Translation. They contain data dependencies among the executions of time-steps, hence the amount of parallelism is severely limited. Thus, evaluating them in an energy-efficient manner is more challenging than evaluating other DNN algorithms. This thesis studies applications using RNNs to improve their energy efficiency on specialized architectures. Specifically, we propose novel energy-saving techniques and highly efficient architectures tailored to the evaluation of RNNs. We focus on the most successful RNN topologies, which are the Long Short Term Memory and the Gated Recurrent Unit. First, we characterize a set of RNNs running on a modern SoC. We identify that accessing the memory to fetch the model weights is the main source of energy consumption. Thus, we propose E-PUR: an energy-efficient processing unit for RNN inference. E-PUR achieves 6.8x speedup and improves energy consumption by 88x compared to the SoC. These benefits are obtained by improving the temporal locality of the model weights. In E-PUR, fetching the parameters is the main source of energy consumption. Thus, we strive to reduce memory accesses and propose a scheme to reuse previous computations. Our observation is that when evaluating the input sequences of an RNN model, the output of a given neuron tends to change lightly between consecutive evaluations. Thus, we develop a scheme that caches the neurons' outputs and reuses them whenever it detects that the change between the current and the previously computed output value for a given neuron is small, avoiding fetching the weights. In order to decide when to reuse a previous value, we employ a Binary Neural Network (BNN) as a predictor of reusability. The low-cost BNN can be employed in this context since its output is highly correlated to the output of RNNs. We show that our proposal avoids more than 24.2% of computations. Hence, on average, energy consumption is reduced by 18.5% for a speedup of 1.35x. RNN models' memory footprint is usually reduced by using low precision for evaluation and storage. In this case, the minimum precision used is identified offline and it is set such that the model maintains its accuracy. This method utilizes the same precision to compute all time-steps. Yet, we observe that some time-steps can be evaluated with a lower precision while preserving the accuracy. Thus, we propose a technique that dynamically selects the precision used to compute each time-step. A challenge of our proposal is choosing a lower bit-width. We address this issue by recognizing that information from a previous evaluation can be employed to determine the precision required in the current time-step. Our scheme evaluates 57% of the computations on a bit-width lower than the fixed precision employed by static methods. We implement it on E-PUR and it provides 1.46x speedup and 19.2% energy savings on average.
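In software, the computation-reuse idea can be sketched as follows; the predictor here is a plain boolean mask standing in for the thesis's low-cost binary neural network, and the activation and sizes are illustrative assumptions:

```python
import numpy as np

def reuse_layer(x_t, y_prev, W, predict_changed):
    """Recompute only the neurons the predictor flags as changed."""
    y = y_prev.copy()                        # reuse the cached outputs by default
    for j in np.nonzero(predict_changed)[0]: # indices flagged for recomputation
        y[j] = np.tanh(W[j] @ x_t)           # full (expensive) evaluation
    return y

# toy usage: a random mask flags ~75% of neurons for recomputation, so ~25%
# of evaluations are skipped, roughly mirroring the ~24% the thesis reports
rng = np.random.default_rng(0)
W = rng.standard_normal((128, 64))
x_t = rng.standard_normal(64)
y_prev = np.tanh(W @ rng.standard_normal(64))
flags = rng.random(128) < 0.75
y_t = reuse_layer(x_t, y_prev, W, flags)
```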
9

Brax, Christoffer. "Recurrent neural networks for time-series prediction." Thesis, University of Skövde, Department of Computer Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-480.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Recurrent neural networks have been used for time-series prediction with good results. In this dissertation recurrent neural networks are compared with time-delayed feed forward networks, feed forward networks and linear regression models on a prediction task. The data used in all experiments is real-world sales data containing two kinds of segments: campaign segments and non-campaign segments. The task is to make predictions of sales under campaigns. It is evaluated if more accurate predictions can be made when only using the campaign segments of the data.

Throughout the entire project a knowledge discovery process identified in the literature has been used to give a structured work process. The results show that the recurrent network is not better than the other evaluated algorithms; in fact, the time-delayed feed-forward neural network gave the best predictions. The results also show that more accurate predictions could be made when only using information from campaign segments.

10

Rabi, Gihad. "Visual speech recognition by recurrent neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0010/MQ36169.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Recurrent neural networks BLSTM":

1

Hu, Xiaolin, and P. Balasubramaniam. Recurrent neural networks. Rijeka, Croatia: InTech, 2008.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Salem, Fathi M. Recurrent Neural Networks. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89929-5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Hammer, Barbara. Learning with recurrent neural networks. London: Springer London, 2000. http://dx.doi.org/10.1007/bfb0110016.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

ElHevnawi, Mahmoud, and Mohamed Mysara. Recurrent neural networks and soft computing. Rijeka: InTech, 2012.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Yi, Zhang. Convergence analysis of recurrent neural networks. Boston: Kluwer Academic Publishers, 2004.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Yi, Zhang, and K. K. Tan. Convergence Analysis of Recurrent Neural Networks. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4757-3819-3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24797-2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Michel, Anthony N. Qualitative analysis and synthesis of recurrent neural networks. New York: Marcel Dekker, Inc., 2002.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Wen. Recurrent neural networks applied to robotic motion control. Ottawa: National Library of Canada, 2002.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Recurrent neural networks BLSTM":

1

Du, Ke-Lin, and M. N. S. Swamy. "Recurrent Neural Networks." In Neural Networks and Statistical Learning, 351–71. London: Springer London, 2019. http://dx.doi.org/10.1007/978-1-4471-7452-3_12.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Yalçın, Orhan Gazi. "Recurrent Neural Networks." In Applied Neural Networks with TensorFlow 2, 161–85. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6513-0_8.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Calin, Ovidiu. "Recurrent Neural Networks." In Deep Learning Architectures, 543–59. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_17.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Caterini, Anthony L., and Dong Eui Chang. "Recurrent Neural Networks." In Deep Neural Networks in a Mathematical Framework, 59–79. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1_5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Kamath, Uday, John Liu, and James Whitaker. "Recurrent Neural Networks." In Deep Learning for NLP and Speech Recognition, 315–68. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14596-5_7.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Marhon, Sajid A., Christopher J. F. Cameron, and Stefan C. Kremer. "Recurrent Neural Networks." In Intelligent Systems Reference Library, 29–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36657-4_2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Aggarwal, Charu C. "Recurrent Neural Networks." In Neural Networks and Deep Learning, 271–313. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0_7.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Skansi, Sandro. "Recurrent Neural Networks." In Undergraduate Topics in Computer Science, 135–52. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73004-2_7.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Ketkar, Nikhil. "Recurrent Neural Networks." In Deep Learning with Python, 79–96. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2766-4_6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Salvaris, Mathew, Danielle Dean, and Wee Hyong Tok. "Recurrent Neural Networks." In Deep Learning with Azure, 161–86. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3679-6_7.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Recurrent neural networks BLSTM":

1

Brueckner, Raymond, and Bjorn Schuller. "Social signal classification using deep blstm recurrent neural networks." In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6854518.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Zheng, Changyan, Xiongwei Zhang, Meng Sun, Jibin Yang, and Yibo Xing. "A Novel Throat Microphone Speech Enhancement Framework Based on Deep BLSTM Recurrent Neural Networks." In 2018 IEEE 4th International Conference on Computer and Communications (ICCC). IEEE, 2018. http://dx.doi.org/10.1109/compcomm.2018.8780872.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Bin, Jianhua Tao, Dawei Zhang, and Yibin Zheng. "A novel pitch extraction based on jointly trained deep BLSTM Recurrent Neural Networks with bottleneck features." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952173.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Kai, Zhi-Jie Yan, and Qiang Huo. "A context-sensitive-chunk BPTT approach to training deep LSTM/BLSTM recurrent neural networks for offline handwriting recognition." In 2015 13th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2015. http://dx.doi.org/10.1109/icdar.2015.7333794.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Bin, and Jianhua Tao. "A Novel Research to Artificial Bandwidth Extension Based on Deep BLSTM Recurrent Neural Networks and Exemplar-Based Sparse Representation." In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-772.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Ni, Zhaoheng, Rutuja Ubale, Yao Qian, Michael Mandel, Su-Youn Yoon, Abhinav Misra, and David Suendermann-Oeft. "Unusable Spoken Response Detection with BLSTM Neural Networks." In 2018 11th International Symposium on Chinese Spoken Language Processing (ISCSLP). IEEE, 2018. http://dx.doi.org/10.1109/iscslp.2018.8706635.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Ding, Chuang, Pengcheng Zhu, and Lei Xie. "BLSTM neural networks for speech driven head motion synthesis." In Interspeech 2015. ISCA, 2015. http://dx.doi.org/10.21437/interspeech.2015-137.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Diao, Enmao, Jie Ding, and Vahid Tarokh. "Restricted Recurrent Neural Networks." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006257.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Kuo, Che-Yu, and Jen-Tzung Chien. "Markov Recurrent Neural Networks." In 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2018. http://dx.doi.org/10.1109/mlsp.2018.8517074.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Frinken, Volkmar, and Seiichi Uchida. "Deep BLSTM neural networks for unconstrained continuous handwritten text recognition." In 2015 13th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2015. http://dx.doi.org/10.1109/icdar.2015.7333894.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Reports of organizations on the topic "Recurrent neural networks BLSTM":

1

Pearlmutter, Barak A. Learning State Space Trajectories in Recurrent Neural Networks: A Preliminary Report. Fort Belvoir, VA: Defense Technical Information Center, July 1988. http://dx.doi.org/10.21236/ada219114.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Mathia, Karl. Solutions of linear equations and a class of nonlinear equations using recurrent neural networks. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1354.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Lin, Linyu, Joomyung Lee, Bikash Poudel, Timothy McJunkin, Nam Dinh, and Vivek Agarwal. Enhancing the Operational Resilience of Advanced Reactors with Digital Twins by Recurrent Neural Networks. Office of Scientific and Technical Information (OSTI), October 2021. http://dx.doi.org/10.2172/1835892.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation for ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy, with half the required processing of solely the numerical classifier or neural network. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a-priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints.
Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine the external quality of tomatoes based on visual information. An improved model for color sorting which is stable and does not require recalibration for each season was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities, for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, and in a manner consistent with the human graders and inspectors.
