To see the other types of publications on this topic, follow the link: Backpropagation learning.

Dissertations / Theses on the topic 'Backpropagation learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 25 dissertations / theses for your research on the topic 'Backpropagation learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Sam, Iat Tong. "Theory of backpropagation type learning of artificial neural networks and its applications." Thesis, University of Macau, 2001. http://umaclib3.umac.mo/record=b1446702.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bendelac, Shiri. "Enhanced Neural Network Training Using Selective Backpropagation and Forward Propagation." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83714.

Full text
Abstract:
Neural networks are making headlines every day as the tool of the future, powering artificial intelligence programs and supporting technologies never seen before. However, training neural networks can take days or even weeks for bigger networks, and achieving state-of-the-art results in academia and industry requires the use of supercomputers and GPUs. This thesis discusses employing selective measures to determine when to backpropagate and forward propagate in order to reduce training time while maintaining classification performance. It tests these new algorithms on the MNIST and CASIA datasets and achieves successful results with both algorithms on the two datasets. The selective backpropagation algorithm shows a reduction of up to 93.3% in backpropagations completed, and the selective forward propagation algorithm shows a reduction of up to 72.90% in forward propagations and backpropagations completed, compared to baseline runs of always forward propagating and backpropagating. This work also discusses employing the selective backpropagation algorithm on a modified dataset with disproportional under-representation of some classes compared to others.
Master of Science
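The selective-backpropagation idea summarized above can be sketched in a few lines; the tiny one-neuron network, the toy loss, and the threshold value below are illustrative assumptions, not details taken from the thesis:

```python
import math

def train_selectively(samples, weights, lr=0.1, loss_threshold=0.01):
    """Forward-propagate every sample, but backpropagate only when the
    sample's loss exceeds a threshold -- the gist of selective BP."""
    backprops = 0
    for x, target in samples:
        pred = math.tanh(weights[0] * x + weights[1])      # forward pass
        loss = 0.5 * (pred - target) ** 2
        if loss <= loss_threshold:
            continue               # already well fit: skip the backward pass
        dpred = (pred - target) * (1.0 - pred ** 2)        # dloss/dpre-activation
        weights[0] -= lr * dpred * x                       # backward pass
        weights[1] -= lr * dpred
        backprops += 1
    return backprops

w = [0.5, 0.0]
# the first sample is already fit perfectly, so only the second one
# triggers a backward pass
n = train_selectively([(1.0, math.tanh(0.5)), (1.0, -1.0)], w)
```

The saving comes from skipping the backward pass, which typically costs about twice as much as the forward pass.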
APA, Harvard, Vancouver, ISO, and other styles
3

Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning." RMIT University. Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.

Full text
Abstract:
Feedforward neural networks have been widely studied and used in many applications in science and engineering. The training of this type of network is mainly undertaken using the well-known backpropagation-based learning algorithms. One major problem with this type of algorithm is the slow training convergence speed, which hinders their applications. In order to improve the training convergence speed, many researchers have developed different improvements and enhancements. However, the slow convergence problem has not been fully addressed. This thesis makes several contributions by proposing new backpropagation learning algorithms based on the terminal attractor concept to improve existing backpropagation learning algorithms such as the gradient descent and Levenberg-Marquardt algorithms. These new algorithms enable fast convergence both far from and close to the ideal weights. In particular, a new fast convergence mechanism is proposed, based on the fast terminal attractor concept. Comprehensive simulation studies are undertaken to demonstrate the effectiveness of the proposed backpropagation algorithms with terminal attractors. Finally, three practical application cases, time series forecasting, character recognition and image interpolation, are chosen to show the practicality and usefulness of the proposed learning algorithms, with comprehensive comparative studies against existing algorithms.
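A rough sketch of a terminal-attractor-style update, assuming the common form in which the step is scaled so that the continuous-time error obeys dE/dt = -η E^β with 0 < β < 1 and therefore reaches zero in finite time; the exact algorithms in the thesis differ:

```python
def terminal_attractor_step(w, grad_fn, loss_fn, eta=0.5, beta=0.75):
    """One descent step scaled in the terminal-attractor style: the step
    is eta * E**beta * grad / ||grad||**2, so that in continuous time
    dE/dt = -eta * E**beta (form and constants are illustrative)."""
    g = grad_fn(w)
    e = loss_fn(w)
    norm_sq = sum(gi * gi for gi in g)
    if norm_sq == 0.0 or e == 0.0:
        return w                       # already at a stationary point
    scale = eta * (e ** beta) / norm_sq
    return [wi - scale * gi for wi, gi in zip(w, g)]

# toy quadratic loss E(w) = 0.5*(w - 2)^2
loss = lambda w: 0.5 * (w[0] - 2.0) ** 2
grad = lambda w: [w[0] - 2.0]
w = [0.0]
for _ in range(50):
    w = terminal_attractor_step(w, grad, loss)
```

The key contrast with plain gradient descent is that the step does not vanish proportionally to the gradient near the optimum, which is what removes the asymptotically slow tail of convergence.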
APA, Harvard, Vancouver, ISO, and other styles
4

Fischer, Manfred M., and Sucharita Gopal. "Learning in Single Hidden Layer Feedforward Network Models: Backpropagation in a Real World Application." WU Vienna University of Economics and Business, 1994. http://epub.wu.ac.at/4192/1/WSG_DP_3994.pdf.

Full text
Abstract:
Learning in neural networks has attracted considerable interest in recent years. Our focus is on learning in single hidden layer feedforward networks, which is posed as a search in the network parameter space for a network that minimizes an additive error function of statistically independent examples. In this contribution, we first review the class of single hidden layer feedforward networks and characterize the learning process in such networks from a statistical point of view. Then we describe the backpropagation procedure, the leading case of gradient descent learning algorithms for the class of networks considered here, as well as an efficient heuristic modification. Finally, we analyse the applicability of these learning methods to the problem of predicting interregional telecommunication flows. Particular emphasis is laid on the engineering judgment involved, first, in choosing appropriate values for the tunable parameters, second, in the decision whether to train the network by epoch or by pattern (stochastic approximation), and, third, in the overfitting problem. In addition, the analysis shows that the neural network model, whether using epoch-based or pattern-based stochastic approximation, outperforms the classical regression approach to modelling telecommunication flows. (authors' abstract)
Series: Discussion Papers of the Institute for Economic Geography and GIScience
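The epoch-versus-pattern distinction discussed in the abstract can be illustrated on a one-parameter linear model; the toy data and learning rate below are assumptions for illustration:

```python
def epoch_update(w, samples, lr):
    """Epoch-based training: average the gradient of the squared error
    over all examples, then apply a single weight update."""
    g = sum(2.0 * (w * x - y) * x for x, y in samples) / len(samples)
    return w - lr * g

def pattern_update(w, samples, lr):
    """Pattern-based (stochastic) training: update after every example."""
    for x, y in samples:
        w -= lr * 2.0 * (w * x - y) * x
    return w

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # y = 2x, so the ideal w is 2
w_epoch = w_pattern = 0.0
for _ in range(200):
    w_epoch = epoch_update(w_epoch, samples, lr=0.05)
    w_pattern = pattern_update(w_pattern, samples, lr=0.05)
```

Both schedules converge here; in practice they differ in noise behaviour and memory traffic, which is exactly the engineering judgment the abstract refers to.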
APA, Harvard, Vancouver, ISO, and other styles
5

Bonnell, Jeffrey A. "Implementation of a New Sigmoid Function in Backpropagation Neural Networks." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etd/1342.

Full text
Abstract:
This thesis presents the use of a new sigmoid activation function in backpropagation artificial neural networks (ANNs). ANNs using conventional activation functions may generalize poorly when trained on a set which includes quirky, mislabeled, unbalanced, or otherwise complicated data. This new activation function is an attempt to improve generalization and reduce overtraining on mislabeled or irrelevant data by restricting training when inputs to the hidden neurons are sufficiently small. This activation function includes a flattened, low-training region which grows or shrinks during back-propagation to ensure a desired proportion of inputs inside the low-training region. With a desired low-training proportion of 0, this activation function reduces to a standard sigmoidal curve. A network with the new activation function implemented in the hidden layer is trained on benchmark data sets and compared with the standard activation function in an attempt to improve area under the curve for the receiver operating characteristic in biological and other classification tasks.
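A hypothetical activation with a flattened low-training region might look like the following; the exact shape, and how the region grows or shrinks during training, is specific to the thesis and is not reproduced here:

```python
import math

def flat_sigmoid(x, half_width=1.0):
    """Sigmoid-like activation with a flattened region of half-width
    `half_width` around zero, where the output (and hence the gradient)
    is constant; with half_width = 0 it reduces to the standard
    logistic sigmoid, as the abstract describes."""
    if abs(x) <= half_width:
        return 0.5                               # flat low-training region
    shifted = x - math.copysign(half_width, x)   # re-attach the sigmoid tails
    return 1.0 / (1.0 + math.exp(-shifted))

# inside the flat region the output does not move, so hidden units
# receiving small inputs are shielded from training
low = flat_sigmoid(0.3)
tail = flat_sigmoid(3.0)   # beyond the region: an ordinary shifted sigmoid
```

Because the derivative is zero inside the flat region, backpropagated error signals through those units vanish, which is the mechanism for restricting training.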
APA, Harvard, Vancouver, ISO, and other styles
6

Aranibar, Luis Alfonso Quiroga. "Learning fuzzy logic from examples." Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1176495652.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

U, San Cho. "Trading simulations on stock market by backpropagation learning of artificial neural networks and traditional linear regression." Thesis, University of Macau, 2005. http://umaclib3.umac.mo/record=b1447318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

He, Fan. "Real-time Process Modelling Based on Big Data Stream Learning." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35823.

Full text
Abstract:
Most control systems are assumed to be unchanging, but this is an idealisation. In real applications they are subject to many changes, some caused by the environment and some by shifting system requirements. The goal of this thesis is therefore to model a dynamic, adaptive real-time control system process with a big data stream. In this way, the control system model can adjust itself using example measurements acquired during operation and give a suggestion for the next arriving input, which also means that the accuracy of the states under control depends strongly on the quality of the process model. In this thesis, we choose a recurrent neural network to model the process because it is a cheap and fast form of artificial intelligence. Most existing artificial intelligence approaches need a database, and the bigger the database, the more accurate the result can be. For example, in case-based reasoning a test case must be compared with all the cases in the database, and the closest one's result is taken as the reference. A neural network, however, needs no large database to support and search; it only needs simple calculations, because the information is stored in the connections. Each small unit, called a neuron, is a linear combination, yet a neural network made up of such neurons can perform complex, non-linear functions. For the training part, backpropagation and the Kalman filter are used together. Backpropagation is a widely used and stable optimization algorithm. The Kalman filter is new to gradient-based optimization, but it has been shown to converge faster than traditional first-order gradient-based algorithms. Several experiments were prepared to compare the new and existing algorithms under various circumstances. The first set of experiments uses static systems and is only meant to investigate the convergence rate and accuracy of the different algorithms. The second set uses time-varying systems, with the purpose of taking one more attribute, adaptivity, into consideration.
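The Kalman-filter side of the proposed training scheme can be illustrated with the scalar textbook filter; this is a generic sketch, not the thesis implementation, and the noise parameters are assumed:

```python
def kalman_scalar(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Textbook scalar Kalman filter estimating a nearly constant value
    from noisy measurements; q is the process noise, r the measurement
    noise (all values here are illustrative)."""
    x, p = x0, p0
    for z in measurements:
        p += q                  # predict: uncertainty grows slightly
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # correct the estimate toward the measurement
        p *= 1.0 - k            # posterior uncertainty shrinks
    return x

# noisy readings scattered around 1.0
estimate = kalman_scalar([1.2, 0.8, 1.1, 0.9, 1.05, 0.95])
```

Applied to network training, the weights play the role of the state being estimated, which is why Kalman-based schemes can converge in fewer iterations than first-order gradient steps.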
APA, Harvard, Vancouver, ISO, and other styles
9

Fischer, Manfred M. "Learning in neural spatial interaction models: A statistical perspective." Springer, 2002. http://epub.wu.ac.at/5503/1/neural.pdf.

Full text
Abstract:
In this paper we view learning as an unconstrained non-linear minimization problem in which the objective function is defined by the negative log-likelihood function and the search space by the parameter space of an origin-constrained product unit neural spatial interaction model. We consider Alopex-based global search, as opposed to local search based upon backpropagation of gradient descents, each in combination with the bootstrapping pairs approach, to solve the maximum likelihood learning problem. Interregional telecommunication traffic flow data from Austria are used as a test bed for comparing the performance of the two learning procedures. The study illustrates the superiority of Alopex-based global search, measured in terms of Kullback and Leibler's information criterion.
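Alopex is a gradient-free, correlation-based search; a minimal sketch of the general scheme is below (temperature handling and annealing details vary across variants, and the paper's exact version is not reproduced):

```python
import math
import random

def alopex_minimize(loss_fn, w, steps=2000, delta=0.05, temperature=0.1, seed=0):
    """Correlation-based, gradient-free search: every weight moves by
    +/-delta each step, and the probability of keeping a weight's previous
    direction falls when that direction correlated with an increase in the
    loss (a minimal generic sketch of the Alopex idea)."""
    rng = random.Random(seed)
    prev_loss = loss_fn(w)
    moves = [rng.choice((-delta, delta)) for _ in w]
    for _ in range(steps):
        w = [wi + mi for wi, mi in zip(w, moves)]
        loss = loss_fn(w)
        d_loss = loss - prev_loss
        prev_loss = loss
        new_moves = []
        for mi in moves:
            # d_loss * mi > 0 means this move helped increase the loss,
            # so flipping its direction becomes the likely choice
            arg = max(-50.0, min(50.0, d_loss * mi / temperature))
            p_keep = 1.0 / (1.0 + math.exp(arg))
            new_moves.append(mi if rng.random() < p_keep else -mi)
        moves = new_moves
    return w

# gradient-free descent into a 2-D quadratic bowl with minimum at (1, -2)
w = alopex_minimize(lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2, [4.0, 4.0])
```

Because only the scalar change in loss is broadcast to all weights, the scheme needs no gradients at all, which is what lets it escape local minima that trap backpropagation.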
APA, Harvard, Vancouver, ISO, and other styles
10

Thiele, Johannes C. "Deep learning in event-based neuromorphic systems." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS403/document.

Full text
Abstract:
Inference and training in deep neural networks require large amounts of computation, which in many cases prevents the integration of deep networks in resource-constrained environments. Event-based spiking neural networks represent an alternative to standard artificial neural networks that holds the promise of more energy-efficient processing. However, training spiking neural networks to achieve high inference performance is still challenging, in particular when learning is also required to be compatible with neuromorphic constraints. This thesis studies training algorithms and information encoding in such deep networks of spiking neurons. Starting from a biologically inspired learning rule, we analyze which properties of learning rules are necessary in deep spiking neural networks to enable embedded learning in a continuous learning scenario. We show that a time-scale-invariant learning rule based on spike-timing-dependent plasticity is able to perform hierarchical feature extraction and classification of simple objects of the MNIST and N-MNIST datasets. To overcome certain limitations of this approach, we design a novel framework for spike-based learning, SpikeGrad, which represents a fully event-based implementation of the gradient backpropagation algorithm. We show how this algorithm can be used to train a spiking network that performs inference of relations between numbers and MNIST images. Additionally, we demonstrate that the framework is able to train large-scale convolutional spiking networks to competitive recognition rates on the MNIST and CIFAR10 datasets. In addition to being an effective and precise learning mechanism, SpikeGrad allows the description of the response of the spiking neural network in terms of a standard artificial neural network, which allows a faster simulation of spiking neural network training. Our work therefore introduces several powerful training concepts for on-chip learning in neuromorphic devices that could help to scale spiking neural networks to real-world problems.
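The event-based unit underlying spiking networks like those trained with SpikeGrad is the integrate-and-fire neuron; a minimal leaky variant, with illustrative threshold and leak values, can be sketched as:

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron: the membrane potential
    leaks, integrates each input, and emits a spike (with reset by
    subtraction) whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x
        if v >= threshold:
            spikes.append(1)
            v -= threshold
        else:
            spikes.append(0)
    return spikes

train = lif_spikes([0.6, 0.6, 0.6, 0.0, 0.9])   # -> [0, 1, 0, 0, 1]
```

Information is carried by the timing and count of these binary events rather than by continuous activations, which is why event-driven hardware can skip computation whenever no spike arrives.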
APA, Harvard, Vancouver, ISO, and other styles
11

Cheng, Martin Chun-Sheng. "Dynamical Near Optimal Training for Interval Type-2 Fuzzy Neural Network (T2FNN) with Genetic Algorithm." Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030722.172812.

Full text
Abstract:
A type-2 fuzzy logic system (FLS) cascaded with a neural network, called a type-2 fuzzy neural network (T2FNN), is presented in this paper to handle uncertainty with dynamical optimal learning. A T2FNN consists of a type-2 fuzzy linguistic process as the antecedent part and a two-layer interval neural network as the consequent part. A general T2FNN is computationally intensive due to the complexity of type-2 to type-1 reduction. Therefore the interval T2FNN is adopted in this paper to simplify the computational process. The dynamical optimal training algorithm for the two-layer consequent part of the interval T2FNN is first developed. The stable and optimal left and right learning rates for the interval neural network, in the sense of maximum error reduction, can be derived for each iteration in the training process (backpropagation). It can also be shown that the two learning rates cannot both be negative. Further, due to variation of the initial MF parameters, i.e. the spread level of uncertain means or deviations of interval Gaussian MFs, the performance of the backpropagation training process may be affected. To achieve better overall performance, a genetic algorithm (GA) is designed to search for a better-fit spread rate for uncertain means and near-optimal learning rates for the antecedent part. Several examples are fully illustrated. Excellent results are obtained for truck backing-up control and the identification of a nonlinear system, which yield better performance than type-1 FNNs.
APA, Harvard, Vancouver, ISO, and other styles
12

Dalecký, Štěpán. "Neuro-fuzzy systémy." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236066.

Full text
Abstract:
The thesis deals with artificial neural network theory. Subsequently, fuzzy sets are described and fuzzy logic is explained. A hybrid neuro-fuzzy system stemming from the ANFIS system is designed on the basis of artificial neural networks, fuzzy sets and fuzzy logic. The functionality of the above-mentioned systems is demonstrated on the problem of controlling an inverted pendulum. Three controllers have been designed for this purpose: the first on the basis of artificial neural networks, the second a fuzzy controller, and the third based on the ANFIS system. The thesis aims to compare the systems on which the controllers are based and to evaluate the contribution of the hybrid neuro-fuzzy ANFIS system in comparison with the individual underlying approaches. Finally, some experiments with the systems are demonstrated and the findings are assessed.
APA, Harvard, Vancouver, ISO, and other styles
13

Martínez, Brito Izacar Jesús. "Quantitative structure fate relationships for multimedia environmental analysis." Doctoral thesis, Universitat Rovira i Virgili, 2010. http://hdl.handle.net/10803/8590.

Full text
Abstract:
Key physicochemical properties for a wide spectrum of chemical pollutants are unknown. This thesis analyses the prospect of assessing the environmental distribution of chemicals directly from supervised learning algorithms using molecular descriptors, rather than from multimedia environmental models (MEMs) using several physicochemical properties estimated from QSARs. Dimensionless compartmental mass ratios of 468 validation chemicals were compared, in logarithmic units, between: a) SimpleBox 3, a Level III MEM, propagating random property values within statistical distributions of widely recommended QSARs; and b) Support Vector Regressions (SVRs), acting as Quantitative Structure-Fate Relationships (QSFRs), linking mass ratios to molecular weight and constituent counts (atoms, bonds, functional groups and rings) for training chemicals. Best predictions were obtained for test and validation chemicals optimally found to be within the domain of applicability of the QSFRs, evidenced by low MAE and high q2 values (in air, MAE ≤ 0.54 and q2 ≥ 0.92; in water, MAE ≤ 0.27 and q2 ≥ 0.92).
APA, Harvard, Vancouver, ISO, and other styles
14

Bělohlávek, Jiří. "Agent pro kurzové sázení." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235980.

Full text
Abstract:
This master thesis deals with the design and implementation of a betting agent. It covers issues such as the theoretical background of online betting, probability and statistics. Its first part focuses on data mining and explains the principle of knowledge mining from data warehouses and certain methods suitable for different types of tasks. Second, it is concerned with neural networks and the backpropagation algorithm. All the findings are demonstrated on and supported by graphs and histograms of data analysis, made with the SAS Enterprise Miner program. In conclusion, the thesis summarizes all the results and offers specific methods for extending the agent.
APA, Harvard, Vancouver, ISO, and other styles
15

Mnih, Andriy. "Learning nonlinear constraints with contrastive backpropagation." 2004. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=94951&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Jhu, Cing-Fu, and 朱清福. "Scheduling Optimization of Backpropagation for Deep Learning on GPU." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/p6fa2n.

Full text
Abstract:
Master's thesis. National Taiwan University, Graduate Institute of Computer Science and Information Engineering, 106.
Many large deep neural network models have been proposed in recent years to achieve more accurate training results. The training of these large models requires a huge amount of memory and communication, which becomes a challenging issue in improving the performance of deep learning. In this paper, we analyze the data access pattern of training a deep neural network and propose a data pinning algorithm that reduces the data usage on the GPU and the movement between a GPU and its CPU host. We show that finding an optimal data movement schedule is NP-complete, and propose a dynamic programming algorithm that can find the optimal solution in pseudo-polynomial time. That is, we observe the access pattern of training the deep neural network and propose a specialized GPU data pinning algorithm that minimizes unnecessary data movements. We then implement our dynamic programming algorithm to train real deep learning models. The experiments show that we can pin up to 20% more data into GPU memory than GeePS, a state-of-the-art deep learning framework. We also propose a memory reduction technique for backpropagation in deep learning. We analyzed the access pattern of backpropagation in deep learning and realized that gradient computation and weight update, two major steps traditionally done sequentially, can be partially overlapped. In addition, we analyzed the semantics of the computation and realized that by delaying the weight update we can avoid double buffering due to read/write conflicts in the traditional naive parallel implementation. We then implement our techniques and observe up to a 75% reduction in memory usage.
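The delayed weight update described in the abstract can be illustrated on a chain of scalar 'layers': each weight is written only after the earlier layer's gradient, which must read that weight, has been computed. This is an illustrative sketch of the idea, not the thesis implementation:

```python
def forward(weights, x):
    """Toy 'network': a chain of scalar layers, a_{i+1} = w_i * a_i."""
    acts = [x]
    for w in weights:
        acts.append(w * acts[-1])
    return acts

def backward_delayed(weights, acts, g_out, lr=0.01):
    """Backward pass that defers each weight update until the next
    (earlier) layer's gradient, which must READ that weight, has been
    computed -- so gradient computation and weight update overlap with
    no second weight buffer."""
    pending = None                    # deferred (index, weight gradient)
    g = g_out                         # gradient w.r.t. the top activation
    for i in reversed(range(len(weights))):
        g_w = g * acts[i]             # this layer's weight gradient
        g = g * weights[i]            # reads weights[i] BEFORE its update
        if pending is not None:
            j, gw = pending
            weights[j] -= lr * gw     # now safe: no remaining reader
        pending = (i, g_w)            # defer this layer's own update
    j, gw = pending
    weights[j] -= lr * gw             # flush the last deferred update
    return weights

weights = [2.0, 3.0]
acts = forward(weights, 1.0)          # acts = [1.0, 2.0, 6.0]
backward_delayed(weights, acts, 1.0)  # same result as updating at the end
```

Because every read of a weight happens before its deferred write, the result matches a backward pass that applies all updates afterwards, without the double buffer a naive overlapped implementation would need.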
APA, Harvard, Vancouver, ISO, and other styles
17

Dolenko, Brion K. "Performance and hardware compatibility of backpropagation and cascade correlation learning algorithms." 1993. http://hdl.handle.net/1993/17526.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Barnard, S. J. "Short term load forecasting by a modified backpropagation trained neural network." Thesis, 2012. http://hdl.handle.net/10210/5828.

Full text
Abstract:
M.Ing.
This dissertation describes the development of a feedforward neural network, trained by means of an accelerated backpropagation algorithm, used for short term load forecasting on real world data. It is argued that the new learning algorithm, I-Prop, is a faster training algorithm because the learning rate is optimally predicted and changed according to a more efficient formula (without the need for extensive memory), which speeds up the training process. The neural network developed was tested for the month of December 1994, specifically to test the artificial neural network's ability to correctly predict the load during a public holiday, as well as the changeover from public holiday to 'normal' working day. In conclusion, suggestions are made towards further research on the improvement of the I-Prop algorithm as well as improving the load forecasting technique implemented in this dissertation.
APA, Harvard, Vancouver, ISO, and other styles
19

Chia-Chiang, Lin, and 林家強. "Study on Variable-Width Momentum and Output Types in the Backpropagation Learning Algorithm in Neural Networks for Classification Problems." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/49688798408345438564.

Full text
Abstract:
Master's thesis. National Taiwan University of Science and Technology, Department of Electrical Engineering, 93.
Two often-considered issues for neural networks trained by the backpropagation (BP) algorithm are the convergence rate and the generalization ability. The slow convergence rate is a severe drawback of BP, and the problem of premature saturation is the main reason for the slow convergence. Adding a momentum term to BP is a common technique to speed up convergence, but it may also stimulate the occurrence of premature saturation. Premature saturation causes the error to be trapped at its current value and decreases the learning efficiency of neural networks. Much effort has been put into this problem, and detailed mechanisms and conditions have been fully analyzed. In this thesis, a new structure of momentum, variable-width momentum, is designed to prevent premature saturation while maintaining the advantage of momentum. The simulation results illustrate the superiority of the proposed variable-width momentum in convergence rates. Moreover, we discuss the generalization abilities obtained when using the bipolar sigmoid function or the unipolar sigmoid function as the activation function of BP on classification problems. The bipolar function carries the notions of 'for' and 'against', which can then be used for voting. In our simulation results, we found that such an approach can have better classification ability than using the unipolar sigmoid function.
APA, Harvard, Vancouver, ISO, and other styles
20

Carvalho, João Gabriel Marques. "Electricity consumption forecast model for the DEEC based on machine learning tools." Master's thesis, 2020. http://hdl.handle.net/10316/90148.

Full text
Abstract:
Integrated Master's dissertation in Electrical and Computer Engineering presented to the Faculty of Sciences and Technology.
In this thesis, the design of a machine learning neural network capable of making energy predictions is presented. With the increase in energy consumption, tools for the prediction of energy consumption are gaining great importance, and their implementation is required. This concern is the main goal of the presented work. We strive to explain the history of machine learning, what machine learning is and how it works. It is also sought to explain the mathematical background and use of neural networks and what tools have been developed nowadays to create machine learning solutions. Machine learning is a computer program that can perform trained tasks in a way similar to the human mind. The artificial neural network (ANN) is one of the most used and important machine learning solutions, through which pivotal data can be obtained. For predicting the energy consumption at the Department of Electrical and Computer Engineering (DEEC) of the University of Coimbra, a neural network was trained using real data from the overall consumption of the DEEC towers. Python was the language used, and a supervised learning regression algorithm was utilized. With this prediction, we finally compare our data with real data, so that we may analyze it. The data used in the training of the neural network go from 2015/July/10 to 2017/December/31, a total of 906 days. For each day of the year there is a maximum of 3 values, which is considered a small sample, but the only one available. The final comparison between real and predicted data was only done for the month of January 2018. From the data achieved, predictions were made, but with a certain level of discrepancy, which is explained by the low amount of data available. In the future, one of the things that should be considered is to enlarge the training datasets, considering a larger number of input variables. The main goal proposed for this thesis was successfully achieved. With all the presented research, we strove to create a text that could serve as a stepping stone in the creation of better solutions. This is an extraordinary field that in the future will be able to elevate our knowledge to a completely different level.
APA, Harvard, Vancouver, ISO, and other styles
21

Juozenaite, Ineta. "Application of machine learning techniques for solving real world business problems : the case study - target marketing of insurance policies." Master's thesis, 2018. http://hdl.handle.net/10362/32410.

Full text
Abstract:
Project Work presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence<br>The concept of machine learning has been around for decades, but now it is becoming more and more popular, not only in business but everywhere else as well. This is because of the increased amount of data, cheaper data storage, and more powerful, affordable computational processing. The complexity of the business environment leads companies to use data-driven decision making to work more efficiently. The most common machine learning methods, like Logistic Regression, Decision Tree, Artificial Neural Network and Support Vector Machine, are reviewed in this work along with their applications. The insurance industry has one of the most competitive business environments, and as a result, the use of machine learning techniques is growing in this industry. In this work, the above-mentioned machine learning methods are used to build a predictive model for a target marketing campaign of caravan insurance policies to achieve greater profitability. Information Gain and Chi-squared metrics, stepwise regression, the R package “Boruta”, Spearman correlation analysis, distribution graphs by target variable, as well as basic statistics of all variables are used for feature selection. To solve this real-world business problem, the final chosen predictive model is a Multilayer Perceptron trained with the backpropagation learning algorithm, with 1 hidden layer and 12 hidden neurons.
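The reported final architecture (one hidden layer of 12 neurons, trained by backpropagation) could be reproduced along these lines with scikit-learn. The data and feature count below are synthetic assumptions, since the actual caravan-insurance dataset is not included here:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                      # 8 hypothetical customer features
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)  # synthetic "buys policy" label

# One hidden layer of 12 neurons; the weights are fit via backpropagated gradients.
clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=3000, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```

In practice, the feature-selection step the abstract lists (Information Gain, Chi-squared, Boruta, Spearman) would decide which columns enter `X` before training.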
APA, Harvard, Vancouver, ISO, and other styles
22

"Sentiment Analysis for Long-Term Stock Prediction." Master's thesis, 2016. http://hdl.handle.net/2286/R.I.39401.

Full text
Abstract:
abstract: There has been extensive research into how news and Twitter feeds can affect the outcome of a given stock. However, a majority of this research has studied the short-term effects of sentiment on a given stock price. In this research, I studied the long-term effects on a given stock price using fundamental analysis techniques. I collected both sentiment data and fundamental data for Apple Inc., Microsoft Corp., and Peabody Energy Corp. Using a neural network algorithm, I found that sentiment does have an effect on the annual growth of these companies, but the fundamentals are more relevant when determining overall growth. The stocks which show more consistent growth place more importance on the previous year’s stock price, while companies with less consistency in their growth rely more on revenue growth and on sentiment about the overall company and CEO. I discuss how I collected my research data and used a multi-layered perceptron to predict a threshold growth of a given stock. The threshold used for this particular research was 10%. I then show the prediction of this threshold using my perceptron and afterwards perform an ANOVA F-test on my choice of features. The results showed the fundamentals being the better predictor of stock information, but sentiment came in a close second in several cases, proving sentiment does hold an effect over long-term growth.<br>Dissertation/Thesis<br>Masters Thesis Computer Science 2016
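The feature-ranking step can be illustrated with scikit-learn's ANOVA F-test. The feature names and data below are synthetic stand-ins for the fundamentals and sentiment scores used in the thesis:

```python
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(2)
n = 200
revenue_growth = rng.normal(size=n)   # hypothetical fundamental (pure noise here)
sentiment = rng.normal(size=n)        # hypothetical sentiment score
prev_price = rng.normal(size=n)       # previous year's (scaled) stock price

# Synthetic label: did the stock beat the 10% growth threshold?
grew = (0.8 * prev_price + 0.3 * sentiment + rng.normal(0, 0.5, size=n) > 0).astype(int)

X = np.column_stack([revenue_growth, sentiment, prev_price])
f_scores, p_values = f_classif(X, grew)
# Higher F-scores mark features whose class means differ most between
# above-threshold and below-threshold stocks.
```

Ranking features by `f_scores` is the same kind of comparison the abstract draws between fundamentals and sentiment as predictors.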
APA, Harvard, Vancouver, ISO, and other styles
23

Lee, Dong-Hyun. "Difference target propagation." Thèse, 2018. http://hdl.handle.net/1866/21284.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Lamarre, Aldo. "Apprentissage de circuits quantiques par descente de gradient classique." Thesis, 2020. http://hdl.handle.net/1866/24322.

Full text
Abstract:
Nous présentons un nouvel algorithme d’apprentissage de circuits quantiques basé sur la descente de gradient classique. Comme ce sujet unifie deux disciplines, nous expliquons les deux domaines aux gens de l’autre discipline. Conséquemment, nous débutons par une présentation du calcul quantique et des circuits quantiques pour les gens en apprentissage automatique suivi d’une présentation des algorithmes d’apprentissage automatique pour les gens en informatique quantique. Puis, pour motiver et mettre en contexte nos résultats, nous passons à une légère revue de littérature en apprentissage automatique quantique. Ensuite, nous présentons notre modèle, son algorithme, ses variantes et quelques résultats empiriques. Finalement, nous critiquons notre implémentation en montrant des extensions et des nouvelles approches possibles. Les résultats principaux se situent dans ces deux dernières parties, qui sont respectivement les chapitres 4 et 5 de ce mémoire. Le code de l’algorithme et des expériences que nous avons créé pour ce mémoire se trouve sur notre github à l’adresse suivante : https://github.com/AldoLamarre/quantumcircuitlearning.<br>We present a new learning algorithm for quantum circuits based on gradient descent. Since this subject unifies two areas of research, we explain each field for people working in the other domain. Consequently, we begin by introducing quantum computing and quantum circuits to machine learning specialists, followed by an introduction of machine learning to quantum computing specialists. To give context and motivate our results, we then give a light literature review on quantum machine learning. After this, we present our model, its algorithms and its variants, then discuss our currently achieved empirical results. Finally, we criticize our models by giving extensions and future work directions. These last two parts are our main results. They can be found in chapters 4 and 5, respectively.
Our code which helped obtain these results can be found on GitHub at this link: https://github.com/AldoLamarre/quantumcircuitlearning.
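The core idea, tuning quantum-circuit parameters by classical gradient descent, can be shown in miniature with a single RY gate whose angle is optimized to reach a target state. The parameter-shift rule used below gives exact gradients for such rotation gates; this toy is only a sketch of the approach, not the mémoire's actual algorithm:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

target = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])  # RY(pi/4)|0>
zero = np.array([1.0, 0.0])

def loss(theta):
    # 1 - fidelity between the circuit output and the target state.
    return 1.0 - abs(target @ (ry(theta) @ zero)) ** 2

theta, lr = 2.0, 0.5
for _ in range(200):
    # Parameter-shift rule: the exact gradient from two shifted circuit runs.
    grad = 0.5 * (loss(theta + np.pi / 2) - loss(theta - np.pi / 2))
    theta -= lr * grad
# theta converges to pi/4 (mod 2*pi), where the loss vanishes.
```

Because the gradient comes from evaluating the circuit at shifted parameters, the same classical optimizer loop works whether the circuit is simulated or run on hardware.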
APA, Harvard, Vancouver, ISO, and other styles
25

Considine, Breandan. "Programming tools for intelligent systems." Thesis, 2020. http://hdl.handle.net/1866/24310.

Full text
Abstract:
Les outils de programmation sont des programmes informatiques qui aident les humains à programmer des ordinateurs. Les outils sont de toutes formes et tailles, par exemple les éditeurs, les compilateurs, les débogueurs et les profileurs. Chacun de ces outils facilite une tâche principale dans le flux de travail de programmation qui consomme des ressources cognitives lorsqu’il est effectué manuellement. Dans cette thèse, nous explorons plusieurs outils qui facilitent le processus de construction de systèmes intelligents et qui réduisent l’effort cognitif requis pour concevoir, développer, tester et déployer des systèmes logiciels intelligents. Tout d’abord, nous introduisons un environnement de développement intégré (EDI) pour la programmation d’applications Robot Operating System (ROS), appelé Hatchery (Chapter 2). Deuxièmement, nous décrivons Kotlin∇, un système de langage et de type pour la programmation différenciable, un paradigme émergent dans l’apprentissage automatique (Chapter 3). Troisièmement, nous proposons un nouvel algorithme pour tester automatiquement les programmes différenciables, en nous inspirant des techniques de tests contradictoires et métamorphiques (Chapter 4), et démontrons son efficacité empirique dans le cadre de la régression. Quatrièmement, nous explorons une infrastructure de conteneurs basée sur Docker, qui permet un déploiement reproductible des applications ROS sur la plateforme Duckietown (Chapter 5). Enfin, nous réfléchissons à l’état actuel des outils de programmation pour ces applications et spéculons à quoi pourrait ressembler la programmation de systèmes intelligents à l’avenir (Chapter 6).<br>Programming tools are computer programs which help humans program computers. Tools come in all shapes and forms, from editors and compilers to debuggers and profilers. Each of these tools facilitates a core task in the programming workflow which consumes cognitive resources when performed manually. 
In this thesis, we explore several tools that facilitate the process of building intelligent systems, and which reduce the cognitive effort required to design, develop, test and deploy intelligent software systems. First, we introduce an integrated development environment (IDE) for programming Robot Operating System (ROS) applications, called Hatchery (Chapter 2). Second, we describe Kotlin∇, a language and type system for differentiable programming, an emerging paradigm in machine learning (Chapter 3). Third, we propose a new algorithm for automatically testing differentiable programs, drawing inspiration from techniques in adversarial and metamorphic testing (Chapter 4), and demonstrate its empirical efficiency in the regression setting. Fourth, we explore a container infrastructure based on Docker, which enables reproducible deployment of ROS applications on the Duckietown platform (Chapter 5). Finally, we reflect on the current state of programming tools for these applications and speculate what intelligent systems programming might look like in the future (Chapter 6).
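The gradient-testing idea of Chapter 4 can be illustrated with a simple metamorphic relation: an analytic derivative must agree with a central finite-difference estimate at randomly sampled inputs. The function, sampling range, and tolerance below are illustrative assumptions, not Kotlin∇'s actual test oracle:

```python
import numpy as np

def f(x):
    return np.sin(x) * x**2

def df(x):
    # The "differentiable program" under test: its claimed derivative of f.
    return np.cos(x) * x**2 + 2 * x * np.sin(x)

rng = np.random.default_rng(3)
eps = 1e-5
for x in rng.uniform(-3, 3, size=100):
    # Metamorphic check: analytic and numeric derivatives must agree.
    numeric = (f(x + eps) - f(x - eps)) / (2 * eps)
    assert abs(df(x) - numeric) < 1e-4, f"gradient mismatch at x={x}"
```

Random sampling makes this a cheap property-based oracle: no hand-written expected values are needed, only the relation between the program and its derivative.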
APA, Harvard, Vancouver, ISO, and other styles