
Dissertations / Theses on the topic 'Learning vector quantization neural network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Learning vector quantization neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Soflaei, Shahrbabak Masoumeh. "Aggregated Learning: An Information Theoretic Framework to Learning with Neural Networks." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41399.

Full text
Abstract:
Deep learning techniques have achieved profound success in many challenging real-world applications, including image recognition, speech recognition, and machine translation. This success has increased the demand for developing deep neural networks and more effective learning approaches. The aim of this thesis is to consider the problem of learning a neural network classifier and to propose a novel approach to solve this problem under the Information Bottleneck (IB) principle. Based on the IB principle, we associate with the classification problem a representation learning problem, which we call "IB learning". A careful investigation shows there is an unconventional quantization problem that is closely related to IB learning. We formulate this problem and call it "IB quantization". We show that IB learning is, in fact, equivalent to the IB quantization problem. The classical results in rate-distortion theory then suggest that IB learning can benefit from a vector quantization approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted with some variational techniques, results in a novel learning framework that we call "Aggregated Learning (AgrLearn)", for classification with neural network models. In this framework, several objects are jointly classified by a single neural network. In other words, AgrLearn can simultaneously optimize against multiple data samples, unlike standard neural networks. In this learning framework, two classes are introduced, "deterministic AgrLearn (dAgrLearn)" and "probabilistic AgrLearn (pAgrLearn)". We verify the effectiveness of this framework through extensive experiments on standard image recognition tasks. We also demonstrate the performance of this framework on a real-world natural language processing (NLP) task, sentiment analysis. We also compare the effectiveness of this framework with other available frameworks for the IB learning problem.
APA, Harvard, Vancouver, ISO, and other styles
2

Clayton, Arnshea. "The Relative Importance of Input Encoding and Learning Methodology on Protein Secondary Structure Prediction." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/19.

Full text
Abstract:
In this thesis the relative importance of input encoding and learning algorithm on protein secondary structure prediction is explored. A novel input encoding, based on multidimensional scaling applied to a recently published amino acid substitution matrix, is developed and shown to be superior to an arbitrary input encoding. Both decimal valued and binary input encodings are compared. Two neural network learning algorithms, Resilient Propagation and Learning Vector Quantization, which have not previously been applied to the problem of protein secondary structure prediction, are examined. Input encoding is shown to have a greater impact on prediction accuracy than learning methodology with a binary input encoding providing the highest training and test set prediction accuracy.
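The Learning Vector Quantization method examined in this thesis follows a simple prototype-update rule. A minimal sketch of the classic LVQ1 step is given below; the function name, learning rate, and toy data are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.05):
    """One LVQ1 update: pull the winning prototype toward x if the
    labels match, push it away otherwise."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    w = int(np.argmin(dists))                      # winning prototype
    sign = 1.0 if proto_labels[w] == y else -1.0   # attract or repel
    prototypes[w] += sign * lr * (x - prototypes[w])
    return prototypes

# toy usage: two 2-D prototypes, one labelled 0 and one labelled 1
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
protos = lvq1_step(protos, labels, x=np.array([0.2, 0.1]), y=0)
```

Repeating this step over a labelled training set moves the prototypes until they partition the input space by class.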
APA, Harvard, Vancouver, ISO, and other styles
3

Ramesh, Rohit. "Abnormality detection with deep learning." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118542/1/Rohit_Ramesh_Thesis.pdf.

Full text
Abstract:
This thesis is a step forward in developing the scientific basis for abnormality detection of individuals in crowded environments by utilizing a deep learning method. Such monitoring of human behavior in crowds is useful for public safety and security purposes.
APA, Harvard, Vancouver, ISO, and other styles
4

Lundberg, Emil. "Adding temporal plasticity to a self-organizing incremental neural network using temporal activity diffusion." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180346.

Full text
Abstract:
Vector Quantization (VQ) is a classic optimization problem and a simple approach to pattern recognition. Applications include lossy data compression, clustering, and speech and speaker recognition. Although VQ has largely been replaced by time-aware techniques like Hidden Markov Models (HMMs) and Dynamic Time Warping (DTW) in some applications, such as speech and speaker recognition, VQ still retains some significance due to its much lower computational cost, especially for embedded systems. A recent study also demonstrates a multi-section VQ system which achieves performance rivaling that of DTW in an application to handwritten signature recognition, at a much lower computational cost. Adding sensitivity to temporal patterns to a VQ algorithm could help improve such results further. SOTPAR2 is such an extension of Neural Gas, an Artificial Neural Network algorithm for VQ. SOTPAR2 uses a conceptually simple approach, based on adding lateral connections between network nodes and creating "temporal activity" that diffuses through adjacent nodes. The activity in turn biases the nearest-neighbor classifier toward network nodes with high activity, and the SOTPAR2 authors report improvements over Neural Gas in an application to time series prediction. This report presents an investigation of how this same extension affects quantization and prediction performance of the self-organizing incremental neural network (SOINN) algorithm. SOINN is a VQ algorithm which automatically chooses a suitable codebook size and can also be used for clustering with arbitrary cluster shapes. This extension is found not to improve the performance of SOINN; in fact, it makes performance worse in all experiments attempted.
A discussion of this result is provided, along with a discussion of the impact of the algorithm parameters, and possible future work to improve the results is suggested.
APA, Harvard, Vancouver, ISO, and other styles
5

Filho, Luiz Soares de Andrade. "Projeto de classificadores de padrões baseados em protótipos usando evolução diferencial." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=14230.

Full text
Abstract:
In this Master's dissertation we introduce an evolutionary approach for the efficient design of prototype-based classifiers using differential evolution (DE). For this purpose we amalgamate ideas from the Learning Vector Quantization (LVQ) framework for supervised classification by Kohonen (KOHONEN, 2001), with the DE-based automatic clustering approach by Das et al. (DAS; ABRAHAM; KONAR, 2008), in order to evolve supervised classifiers. The proposed approach is able to determine both the optimal number of prototypes per class and the corresponding positions of these prototypes in the data space. By means of comprehensive computer simulations on benchmarking datasets, we show that the resulting classifier, named LVQ-DE, consistently outperforms state-of-the-art prototype-based classifiers, with a much smaller number of prototypes.
APA, Harvard, Vancouver, ISO, and other styles
6

Cruz, Magnus Alencar da. "Avaliação de redes neurais competitivas em tarefas de quantização vetorial: um estudo comparativo." Universidade Federal do Ceará, 2007. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=2016.

Full text
Abstract:
The main goal of this master's thesis was to carry out a comparative study of the performance of unsupervised competitive neural network algorithms on vector quantization (VQ) tasks and related applications, such as cluster analysis and image compression. This study is mainly motivated by the relative scarcity of systematic comparisons between neural and non-neural algorithms for VQ in the specialized literature. A total of seven algorithms are evaluated, namely: K-means, WTA, FSCL, SOM, Neural-Gas, FuzzyCL and RPCL. Of particular interest is the problem of selecting an adequate number of neurons given a particular vector quantization problem.
Since there is no widespread method that works satisfactorily for all applications, the remaining alternative is to evaluate the influence that each type of evaluation metric has on a specific algorithm. For example, the aforementioned vector quantization algorithms are widely used in clustering-related tasks. For this type of application, cluster validation is based on indexes that quantify the degrees of compactness and separability among clusters, such as the Dunn index and the Davies-Bouldin (DB) index. In image compression tasks, however, a given vector quantization algorithm is evaluated in terms of the quality of the reconstructed information, so that the most used evaluation metrics are the mean squared quantization error (MSQE) and the peak signal-to-noise ratio (PSNR). This work verifies empirically that, while the Dunn index favors architectures with many prototypes and the DB index favors architectures with few, the MSE and PSNR metrics always favor even larger numbers. None of the evaluation metrics cited previously takes into account the number of parameters of the model. Thus, this thesis evaluates the feasibility of using Akaike's information criterion (AIC) and Rissanen's minimum description length (MDL) criterion to select the optimal number of prototypes. This type of evaluation metric indeed reveals itself useful in the search for the number of prototypes that simultaneously satisfies conflicting criteria, i.e. those favoring more compact and cohesive clusters (Dunn and DB indices) versus those searching for very low reconstruction errors (MSE and PSNR). Thus, the number of prototypes suggested by AIC and MDL is generally an intermediate value, i.e. neither as low as suggested by the Dunn and DB indices, nor as high as suggested by the MSE and PSNR metrics.
Another important conclusion is that sophisticated models, such as the SOM and Neural-Gas networks, do not necessarily perform best in clustering and VQ tasks. For example, the FSCL and FuzzyCL algorithms present better results in terms of the quality of the reconstructed information, with FSCL presenting the better cost-benefit ratio due to its lower computational cost. As a final remark, it is worth emphasizing that if a given algorithm has its parameters suitably tuned and its performance fairly evaluated, the differences in performance compared to other prototype-based algorithms are minimal, with computational cost being used to break ties.
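The reconstruction-quality metrics discussed in this abstract can be computed roughly as in the following sketch (the variable names, the generic nearest-prototype codebook, and the peak value are illustrative assumptions, not the thesis's code):

```python
import numpy as np

def quantize(data, codebook):
    """Map every data vector to its nearest codebook prototype."""
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return codebook[np.argmin(d, axis=1)]

def msqe(data, codebook):
    """Mean squared quantization error over the dataset."""
    rec = quantize(data, codebook)
    return float(np.mean(np.sum((data - rec) ** 2, axis=1)))

def psnr(data, codebook, peak=255.0):
    """Peak signal-to-noise ratio from the per-component MSE."""
    mse = msqe(data, codebook) / data.shape[1]
    return 10.0 * np.log10(peak ** 2 / mse)
```

Indices such as Dunn and DB would be computed on the resulting partition instead, which is why they can disagree with MSQE/PSNR about the best codebook size.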
APA, Harvard, Vancouver, ISO, and other styles
7

Pahkasalo, Carolina, and André Sollander. "Adaptive Energy Management Strategies for Series Hybrid Electric Wheel Loaders." Thesis, Linköpings universitet, Fordonssystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166284.

Full text
Abstract:
An emerging technology is the hybridization of wheel loaders. Since wheel loaders commonly operate in repetitive cycles, it should be possible to use this information to develop an efficient energy management strategy that decreases fuel consumption. The purpose of this thesis is to evaluate if and how this can be done in a real-time online application. The strategy that is developed is based on pattern recognition and the Equivalent Consumption Minimization Strategy (ECMS), together called Adaptive ECMS (A-ECMS). Pattern recognition uses information about the repetitive cycles and predicts the operating cycle, which can be done with Neural Network or Rule-Based methods. The prediction is then used in ECMS to compute the optimal power distribution of fuel and battery power. For a robust system, it is important to include stability measures in ECMS to protect the machine, which can be done by adjusting the cost function that is minimized. The result from these implementations in a quasistatic simulation environment is a 7.59 % improvement in fuel consumption compared to not utilizing the battery at all.
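The core ECMS step, choosing the power split that minimizes an equivalent fuel consumption, can be sketched as follows. The quadratic fuel map, equivalence factor `s`, and battery power limits below are invented for illustration and are not taken from the thesis:

```python
import numpy as np

def fuel_power(p_eng):
    """Toy convex engine fuel-power map (an assumption, not the
    thesis model): idle loss plus losses growing with load."""
    return 0.3 + 2.0 * p_eng + 0.05 * p_eng ** 2

def ecms_split(p_demand, s=2.5, p_batt_max=20.0):
    """Pick the battery power (positive = discharging) minimizing
    equivalent consumption: fuel power plus s times battery power."""
    candidates = np.linspace(-p_batt_max, p_batt_max, 401)
    p_eng = p_demand - candidates
    cost = np.where(p_eng >= 0, fuel_power(p_eng) + s * candidates, np.inf)
    best = int(np.argmin(cost))
    return float(candidates[best]), float(p_eng[best])

u_batt, p_eng = ecms_split(10.0)
```

In A-ECMS the predicted operating cycle is used to adapt `s` online, which is what makes the strategy cycle-aware.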
APA, Harvard, Vancouver, ISO, and other styles
8

Brosnan, Timothy Myers. "Neural network and vector quantization classifiers for recognition and inspection applications." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15378.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Khudhair, Ali Dheyaa. "VECTOR QUANTIZATION USING ODE BASED NEURAL NETWORK WITH VARYING VIGILANCE PARAMETER." OpenSIUC, 2012. https://opensiuc.lib.siu.edu/dissertations/478.

Full text
Abstract:
The importance of Vector Quantization has been increasing, and it is becoming a vital element in classifying and clustering different types of information, supporting machine learning and decision making; however, the various techniques that implement Vector Quantization have always fallen short in some respect. Many researchers have pursued the idea of a Vector Quantization mechanism that is fast and can classify data that is rapidly generated from some source; most such mechanisms depend on a specific style of neural network, and this research is one of those attempts. One dilemma this technology faces is the compromise between the accuracy of the results and the speed of the classification or quantization process. Moreover, the complexity of the suggested algorithms makes it very hard to implement and realize any of them on hardware that can serve as a fast online classifier able to keep up with the speed of the information being presented to the system; examples of such information sources are high-speed processors and computer network intrusion detection systems. This research focuses on creating a Vector Quantizer using neural networks. The neural network used in this study is a novel one with a unique feature: it is based solely on a set of ordinary differential equations. The input data are injected into those equations, and classification is based on finding the equilibrium points of the system in the presence of those input patterns. The elimination of conditional statements in this neural network means that the implementation and execution of the classification process have a single path that can accommodate any value.
A single execution path allows easier algorithm analysis and opens the possibility of realizing the network on a purely analog circuit whose operating speed can match that of the incoming information and classify the data in real time. The details of this dynamical system are provided in this research, and the shortcomings we faced and how we overcame them are explained in detail. A drastic change in the way of looking at the speed-versus-accuracy compromise is also presented, aiming toward a technique that can produce accurate results at high speed.
APA, Harvard, Vancouver, ISO, and other styles
10

Kalmár, Marcus, and Joel Nilsson. "The art of forecasting – an analysis of predictive precision of machine learning models." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-280675.

Full text
Abstract:
Forecasting is used for decision making, and unreliable predictions can instill a false sense of confidence. Traditional time series modelling is a statistical art form rather than a science, and errors can occur due to limitations of human judgment. To minimize the risk of falsely specifying a process, the practitioner can make use of machine learning models. In an effort to find out if there is a benefit in using models that require less human judgment, the machine learning models Random Forest and Neural Network have been used to model a VAR(1) time series. In addition, the classical time series models AR(1), AR(2), VAR(1) and VAR(2) have been used as a comparative foundation. The Random Forest and Neural Network are trained, and ultimately the models are used to make predictions evaluated by RMSE. All models yield scattered forecast results except for the Random Forest, which steadily yields comparatively precise predictions. The study shows that there is a definitive benefit in using Random Forests to eliminate the risk of falsely specifying a process, and they do in fact provide better results than a correctly specified model.
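The experimental setup described, fitting a Random Forest to one-step-ahead prediction of a simulated VAR(1) series, can be sketched as below; the coefficient matrix, sample sizes, noise level, and seed are illustrative assumptions, not the thesis's actual design:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
A = np.array([[0.6, 0.2], [0.1, 0.5]])   # stable VAR(1) coefficient matrix

# simulate y_t = A @ y_{t-1} + noise
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# one-step-ahead pairs: features y_{t-1}, targets y_t
X_train, Y_train = y[:249], y[1:250]
X_test, Y_test = y[249:-1], y[250:]

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, Y_train)
rmse = float(np.sqrt(np.mean((rf.predict(X_test) - Y_test) ** 2)))
```

Since the noise floor here is 0.1, an RMSE near that level indicates the forest has essentially recovered the linear dynamics without being told the process is a VAR(1).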
APA, Harvard, Vancouver, ISO, and other styles
11

Rostami, Jako, and Fredrik Hansson. "Time Series Forecasting of House Prices: An evaluation of a Support Vector Machine and a Recurrent Neural Network with LSTM cells." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385823.

Full text
Abstract:
In this thesis, we examine the performance of different forecasting methods. We use data of monthly house prices from the larger Stockholm area and the municipality of Uppsala between 2005 and early 2019 as the time series to be forecast. Firstly, we compare the performance of two machine learning methods, the Long Short-Term Memory and the Support Vector Machine methods. The two methods' forecasts are compared, and the model with the lowest forecasting error measured by three metrics is chosen to be compared with a classic seasonal ARIMA model. We find that the Long Short-Term Memory method is the better performing machine learning method for a twelve-month forecast, but that it still does not forecast as well as the ARIMA model for the same forecast period.
APA, Harvard, Vancouver, ISO, and other styles
12

Darnald, Johan. "Predicting Attrition in Financial Data with Machine Learning Algorithms." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-225852.

Full text
Abstract:
For most businesses there are costs involved in acquiring new customers, and longer relationships with customers are therefore often more profitable. Predicting whether an individual is prone to leave the business is then a useful tool to help any company take actions to mitigate this cost. The event when a person ends their relationship with a business is called attrition or churn. Predicting people's actions is, however, hard, and many different factors can affect their choices. This paper investigates different machine learning methods for predicting attrition in the customer base of a bank. Four different methods are chosen based on the results they have shown in previous research, and these are then tested and compared to find which works best for predicting these events. Four different datasets from two different products and with two different applications are created from real-world data from a European bank. All methods are trained and tested on each dataset. The results of the tests are then evaluated and compared to find what works best. The methods found in previous research to most reliably achieve good results in predicting churn in banking customers are the Support Vector Machine, Neural Network, Balanced Random Forest, and Weighted Random Forest. The results show that the Balanced Random Forest achieves the best results, with an average AUC of 0.698 and an average F-score of 0.376. The accuracy and precision of the model are concluded not to be enough to make definite decisions, but the model can be used together with other factors, such as profitability estimations, to improve the effectiveness of any actions taken to prevent the negative effects of churn.
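The evaluation metrics reported above, AUC and F-score, can be computed as in this small sketch (the toy labels and model scores are invented for illustration):

```python
from sklearn.metrics import roc_auc_score, f1_score

# hypothetical churn labels (1 = churned) and model scores for six customers
y_true = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]

# AUC measures ranking quality and needs no threshold;
# F-score needs hard predictions, here thresholded at 0.5
auc = roc_auc_score(y_true, scores)
f1 = f1_score(y_true, [int(s >= 0.5) for s in scores])
```

AUC is often preferred for imbalanced churn data because it is insensitive to the chosen decision threshold, while the F-score reflects a specific operating point.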
APA, Harvard, Vancouver, ISO, and other styles
13

Conti, Matteo. "Machine Learning Based Programming Language Identification." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20875/.

Full text
Abstract:
The advent of the digital era has contributed to the development of new technology sectors which, as a direct consequence, have led to demand for new professional profiles capable of playing a key role in the process of technological innovation. This increased demand has particularly affected the software development sector, following the birth of new programming languages and new fields to which to apply them. The main component of a piece of software is its source code, which can be represented as an archive of one or more text files containing a series of instructions written in one or more programming languages. Although many of these languages are used in different technology sectors, it often happens that two or more of them share a very similar syntactic and semantic structure. Clearly, this can cause confusion when identifying the language within a code fragment, especially if not even the file extension is specified. Indeed, today, most of the code available online carries manually specified information about its programming language. In this work we focus on demonstrating that the programming language of a 'generic' source code file can be identified automatically using machine learning algorithms, without any a priori assumption about the file extension or any information beyond the content of the file. This project follows the line set by previous research based on the same approach, comparing different feature extraction techniques and classification algorithms with very different characteristics, seeking to optimize the feature extraction phase for the model under consideration.
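The content-only identification approach described can be sketched with character n-gram features feeding a simple classifier. The toy snippets, the pipeline, and the Naive Bayes model below are illustrative assumptions, not the thesis's actual setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# tiny toy corpus; real work would use thousands of labelled files
snippets = [
    "def main():\n    print('hi')",
    "import os\nfor i in range(3): pass",
    "#include <stdio.h>\nint main(void) { return 0; }",
    "void f(int *p) { *p = 1; }",
]
langs = ["python", "python", "c", "c"]

# character 1- to 3-grams capture syntax without using the file name
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
    MultinomialNB(),
)
clf.fit(snippets, langs)
```

Note that only the file contents enter the vectorizer, which is exactly the no-extension, no-metadata constraint the thesis imposes.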
APA, Harvard, Vancouver, ISO, and other styles
14

Gyawali, Sanij. "Dynamic Load Modeling from PSSE-Simulated Disturbance Data using Machine Learning." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/100591.

Full text
Abstract:
Load models have evolved from the simple ZIP model to a composite model that incorporates the transient dynamics of motor loads. This research utilizes the latest trends in machine learning to build a reliable and accurate composite load model. A composite load model is a combination of a static (ZIP) model paralleled with a dynamic model. The dynamic model, recommended by the Western Electricity Coordinating Council (WECC), is an induction motor representation. In this research, a dual-cage induction motor with 20 parameters pertaining to its dynamic behavior, starting behavior, and per-unit calculations is used as the dynamic model. For machine learning algorithms, a large amount of data is required. The required PMU field data and the corresponding system models are considered Critical Energy Infrastructure Information (CEII) and access to them is limited. The next best option for the required amount of data is a simulating environment like PSSE. The IEEE 118 bus system is used as a test setup in PSSE, and dynamic simulations generate the required data samples. Each of the samples contains data on bus voltage, bus current, and bus frequency with the corresponding induction motor parameters as target variables. It was determined that the Artificial Neural Network (ANN) with a multivariate-input to single-parameter-output approach worked best. A Recurrent Neural Network (RNN) was also tried side by side to see if an additional set of timestamp information would help the model prediction. Moreover, a different definition of the dynamic model, with a transfer-function-based load, is also studied. Here, the dynamic model is defined as a mathematical representation of the relation between bus voltage, bus frequency, and the active/reactive power flowing in the bus. With this form of load representation, Long Short-Term Memory (LSTM), a variation of RNN, performed better than competing algorithms such as Support Vector Regression (SVR).
The result of this study is a load model consisting of parameters defining the load at load bus whose predictions are compared against simulated parameters to examine their validity for use in contingency analysis.<br>Master of Science<br>Independent system Operators (ISO) and Distribution system operators (DSO) have a responsibility to provide uninterrupted power supply to consumers. That along with the longing to keep operating cost minimum, engineers and planners study the system beforehand and seek to find the optimum capacity for each of the power system elements like generators, transformers, transmission lines, etc. Then they test the overall system using power system models, which are mathematical representation of the real components, to verify the stability and strength of the system. However, the verification is only as good as the system models that are used. As most of the power systems components are controlled by the operators themselves, it is easy to develop a model from their perspective. The load is the only component controlled by consumers. Hence, the necessity of better load models. Several studies have been made on static load modeling and the performance is on par with real behavior. But dynamic loading, which is a load behavior dependent on time, is rather difficult to model. Some attempts on dynamic load modeling can be found already. Physical component-based and mathematical transfer function based dynamic models are quite widely used for the study. These load structures are largely accepted as a good representation of the systems dynamic behavior. With a load structure in hand, the next task is estimating their parameters. In this research, we tested out some new machine learning methods to accurately estimate the parameters. Thousands of simulated data are used to train machine learning models. After training, we validated the models on some other unseen data. This study finally goes on to recommend better methods to load modeling.
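The multivariate-input, single-parameter-output ANN approach described above can be sketched with scikit-learn. This is a minimal illustration only: the flattened voltage/current/frequency window layout and the synthetic "motor parameter" target are assumptions standing in for the thesis's actual PSSE-generated data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Illustrative stand-in for PSSE disturbance samples: each row flattens a
# short window of bus voltage, bus current, and bus frequency measurements.
n_samples, window = 2000, 30
X = rng.normal(size=(n_samples, 3 * window))

# Hypothetical target: a single induction-motor parameter modeled as some
# unknown smooth function of the measurements (pure illustration).
y = 0.05 + 0.01 * np.tanh(X[:, :window].mean(axis=1))

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

scaler = StandardScaler().fit(X_train)

# One network per target parameter, mirroring the multivariate-input,
# single-parameter-output approach.
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(scaler.transform(X_train), y_train)
pred = model.predict(scaler.transform(X_test))
```

In the full approach, one such regressor would be trained for each of the 20 motor parameters.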
APA, Harvard, Vancouver, ISO, and other styles
15

Lai, Guojun, and Bing Li. "Handwritten Document Binarization Using Deep Convolutional Features with Support Vector Machine Classifier." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20090.

Full text
Abstract:
Background. Since historical handwritten documents have played important roles in promoting the development of human civilization, many of them have been preserved as digital versions for further scientific research. However, various degradations are always present in these documents and can interfere with normal reading, whereas binarized versions keep the meaningful content without the degradations of the original document images. Document image binarization usually works as a pre-processing step before complex document analysis and recognition; it aims to extract the text from a document image, and a good binarization result benefits the subsequent processing steps. Efficient binarization methods are therefore needed. In recent years, machine learning centered on deep learning has gathered substantial attention in document image binarization; for example, Convolutional Neural Networks (CNNs) are widely applied because of their powerful feature extraction and classification abilities. Meanwhile, the Support Vector Machine (SVM) is also used in image binarization. Its objective is to build an optimal hyperplane that maximizes the margin between negative and positive samples, which can distinctly separate the foreground pixels from the background pixels of the image. Objectives. This thesis aims to explore how the CNN-based process of deep convolutional feature extraction and an SVM classifier can be integrated to binarize handwritten document images, and how the results compare with some state-of-the-art document binarization methods. Methods. To investigate the effect of the proposed method on document image binarization, it is implemented and trained. In the architecture, a CNN extracts features from the input images, after which these features are fed into an SVM for classification. The model is trained and tested with six different datasets.
Then, the proposed model is compared with other binarization methods, including some state-of-the-art methods, on three further datasets. Results. The performance results indicate that the proposed model not only works well but also outperforms several other recent handwritten document binarization methods. In particular, the evaluation on the DIBCO 2013 dataset indicates that our method outperforms the other chosen binarization methods on all four evaluation metrics. It also handles various degradations well: when a new kind of degradation appears, the proposed method can address it properly even though it never appeared in the training datasets, demonstrating excellent generalization and learning ability. Conclusions. This thesis concludes that a CNN-based component and an SVM can be combined for handwritten document binarization, and that on certain datasets the combination outperforms some other state-of-the-art binarization methods.
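The coupling of a convolutional feature extractor with an SVM pixel classifier can be sketched as follows. To stay self-contained, fixed hand-crafted convolution kernels stand in for the learned CNN features, and the tiny synthetic "document" is an assumption; the thesis's actual architecture learns its features end to end.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic grayscale "document": dark strokes (text) on a noisy background.
img = rng.normal(0.8, 0.05, size=(32, 32))
img[10:12, 4:28] = 0.10          # a horizontal stroke
img[4:28, 15:17] = 0.15          # a vertical stroke
truth = (img < 0.5).astype(int)  # ground-truth binarization

# Stand-in for convolutional features: responses of a few fixed kernels.
kernels = [
    np.ones((3, 3)) / 9.0,                      # local mean
    np.array([[-1, 0, 1]] * 3, dtype=float),    # horizontal gradient
    np.array([[-1, 0, 1]] * 3, dtype=float).T,  # vertical gradient
]
features = np.stack([convolve(img, k) for k in kernels] + [img], axis=-1)
X = features.reshape(-1, features.shape[-1])    # one feature row per pixel
y = truth.ravel()

# SVM separates text pixels from background pixels in feature space.
clf = SVC(kernel="rbf").fit(X, y)
binarized = clf.predict(X).reshape(img.shape)
```

The pipeline shape is the point here: per-pixel convolutional features in, a binary foreground/background decision out.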
APA, Harvard, Vancouver, ISO, and other styles
16

Jabali, Aghyad, and Husein Abdelkadir Mohammedbrhan. "Tyre sound classification with machine learning." Thesis, Högskolan i Gävle, Datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-36209.

Full text
Abstract:
Having enough data about the usage of tyre types on the road can lead to a better understanding of the consequences of studded tyres for the environment. This paper focuses on training and testing a machine learning model that can later be integrated into a larger system for automating the data collection process. Different machine learning algorithms, namely CNN, SVM, and Random Forest, were compared in this experiment. The method used in this paper is empirical. First, sound data for studded and non-studded tyres was collected from three different locations in the city of Gävle, Sweden. A total of 760 Mel spectrograms from both classes were generated to train and test a well-known CNN model (AlexNet) in MATLAB. Sound features for both classes were extracted using JAudio to train and test models using SVM and Random Forest classifiers in Weka. Unnecessary features were removed one by one from the feature list to improve the performance of the classifiers. The results show that the CNN achieved an accuracy of 84%, SVM performed best both with and without removing some audio features (94% and 92%, respectively), while Random Forest reached 89% accuracy. The test data comprised 51% studded and 49% non-studded samples, and the SVM model achieved more than 94%. This can therefore be considered an acceptable result for use in practice.
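Removing unnecessary features one by one, as done here for the Weka classifiers, amounts to backward feature elimination and can be sketched with scikit-learn. The synthetic feature matrix below is an assumption standing in for the JAudio features, and Random Forest is used as the classifier for the loop.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for extracted audio features: 2 informative columns + 4 noise columns.
n = 300
informative = rng.normal(size=(n, 2))
X = np.hstack([informative, rng.normal(size=(n, 4))])
y = (informative.sum(axis=1) > 0).astype(int)  # studded vs. non-studded

def cv_acc(X, y):
    # 5-fold cross-validated accuracy for a fixed Random Forest.
    return cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

kept = list(range(X.shape[1]))
best = cv_acc(X[:, kept], y)
improved = True
while improved and len(kept) > 1:
    improved = False
    for f in list(kept):
        trial = [c for c in kept if c != f]
        score = cv_acc(X[:, trial], y)
        if score >= best:  # drop the feature if accuracy does not suffer
            best, kept, improved = score, trial, True
            break
```

After the loop, `kept` holds the surviving feature indices and `best` the cross-validated accuracy achieved with them.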
APA, Harvard, Vancouver, ISO, and other styles
17

Davrieux, Sebastian. "Studio e realizzazione di un sistema per la Sentiment Analysis basato su reti neurali “deep”." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
This thesis work led to the realization of a polarity classification system for Italian-language Twitter. Given a set of keywords, the goal was to search for and retrieve relevant tweets, analyze the results by determining the polarity of each tweet, and display them graphically to the user, paying particular attention to the quality of the tweet analysis while delegating retrieval and graphical visualization to existing systems. The studies carried out led to the realization of a supervised classification system. The first step of the implemented system is a preprocessing stage that exploits Twitter's intrinsic features: emoticons, emoji, hashtags, etc. After preprocessing, the tweets are represented as vectors using the Paragraph Vector method, specifically the Doc2Vec implementation in the Gensim library. Classification is performed by two Convolutional Neural Networks (CNNs): the first determines whether a tweet is positive or not, and the second works the same way but determines whether it is negative or not. By combining the results of both classifiers, the tweets are divided into four categories: positive, negative, neutral, and mixed. The system was evaluated using the training and test tweets provided in the EVALITA 2016 evaluation campaign. The implemented idea is innovative, since a system combining the output of a Doc2Vec model with a CNN classifier had never been presented at EVALITA. The implemented model would have ranked second, demonstrating its excellent performance.
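The combination of the two binary CNNs into four polarity categories can be sketched as pure decision logic; the boolean classifier outputs below are stand-ins for the thesis's Doc2Vec+CNN pipeline.

```python
def combine_polarity(is_positive: bool, is_negative: bool) -> str:
    """Merge the outputs of the 'positive?' and 'negative?' classifiers
    into the four categories used by the system."""
    if is_positive and is_negative:
        return "mixed"
    if is_positive:
        return "positive"
    if is_negative:
        return "negative"
    return "neutral"

labels = [combine_polarity(p, n)
          for p, n in [(True, False), (False, True), (False, False), (True, True)]]
# labels == ["positive", "negative", "neutral", "mixed"]
```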
APA, Harvard, Vancouver, ISO, and other styles
18

Kratzert, Ludvig. "Adversarial Example Transferabilty to Quantized Models." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177590.

Full text
Abstract:
Deep learning has proven to be a major leap in machine learning, allowing completely new problems to be solved. While flexible and powerful, neural networks have the disadvantage of being large and demanding high performance from the devices on which they run. In order to deploy neural networks on more, and simpler, devices, techniques such as quantization, sparsification and tensor decomposition have been developed. These techniques have shown promising results, but their effects on model robustness against attacks remain largely unexplored. In this thesis, Universal Adversarial Perturbations (UAP) and the Fast Gradient Sign Method (FGSM) are tested against VGG-19 as well as versions of it compressed using 8-bit quantization, TensorFlow's float16 quantization, and the 8-bit and 4-bit single layer quantization (SLQ) introduced in this thesis. The results show that UAP transfers well to all quantized models, while the transferability of FGSM is high to the float16 quantized model, lower to the 8-bit models, and high to the 4-bit SLQ model. We suggest that this disparity arises from the universal adversarial perturbations having been trained on multiple examples rather than just one, which has previously been shown to increase transferability. The results also show that quantizing a single layer, the first layer in this case, can have a disproportionate impact on transferability.

The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
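FGSM itself is simple enough to sketch without a deep learning framework: perturb the input by ε in the direction of the sign of the loss gradient with respect to the input. Here a toy logistic-regression "network" stands in for VGG-19, so the model, data, and ε are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable classifier: p = sigmoid(w.x + b).
w = rng.normal(size=16)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, y_true):
    # For binary cross-entropy through sigmoid(w.x + b),
    # dL/dx = (p - y) * w.
    p = sigmoid(w @ x + b)
    return (p - y_true) * w

def fgsm(x, y_true, eps):
    # Fast Gradient Sign Method: one signed gradient step of size eps.
    return x + eps * np.sign(loss_grad_wrt_input(x, y_true))

x = rng.normal(size=16)
y_true = 1.0
x_adv = fgsm(x, y_true, eps=0.25)

clean_p = sigmoid(w @ x + b)
adv_p = sigmoid(w @ x_adv + b)  # confidence in the true class drops
```

In the transferability experiments, the `x_adv` crafted against a full-precision model would then be fed to the quantized variants.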
APA, Harvard, Vancouver, ISO, and other styles
19

Park, Samuel M. "A Comparison of Machine Learning Techniques to Predict University Rates." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1564790014887692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Uziela, Karolis. "Protein Model Quality Assessment : A Machine Learning Approach." Doctoral thesis, Stockholms universitet, Institutionen för biokemi och biofysik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-137695.

Full text
Abstract:
Many protein structure prediction programs exist, and they can efficiently generate a number of protein models of varying quality. One of the problems is that it is difficult to know which model is the best one for a given target sequence. Selecting the best model is one of the major tasks of Model Quality Assessment Programs (MQAPs). These programs are able to predict model accuracy before the native structure is determined. The accuracy estimation can be divided into two parts: global (the whole-model accuracy) and local (the accuracy of each residue). ProQ2 is one of the most successful MQAPs for prediction of both local and global model accuracy and is based on a machine learning approach. In this thesis, I present my own contribution to Model Quality Assessment (MQA) and the newest developments of the ProQ program series. Firstly, I describe a new ProQ2 implementation in the protein modelling software package Rosetta. This new implementation allows use of ProQ2 as a scoring function for conformational sampling inside Rosetta, which was not possible before. Moreover, I present two new methods, ProQ3 and ProQ3D, that both outperform their predecessor. ProQ3 introduces new training features that are calculated from Rosetta energy functions, and ProQ3D introduces a new machine learning approach based on deep learning. The ProQ3 program participated in the 12th Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction (CASP12) and was one of the best methods in the MQA category. Finally, an important issue in model quality assessment is how to select a target function that the predictor is trying to learn. In the fourth manuscript, I show that MQA results can be improved by selecting a contact-based target function instead of more conventional superposition-based functions.

At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 3: Manuscript.
APA, Harvard, Vancouver, ISO, and other styles
21

Kothawade, Rohan Dilip. "Wine quality prediction model using machine learning techniques." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20009.

Full text
Abstract:
The quality of a wine is important for consumers as well as for the wine industry. The traditional (expert) way of measuring wine quality is time-consuming. Nowadays, machine learning models are important tools for replacing such human tasks. Several features can be used to predict wine quality, but not all of them are relevant for good prediction, so this thesis focuses on which wine features are important for obtaining promising results. For building the classification models and evaluating the relevant features, we used three algorithms, namely support vector machine (SVM), naïve Bayes (NB), and artificial neural network (ANN). In this study, we used two wine quality datasets, red wine and white wine. To evaluate feature importance we used the Pearson correlation coefficient, and performance measures such as accuracy, recall, precision, and F1 score to compare the machine learning algorithms. A grid search algorithm was applied to improve model accuracy. Finally, the artificial neural network (ANN) algorithm achieved better prediction results than the support vector machine (SVM) and naïve Bayes (NB) algorithms for both the red wine and white wine datasets.
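Ranking features by Pearson correlation with the label and then tuning a classifier with a grid search can be sketched as follows; the synthetic "wine" data and the SVM hyperparameter grid are assumptions standing in for the thesis's red/white wine datasets and setup.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in wine data: 5 physico-chemical features, binary quality label
# driven mostly by features 0 and 2.
n = 400
X = rng.normal(size=(n, 5))
quality = (2 * X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Pearson correlation of each feature with the quality label.
corr = np.array([np.corrcoef(X[:, j], quality)[0, 1] for j in range(X.shape[1])])
ranked = np.argsort(-np.abs(corr))  # most relevant features first

# Grid search over SVM hyperparameters, using only the top-2 ranked features.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=5)
grid.fit(X[:, ranked[:2]], quality)
```

`grid.best_params_` then holds the tuned hyperparameters and `grid.best_score_` the cross-validated accuracy with the selected features.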
APA, Harvard, Vancouver, ISO, and other styles
22

Nordén, Frans, and Reis Marlevi Filip von. "A Comparative Analysis of Machine Learning Algorithms in Binary Facial Expression Recognition." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254259.

Full text
Abstract:
In this paper an analysis is conducted regarding whether a higher classification accuracy of facial expressions is possible. The approach used combines the seven basic emotional states into a binary classification problem. Five different machine learning algorithms are implemented: support vector machines, extreme learning machines, and three different Convolutional Neural Networks (CNNs). The CNNs used were one conventional, one based on VGG16 with transfer learning, and one based on residual learning, known as ResNet50. The experiment was conducted on two datasets: one small dataset containing no contamination, called JAFFE, and one large dataset containing contamination, called FER2013. The highest accuracy was achieved with the CNNs, where ResNet50 had the highest classification accuracy. Compared with the state-of-the-art accuracy, an improvement of around 0.09 was achieved on the FER2013 dataset. This dataset does, however, include some ambiguities regarding which facial expression is shown. It would therefore be of interest to conduct an experiment where humans classify the facial expressions in the dataset in order to establish a benchmark.
APA, Harvard, Vancouver, ISO, and other styles
23

Bodén, Johan. "A Comparative Study of Reinforcement-­based and Semi­-classical Learning in Sensor Fusion." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-84784.

Full text
Abstract:
Reinforcement learning has proven very useful in certain areas, such as games; however, the approach has been seen as quite limited. Reinforcement-based learning has, for instance, not commonly been used for classification tasks, since it only receives feedback on how well it performed for an action taken on a specific input. This slows the convergence rate compared to other classification approaches, which have both the input and the corresponding output to train on. Nevertheless, this thesis investigates whether reinforcement-based learning can successfully be employed on a classification task. Moreover, as sensor fusion is an expanding field that can, for instance, help autonomous vehicles understand their surroundings, it is also interesting to see how sensor fusion, i.e., fusion between lidar and RGB images, can increase performance on a classification task. In this thesis, a reinforcement-based learning approach is compared to a semi-classical approach. As an example of a reinforcement learning model, a deep Q-learning network was chosen, and a support vector machine classifier built on top of a deep neural network was chosen as an example of a semi-classical model. These frameworks are compared with and without sensor fusion to see whether fusion improves their performance. Experiments show that the evaluated reinforcement-based learning approach underperforms in terms of metrics, mainly due to its slow learning process, in comparison to the semi-classical approach. On the other hand, using reinforcement-based learning for a classification task can still be advantageous in some cases, as it performs fairly well on the metrics presented in this work, e.g. F1-score, and on imbalanced datasets. As for the impact of sensor fusion, a notable improvement can be seen:
when training the deep Q-learning model for 50 episodes, the F1-score increased by 0.1329. This is especially notable considering that most of the lidar data used in the fusion is lost, since this work projects the 3D lidar data onto the same 2D plane as the RGB images.
APA, Harvard, Vancouver, ISO, and other styles
24

Bahceci, Oktay, and Oscar Alsing. "Stock Market Prediction using Social Media Analysis." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166448.

Full text
Abstract:
Stock forecasting is commonly used in different forms every day in order to predict stock prices. Sentiment Analysis (SA), Machine Learning (ML) and Data Mining (DM) are techniques that have recently become popular for analyzing public emotion in order to predict future stock prices. The algorithms need large data sets to detect patterns; the tweet data was collected through a live stream, and the stock data through web scraping. This study examined how three organizations' stocks correlate with the public opinion of them on the social networking platform Twitter. Implementing various machine learning and classification models, such as the Artificial Neural Network, we successfully built a company-specific model capable of predicting stock price movement with 80% accuracy.
APA, Harvard, Vancouver, ISO, and other styles
25

El-Hage, Sebastian. "Predicting Purchase of Airline Seating Using Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280461.

Full text
Abstract:
With the continuing surge in digitalization within the travel industry and the increased demand for personalized services, understanding customer behaviour is becoming a requirement for travel agencies to survive. The number of studies addressing this problem is increasing, and machine learning is expected to be the enabling technique. This thesis trains two different models, a multi-layer perceptron and a support vector machine, to reliably predict whether a customer will add a seat reservation to their flight booking. The models are trained on a large dataset consisting of 69 variables and over 1.1 million historical booking records dating back to 2017. The results from the trained models are satisfactory, and the models are able to classify the data with an accuracy of around 70%. This shows that this type of problem is solvable with the techniques used. The results moreover suggest that further exploration of models and additional data could be of interest, since this could help increase the level of performance.

With the continued surge of digitalization in the travel industry and the fact that customers today show a strong demand for tailored services, the requirements on companies to understand their customers' behaviour in order to survive are also rising. A host of studies have attempted to tackle the problem of predicting customer behaviour, and machine learning has been singled out as an enabling technique. Machine learning has developed considerably, particularly in the area of deep learning, which has spread the use of these technologies for solving complex problems to ever more industries. This study implements a Multi-Layer Perceptron and a Support Vector Machine and trains them on existing data to reliably determine whether a customer will purchase a seat reservation for their booking.
The data consisted of 69 variables and over 1.1 million historical bookings in the time span 2017 to 2020. The results of the study are satisfactory, as the models on average classify with an accuracy of 70%, though not optimal. The Multi-Layer Perceptron performs best on both metrics used to estimate model performance, accuracy and F1 score. The results also indicate that extending this study with more data and more classification models would be of interest, as this could lead to a higher level of performance.
APA, Harvard, Vancouver, ISO, and other styles
26

Dall'Olio, Lorenzo. "Estimation of biological vascular ageing via photoplethysmography: a comparison between statistical learning and deep learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21687/.

Full text
Abstract:
This work aims to exploit the biological ageing phenomenon that affects human blood vessels. The analysis starts from a database of photoplethysmographic signals acquired through smartphones. A preprocessing phase follows, where the signals are detrended using a central moving average filter, demodulated using the envelope of the analytic signal obtained from the Hilbert transform, and denoised using a central moving average filter over the envelope. After the preprocessing, we compared two different approaches. The first is Statistical Learning, which involves feature extraction and selection through statistics and machine learning algorithms, in order to perform a supervised classification task on the chronological age of the individual, which is used as a proxy for healthy/unhealthy vascular ageing. The second is Deep Learning, which involves building a convolutional neural network to perform the same task, but avoiding the feature extraction/selection step and thus the possible bias introduced by those phases. Doing so, we obtained comparable outcomes in terms of area-under-the-curve metrics from a 12-layer ResNet convolutional network and from a support vector machine using just covariates together with a couple of extracted features, acquiring clues regarding the possible usage of such features as biomarkers for the vascular ageing process. The two mentioned features can be related to increasing arterial stiffness and increasing signal randomness due to ageing.
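The preprocessing chain, a central moving average detrend followed by a Hilbert-transform envelope, can be sketched with SciPy. The synthetic amplitude-modulated signal below stands in for a real PPG recording; the sampling rate, pulse frequency, and filter width are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def central_moving_average(x, width):
    # Centered moving average via same-mode convolution with a flat kernel.
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

fs = 100.0
t = np.arange(0, 10, 1 / fs)

# Synthetic PPG-like signal: a 1.2 Hz "pulse" with slow amplitude drift,
# riding on a linear baseline trend.
amplitude = 1.0 + 0.3 * np.sin(2 * np.pi * 0.1 * t)
signal = amplitude * np.cos(2 * np.pi * 1.2 * t) + 0.5 * t / t[-1]

# Width of ~one carrier period (1/1.2 s at fs=100 Hz) removes the baseline
# while leaving the pulse component almost untouched.
detrended = signal - central_moving_average(signal, width=83)

envelope = np.abs(hilbert(detrended))   # instantaneous amplitude
demodulated = detrended / np.where(envelope > 1e-9, envelope, 1.0)
```

Away from the signal edges, the Hilbert envelope tracks the slow amplitude modulation, so dividing by it demodulates the pulse waveform.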
APA, Harvard, Vancouver, ISO, and other styles
27

Deaney, Mogammat Waleed. "A Comparison of Machine Learning Techniques for Facial Expression Recognition." University of the Western Cape, 2018. http://hdl.handle.net/11394/6412.

Full text
Abstract:
Magister Scientiae - MSc (Computer Science)

A machine translation system that can convert South African Sign Language (SASL) video to audio or text and vice versa would be beneficial to people who use SASL to communicate. Five fundamental parameters are associated with sign language gestures: hand location, hand orientation, hand shape, hand movement, and facial expressions. The aim of this research is to recognise facial expressions and to compare both feature descriptors and machine learning techniques. This research used the Design Science Research (DSR) methodology. A DSR artefact was built which consisted of two phases. The first phase compared local binary patterns (LBP), compound local binary patterns (CLBP) and histograms of oriented gradients (HOG) using support vector machines (SVM). The second phase compared the SVM to artificial neural networks (ANN) and random forests (RF) using the most promising feature descriptor, HOG, from the first phase. The performance was evaluated in terms of accuracy, robustness to classes, robustness to subjects, and ability to generalise on both the Binghamton University 3D facial expression (BU-3DFE) and Cohn-Kanade (CK) datasets. The first phase of the evaluation showed HOG to be the best feature descriptor, followed by CLBP and LBP. The second phase showed ANN to be the best choice of machine learning technique, closely followed by the SVM and RF.
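The baseline descriptor compared here, the basic local binary pattern (LBP), is simple enough to sketch in NumPy: each interior pixel gets an 8-bit code from comparing its 3×3 neighbours against the centre, and the image-level feature is a histogram of those codes. The random "face patch" is a stand-in for real data.

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: an 8-bit code per interior pixel, one bit per
    neighbour that is >= the centre pixel."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]  # clockwise from top-left
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    # Feature vector: normalized histogram of LBP codes over the image.
    hist, _ = np.histogram(lbp_codes(img), bins=bins, range=(0, bins))
    return hist / hist.sum()

rng = np.random.default_rng(0)
face_patch = rng.integers(0, 256, size=(32, 32)).astype(np.int16)
feat = lbp_histogram(face_patch)
```

The CLBP and HOG descriptors used in the thesis extend or replace this basic scheme but follow the same extract-then-histogram pattern.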
APA, Harvard, Vancouver, ISO, and other styles
28

Demartines, Pierre. "Analyse de données par réseaux de neurones auto-organisés." Grenoble INPG, 1994. http://www.theses.fr/1994INPG0129.

Full text
Abstract:
Trying to understand data often means trying to find information hidden in a large volume of redundant measurements. It means looking for dependencies, linear or not, between the observed variables, so that they can be summarized by a small number of parameters. A classical method, Principal Component Analysis (PCA), is widely used for this purpose. Unfortunately, it is an exclusively linear method, and is therefore unable to reveal nonlinear dependencies between the variables. Kohonen self-organizing maps are artificial neural networks whose function can be seen as an extension of PCA to nonlinear cases. The parameter space is represented by a grid of neurons, whose shape, generally square or rectangular, must unfortunately be chosen a priori. This shape is often ill-suited to that of the parameter space being sought. We remove this constraint with a new algorithm, named Vector Quantization and Projection (VQP), which is a kind of self-organizing map whose output space is continuous and automatically takes the appropriate shape. Mathematically, VQP can be defined as the search for a diffeomorphism between the raw data space and an unknown parameter space to be found. More intuitively, it is an unfolding of the data structure into a lower-dimensional space. This dimension, which corresponds to the number of degrees of freedom of the phenomenon under study, can be determined by fractal analysis of the data cloud. To illustrate the generality of the VQP approach, we give a series of example applications, simulated or real, in varied domains ranging from data fusion to graph matching, including the analysis and monitoring of industrial processes, fault detection in machines, and adaptive routing in telecommunications.
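The starting point that VQP generalizes, a Kohonen map with a fixed grid chosen a priori, can be sketched in a few lines of NumPy. The grid size, learning schedule, and toy one-degree-of-freedom data below are arbitrary illustrative choices, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-D data lying near a 1-D curve (one true degree of freedom).
t = rng.uniform(0, 1, size=500)
data = np.column_stack([t, np.sin(3 * t)]) + rng.normal(scale=0.02, size=(500, 2))

# A fixed 1x10 grid of neurons -- the a-priori shape that VQP removes.
grid = np.arange(10)
weights = rng.normal(scale=0.1, size=(10, 2))

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)              # decaying learning rate
    radius = max(3 * (1 - epoch / 20), 0.5)  # shrinking neighbourhood
    for x in data:
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))      # best matching unit
        h = np.exp(-((grid - bmu) ** 2) / (2 * radius ** 2))   # neighbourhood weights
        weights += lr * h[:, None] * (x - weights)

# Mean quantization error: how closely the grid follows the data.
quant_err = np.mean([((weights - x) ** 2).sum(axis=1).min() for x in data])
```

After training, the neuron chain unfolds along the curve; VQP's contribution is to let the output space find such a shape by itself instead of fixing the grid in advance.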
APA, Harvard, Vancouver, ISO, and other styles
29

Mancini, Eleonora. "Disruptive Situations Detection on Public Transports through Speech Emotion Recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24721/.

Full text
Abstract:
In this thesis, we describe a study on the application of Machine Learning and Deep Learning methods to Voice Activity Detection (VAD) and Speech Emotion Recognition (SER). The study is in the context of a European project whose objective is to detect disruptive situations on public transports. To this end, we developed an architecture, implemented a prototype and ran validation tests on a variety of options. The architecture consists of several modules: the denoising module was realized with a filter and the VAD module with an open-source toolkit, while the SER system was entirely developed in this thesis. For the SER architecture we adopted two audio features (MFCC and RMS) and two kinds of classifiers, namely CNN and SVM, to detect emotions indicative of disruptive situations such as fighting or shouting. We aggregated several models through ensemble learning. The ensemble was evaluated on several datasets and showed encouraging experimental results, even compared to state-of-the-art baselines. The code is available at: https://github.com/helemanc/ambient-intelligence
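Aggregating several classifiers through ensemble learning, as done for the SER module, can be sketched with scikit-learn's soft-voting combiner. The synthetic features below stand in for pooled MFCC and RMS statistics, and the three member models are illustrative choices rather than the thesis's exact ensemble.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in features (e.g. pooled MFCC + RMS statistics per audio clip);
# labels: 1 = disruptive (shouting/fighting), 0 = non-disruptive.
X = rng.normal(size=(300, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=400, random_state=0)),
        ("lr", LogisticRegression()),
    ],
    voting="soft",  # average the members' predicted class probabilities
)
ensemble.fit(X[:200], y[:200])
acc = ensemble.score(X[200:], y[200:])
```

Soft voting averages the per-class probabilities of the members, so a confident minority can outvote an uncertain majority.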
APA, Harvard, Vancouver, ISO, and other styles
30

Jansson, Daniel, and Rasmus Blomstrand. "REAL-TIME PREDICTION OF SHIMS DIMENSIONS IN POWER TRANSFER UNITS USING MACHINE LEARNING." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-45615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Riera, Villanueva Marc. "Low-power accelerators for cognitive computing." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669828.

Full text
Abstract:
Deep Neural Networks (DNNs) have achieved tremendous success in cognitive applications, and are especially effective in classification and decision-making problems such as speech recognition or machine translation. Mobile and embedded devices increasingly rely on DNNs to understand the world. Smartphones, smartwatches and cars perform discriminative tasks, such as face or object recognition, on a daily basis. Despite the increasing popularity of DNNs, running them on mobile and embedded systems poses several challenges: delivering high accuracy and performance with a small memory and energy budget. Modern DNN models consist of billions of parameters requiring huge computational and memory resources and, hence, cannot be directly deployed on low-power systems with limited resources. The objective of this thesis is to address these issues and propose novel solutions for designing highly efficient custom accelerators for DNN-based cognitive computing systems. First, we focus on optimizing the inference of DNNs for sequence-processing applications. We perform an analysis of the input similarity between consecutive DNN executions. Then, based on the high degree of input similarity, we propose DISC, a hardware accelerator implementing a Differential Input Similarity Computation technique to reuse the computations of the previous execution, instead of computing the entire DNN. We observe that, on average, more than 60% of the inputs of any neural network layer tested exhibit negligible changes with respect to the previous execution. Avoiding the memory accesses and computations for these inputs results in 63% energy savings on average. Second, we propose to further optimize the inference of FC-based DNNs. We first analyze the number of unique weights per input neuron of several DNNs.
Exploiting common optimizations, such as linear quantization, we observe a very small number of unique weights per input for several FC layers of modern DNNs. Then, to improve the energy efficiency of FC computation, we present CREW, a hardware accelerator that implements a Computation Reuse and Efficient Weight Storage mechanism to exploit the large number of repeated weights in FC layers. CREW greatly reduces the number of multiplications and provides significant savings in model memory footprint and memory bandwidth usage. We evaluate CREW on a diverse set of modern DNNs; on average, it provides 2.61x speedup and 2.42x energy savings over a TPU-like accelerator. Third, we propose a mechanism to optimize the inference of RNNs. RNN cells perform element-wise multiplications across the activations of different gates, sigmoid and tanh being the common activation functions. We perform an analysis of the activation function values, and show that a significant fraction are saturated towards zero or one in popular RNNs. Then, we propose CGPA to dynamically prune activations from RNNs at a coarse granularity. CGPA avoids the evaluation of entire neurons whenever the outputs of peer neurons are saturated. CGPA significantly reduces the amount of computations and memory accesses while avoiding sparsity to a large extent, and can be easily implemented on top of conventional accelerators such as the TPU with negligible area overhead, resulting in 12% speedup and 12% energy savings on average for a set of widely used RNNs. Finally, in the last contribution of this thesis we focus on static DNN pruning methodologies. DNN pruning reduces memory footprint and computational work by removing connections and/or neurons that are ineffectual. However, we show that prior pruning schemes rely on an extremely time-consuming iterative process that retrains the DNN many times to tune the pruning parameters.
Then, we propose a DNN pruning scheme based on Principal Component Analysis and the relative importance of each neuron's connections that automatically finds the optimized DNN in one shot, without manually tuning multiple pruning parameters.
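The differential-computation idea behind DISC can be illustrated with a minimal sketch: for a linear (fully connected) layer, the output can be updated incrementally from the previous execution using only the inputs that changed noticeably. The function name and the change threshold below are illustrative assumptions, not the accelerator's actual design.

```python
def differential_layer(inputs, prev_inputs, prev_outputs, weights, threshold=1e-3):
    """Reuse the previous execution's results: since y = W x is linear,
    y_new = y_old + W * (x_new - x_old), and components with a negligible
    delta can be skipped entirely (sketch of the computation-reuse idea)."""
    outputs = list(prev_outputs)
    for j, (x, x_prev) in enumerate(zip(inputs, prev_inputs)):
        delta = x - x_prev
        if abs(delta) <= threshold:
            continue  # negligible change: reuse the previous partial result
        for i in range(len(outputs)):
            outputs[i] += weights[i][j] * delta  # differential update
    return outputs
```

For unchanged inputs, both the weight fetches and the multiplications are avoided, which is where the reported energy savings come from.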
APA, Harvard, Vancouver, ISO, and other styles
32

Kinto, Eduardo Akira. "Otimização e análise das máquinas de vetores de suporte aplicadas à classificação de documentos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-04112011-151337/.

Full text
Abstract:
Stored data analysis is essential to decision making in any business, but to accomplish this task the data must be organized so that it can be easily accessed. With a very large volume of data, this analysis becomes computationally hard, so efficient mechanisms for information analysis are essential. Artificial neural networks (ANN), support vector machines (SVM) and other algorithms are frequently used for this purpose. In this work we explore the Sequential Minimal Optimization (SMO) algorithm, a learning algorithm for the SVM, and modify it with the goal of reducing training time while maintaining its classification and generalization capacity. Two modifications are proposed, one in the training algorithm and another in its architecture. The first modification enables more than one Lagrange coefficient update per cycle, by also updating support-vector candidates chosen from the neighbourhood of the current working set. Among the many SVM implementations, SMO was chosen because it is one of the fastest and least memory-consuming: its computational complexity is lower than that of other SVM algorithms because it does not invert a kernel matrix. Matrix inversion is one of the most time-consuming steps of SVM training, and this square matrix grows with the number of samples that become support vectors. The second modification creates an ordered subdivision of the training set along the dimension of highest entropy; unlike traditional division-based approaches, samples are not repeatedly resubmitted to SVM training. Finally, the proposed SMO is applied to document (text) classification through a new approach: one-class classification using binary classifiers.
As in any document classification task, feature analysis (feature selection and dimensionality reduction) is a fundamental step, and here a further contribution is presented: we use pointwise total correlation to select the words that form the word-index vector.
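The ordered subdivision of the training set along the dimension of highest entropy can be sketched as follows; the equal-width binning used for the entropy estimate and the number of subsets are assumptions for illustration, not the thesis's exact procedure.

```python
import math

def entropy(values, bins=4):
    """Shannon entropy of a 1-D sample, estimated with equal-width bins."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return 0.0
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def split_by_max_entropy(samples, n_subsets=2):
    """Order samples along the highest-entropy dimension and cut them into
    disjoint subsets, so no sample is submitted to training more than once
    (sketch of the ordered-subdivision idea)."""
    dims = range(len(samples[0]))
    best = max(dims, key=lambda d: entropy([s[d] for s in samples]))
    ordered = sorted(samples, key=lambda s: s[best])
    size = math.ceil(len(ordered) / n_subsets)
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]
```

Each subset would then be used to train an SVM independently, which is what makes the scheme cheaper than repeatedly retraining on overlapping sample sets.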
APA, Harvard, Vancouver, ISO, and other styles
33

Lantz, Robin. "Time series monitoring and prediction of data deviations in a manufacturing industry." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-100181.

Full text
Abstract:
An automated manufacturing industry makes use of many interacting moving parts and sensors. These sensors generate complex multidimensional data in the production environment, which is difficult to interpret and to find patterns in. This project provides tools to gain a deeper understanding of the production data of Swedsafe, a company in the automated manufacturing business, and demonstrates the potential of that multidimensional production data. The project mainly consists of predicting deviations from predefined threshold values in Swedsafe's production data. Machine learning is a good method of finding relationships in complex datasets, so supervised machine learning classification is used to predict deviations from threshold values in the data. An investigation is conducted to identify the classifier that performs best on Swedsafe's production data. The sliding-window technique is used for managing the time series data used in this project. Apart from predicting deviations, the project also includes an implementation of live graphs to easily get an overview of the production data. A steady production with stable process values is important, so being able to monitor and predict events in the production environment can provide the same benefit to other manufacturing companies and is therefore suitable not only for Swedsafe. The best-performing machine learning classifier tested in this project was the Random Forest classifier. The Multilayer Perceptron did not perform well on Swedsafe's data, but further investigation of recurrent neural networks using LSTM neurons would be recommended. During the project, a web-based application displaying the sensor data in live graphs was also developed.
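The sliding-window technique mentioned above can be sketched as a simple transform from a time series into supervised (features, label) pairs; the window and horizon sizes below are illustrative, not those used in the project.

```python
def sliding_windows(series, window, horizon=1):
    """Turn a univariate time series into (window, label) pairs for
    supervised learning: each window of past values becomes a feature
    vector, and the value `horizon` steps ahead becomes the label."""
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        features = series[i:i + window]
        label = series[i + window + horizon - 1]
        pairs.append((features, label))
    return pairs
```

Any classifier (e.g. a Random Forest) can then be trained on the resulting pairs to predict whether the next value will deviate from a threshold.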
APA, Harvard, Vancouver, ISO, and other styles
34

Peng, Danilo. "Application of machine learning in 5G to extract prior knowledge of the underlying structure in the interference channel matrices." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252314.

Full text
Abstract:
Data traffic has grown drastically over the past few years due to digitization and new technologies introduced to the market, such as autonomous cars. To meet this demand, the MIMO-OFDM system is used in the fifth-generation wireless network, 5G. Designing the optimal wireless network is currently a main research topic within telecommunications. To achieve such a system, multiple factors have to be taken into account, such as the suppression of interference from other users. A traditional method, the linear minimum mean square error filter, is currently used to suppress the interference. To derive such a filter, a number of parameters have to be estimated; one of these is the ideal interference-plus-noise covariance matrix. Gathering prior knowledge of the underlying structure of the interference channel matrices, in terms of the number of interferers and their corresponding bandwidths, could facilitate the estimation of this ideal covariance matrix. In this thesis, machine learning algorithms were used to extract this prior knowledge. More specifically, a feedforward neural network with two or three hidden layers and a support vector machine with a linear kernel were used. The empirical findings imply promising results, with accuracies above 95% for each model.
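As a point of reference for the estimation problem described above, a plain sample estimate of the interference-plus-noise covariance matrix from received snapshots looks as follows. This is a real-valued sketch only: in a MIMO-OFDM receiver the entries would be complex, and the thesis's ML-assisted estimator (which exploits the extracted prior knowledge) is not reproduced here.

```python
def sample_covariance(snapshots):
    """Unstructured sample estimate of a covariance matrix from a list of
    received-signal snapshots (rows). The prior knowledge described in the
    abstract would be used to impose structure on this raw estimate."""
    n = len(snapshots)
    dim = len(snapshots[0])
    mean = [sum(s[d] for s in snapshots) / n for d in range(dim)]
    cov = [[0.0] * dim for _ in range(dim)]
    for s in snapshots:
        centered = [s[d] - mean[d] for d in range(dim)]
        for i in range(dim):
            for j in range(dim):
                cov[i][j] += centered[i] * centered[j] / n
    return cov
```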
APA, Harvard, Vancouver, ISO, and other styles
35

Granström, Daria, and Johan Abrahamsson. "Loan Default Prediction using Supervised Machine Learning Algorithms." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252312.

Full text
Abstract:
It is essential for a bank to estimate the credit risk it carries and the magnitude of its exposure in case of non-performing customers. This kind of risk has been estimated by statistical methods for decades and, given recent developments in the field of machine learning, there has been interest in investigating whether machine learning techniques can quantify the risk better. The aim of this thesis is to examine which method, from a chosen set of machine learning techniques, exhibits the best performance in default prediction with regard to chosen model evaluation parameters. The investigated techniques were Logistic Regression, Random Forest, Decision Tree, AdaBoost, XGBoost, Artificial Neural Network and Support Vector Machine. An oversampling technique called SMOTE was implemented to treat the class imbalance in the response variable. The results showed that XGBoost without SMOTE obtained the best result with respect to the chosen model evaluation metric.
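The SMOTE oversampling step can be sketched in a few lines: each synthetic sample is an interpolation between a minority-class sample and one of its k nearest neighbours. The parameter values below are illustrative, not those used in the thesis.

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate synthetic minority-class samples by interpolating between a
    randomly chosen sample and one of its k nearest neighbours — a minimal
    sketch of the SMOTE idea (Euclidean distance, fixed seed assumed)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the convex hull of the original minority data.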
APA, Harvard, Vancouver, ISO, and other styles
36

Mandal, Sayan. "Applications of Persistent Homology and Cycles." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1591811236244813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Grahn, Fredrik, and Kristian Nilsson. "Object Detection in Domain Specific Stereo-Analysed Satellite Images." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159917.

Full text
Abstract:
Given satellite images with accompanying pixel classifications and elevation data, we propose different solutions to object detection. The first method uses hierarchical clustering for segmentation and then employs different methods of classification: one uses domain knowledge to classify objects while the other uses Support Vector Machines. Additionally, a combination of three Support Vector Machines was used in a hierarchical structure, which outperformed the regular Support Vector Machine method on most of the evaluation metrics. The second approach is more conventional, using different types of Convolutional Neural Networks: a segmentation network as well as a few detection networks and different fusions between these. The Convolutional Neural Network approach proved to be the better of the two in terms of precision and recall, but the clustering approach was not far behind. This work was done using a relatively small amount of data, which could have negatively impacted the results of the machine learning models.
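The hierarchical clustering used for segmentation can be illustrated with a minimal single-linkage agglomerative sketch; the squared-Euclidean distance and the stopping rule (a fixed cluster count) are assumptions, and the thesis's actual segmentation operates on pixel data rather than toy points.

```python
def single_linkage(points, n_clusters):
    """Bottom-up agglomerative clustering: start with singleton clusters and
    repeatedly merge the closest pair, where the distance between clusters is
    the minimum pairwise distance (single linkage)."""
    clusters = [[p] for p in points]

    def dist(a, b):
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) for p in a for q in b)

    while len(clusters) > n_clusters:
        # find the closest pair of clusters and merge them
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i].extend(clusters.pop(j))
    return clusters
```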
APA, Harvard, Vancouver, ISO, and other styles
38

Louis, Thomas. "Conventionnel ou bio-inspiré ? Stratégies d'optimisation de l'efficacité énergétique des réseaux de neurones pour environnements à ressources limitées." Electronic Thesis or Diss., Université Côte d'Azur, 2025. http://www.theses.fr/2025COAZ4001.

Full text
Abstract:
Integrating artificial intelligence (AI) algorithms directly into satellites presents numerous challenges. These embedded systems, which are heavily limited in energy consumption and memory footprint, must also withstand interference. This systematically requires the use of system-on-chip (SoC) solutions combining two so-called "heterogeneous" systems: a versatile microcontroller and an energy-efficient computing accelerator (such as an FPGA or ASIC). To address the challenges of deploying such architectures, this thesis focuses on optimizing and deploying neural networks on heterogeneous embedded architectures, aiming to balance energy consumption and AI performance. In Chapter 2 of this thesis, an in-depth study of recent compression techniques for feedforward neural networks (FNN) such as MLPs or CNNs was conducted. These techniques, which reduce the computational complexity and memory footprint of these models, are essential for deployment in resource-constrained environments. Spiking neural networks (SNN) were also explored; these bio-inspired networks can offer greater energy efficiency than FNNs. In Chapter 3, we adapted and developed innovative quantization methods to reduce the number of bits used to represent the values in a spiking network. This allowed us to compare the quantization of SNNs and FNNs, and to understand and assess their respective trade-offs in terms of losses and gains. Reducing the activity of an SNN (e.g., the number of spikes generated during inference) directly improves its energy efficiency. To this end, in Chapter 4 we leveraged knowledge distillation and regularization techniques. These methods reduce the spiking activity of the network while preserving its accuracy, ensuring effective operation of SNNs on resource-limited hardware. In the final part of this thesis, we explored the hybridization of SNNs and FNNs. These hybrid networks (HNN) aim to further optimize energy efficiency while enhancing performance. We also proposed innovative multi-timestep networks, which process information at different latencies across the layers of a single SNN.
Experimental results show that this approach reduces overall energy consumption while maintaining performance across a range of tasks. This thesis serves as a foundation for deploying future neural network applications in space. To validate our methods, we provide a comparative analysis on various public datasets (CIFAR-10, CIFAR-100, MNIST, Google Speech Commands) as well as on a private dataset for cloud segmentation. Our approaches are evaluated with metrics such as accuracy, energy consumption and SNN activity. This research extends beyond aerospace applications: we have demonstrated the potential of quantized SNNs, hybrid neural networks and multi-timestep networks for a variety of real-world scenarios where energy efficiency is critical. This work offers promising prospects for fields such as IoT devices, autonomous vehicles and other systems requiring efficient AI deployment.
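The linear (uniform) quantization that Chapter 3 adapts to spiking networks can be sketched as follows; per-tensor scaling and symmetric signed integers are assumptions for illustration, not the thesis's exact scheme.

```python
def quantize_linear(weights, n_bits=8):
    """Uniform quantization of a list of weights to n_bits signed integers
    with a single scale factor. Returns the integer codes, the scale, and
    the dequantized (reconstructed) values."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0  # avoid scale 0
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    dequantized = [v * scale for v in q]
    return q, scale, dequantized
```

The gap between `weights` and `dequantized` is the quantization error; lowering `n_bits` shrinks the memory footprint at the cost of a larger error, which is the trade-off the thesis measures for both SNNs and FNNs.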
APA, Harvard, Vancouver, ISO, and other styles
39

Deivard, Johannes. "How accuracy of estimated glottal flow waveforms affects spoofed speech detection performance." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48414.

Full text
Abstract:
In the domain of automatic speaker verification, one of the challenges is to keep malevolent people out of the system. One way to do this is to create algorithms that detect spoofed speech. There are several types of spoofed speech and several ways to detect them, one of which is to look at the glottal flow waveform (GFW) of a speech signal. This waveform is often estimated using glottal inverse filtering (GIF), since recording the ground-truth GFW requires special invasive equipment. To the author's knowledge, no research has investigated the correlation between GFW accuracy and spoofed speech detection (SSD) performance. This thesis tries to find out whether that correlation exists. First, the performance of different GIF methods is evaluated; then simple SSD machine learning (ML) models are trained and evaluated based on their macro average precision. The ML models use different datasets composed of parametrized GFWs estimated with the GIF methods from the previous step. Results from the previous tasks are then combined in order to spot any correlations. The evaluations of the different methods showed that they created GFWs of varying accuracy. The different machine learning models also showed varying performance depending on what type of dataset was being used. However, when combining the results, no obvious correlations between GFW accuracy and SSD performance were detected. This suggests that the overall accuracy of a GFW is not a substantial factor in the performance of machine learning-based SSD algorithms.
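The macro average precision used to evaluate the SSD models is the unweighted mean of per-class precisions, which can be computed as:

```python
def macro_precision(y_true, y_pred):
    """Macro-averaged precision: precision is computed per class, then
    averaged with equal class weight, so minority classes count as much as
    majority classes (undefined precision is counted as 0 by convention)."""
    classes = sorted(set(y_true) | set(y_pred))
    precisions = []
    for c in classes:
        predicted_c = [t for t, p in zip(y_true, y_pred) if p == c]
        if not predicted_c:
            precisions.append(0.0)  # class never predicted
            continue
        true_positives = sum(1 for t in predicted_c if t == c)
        precisions.append(true_positives / len(predicted_c))
    return sum(precisions) / len(precisions)
```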
APA, Harvard, Vancouver, ISO, and other styles
40

Demus, Justin Cole. "Prognostic Health Management Systems for More Electric Aircraft Applications." Miami University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=miami1631047006902809.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Masetti, Masha. "Product Clustering e Machine Learning per il miglioramento dell'accuratezza della previsione della domanda: il caso Comer Industries S.p.A." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
The long lead times of Comer Industries S.p.A.'s Chinese supply chain force the company to order materials six months in advance, at which point customers are often not yet aware of the quantities of material they will need. In order to respond to customers while maintaining the high service level historically guaranteed by Comer Industries, it is essential to order material based on demand forecasts. However, the current forecasts are not sufficiently accurate. The objective of this research is to identify a possible method to increase the accuracy of demand forecasts. Could the use of Artificial Intelligence contribute positively to improving forecast accuracy? To answer the research question, the K-Means and hierarchical clustering algorithms were implemented in Visual Basic for Applications in order to divide the products into clusters based on common components. The demand patterns were then analysed. By implementing different machine learning algorithms on Google Colaboratory, the resulting accuracies were compared and an optimal forecasting algorithm was identified for each demand profile. Finally, with the forecasts produced, K-Means yielded an accuracy improvement of about 54.62% over the initial accuracy and a 47% saving in safety-stock holding costs, while hierarchical clustering yielded an accuracy improvement of 11.15% and a 45% saving on current costs. It was therefore concluded that product clustering could contribute positively to forecast accuracy. Moreover, machine learning proved to be an ideal tool for identifying optimal solutions both within the clustering algorithms and within the forecasting methods.
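The K-Means step can be illustrated with a minimal version of Lloyd's algorithm; the naive initialisation and plain numeric feature vectors are assumptions for this sketch (in the thesis, the features encode shared product components and the implementation is in Visual Basic for Applications).

```python
def kmeans(points, k, iters=20):
    """Lloyd's K-means: alternately assign each point to its nearest
    centroid and move each centroid to the mean of its assigned points.
    Initialisation here is naive (first k points)."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        for idx, p in enumerate(points):
            assign[idx] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centroids
```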
APA, Harvard, Vancouver, ISO, and other styles
42

Lanzarone, Lorenzo Biagio. "Manutenzione predittiva di macchinari industriali tramite tecniche di intelligenza artificiale: una valutazione sperimentale." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22853/.

Full text
Abstract:
Society is undergoing a process of technological evolution that connects the physical and digital environments to exchange data and information. This thesis explores, in the context of Industry 4.0, the topic of predictive maintenance of industrial machinery using artificial intelligence techniques, in order to predict an imminent failure before it can even occur. The thesis is divided into two complementary parts: the first covers the theoretical aspects of the context and the state of the art, while the second covers the practical and design aspects. In particular, the first part provides an overview of Industry 4.0 and one of its applications, predictive maintenance, and then addresses the topics of artificial intelligence and Data Science, through which predictive maintenance can be applied. The second part presents a practical project, namely the work I carried out during an internship at the software house Open Data in Funo di Argelato (Bologna). The goal of the project was the development of a predictive maintenance system for industrial plastic injection moulding machines, using artificial intelligence techniques. The ultimate aim is the integration of this system into the Opera MES software developed by the company.
APA, Harvard, Vancouver, ISO, and other styles
43

Rossholm, Andreas. "On Enhancement and Quality Assessment of Audio and Video in Communication Systems." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00604.

Full text
Abstract:
The use of audio and video communication has increased exponentially over the last decade and has gone from speech over GSM to HD resolution video conference between continents on mobile devices. As the use becomes more widespread the interest in delivering high quality media increases even on devices with limited resources. This includes both development and enhancement of the communication chain but also the topic of objective measurements of the perceived quality. The focus of this thesis work has been to perform enhancement within speech encoding and video decoding, to measure influence factors of audio and video performance, and to build methods to predict the perceived video quality. The audio enhancement part of this thesis addresses the well known problem in the GSM system with an interfering signal generated by the switching nature of TDMA cellular telephony. Two different solutions are given to suppress such interference internally in the mobile handset. The first method involves the use of subtractive noise cancellation employing correlators, the second uses a structure of IIR notch filters. Both solutions use control algorithms based on the state of the communication between the mobile handset and the base station. The video enhancement part presents two post-filters. These two filters are designed to improve visual quality of highly compressed video streams from standard, block-based video codecs by combating both blocking and ringing artifacts. The second post-filter also performs sharpening. The third part addresses the problem of measuring audio and video delay as well as skewness between these, also known as synchronization. This method is a black box technique which enables it to be applied on any audiovisual application, proprietary as well as open standards, and can be run on any platform and over any network connectivity. 
The last part addresses no-reference (NR) bitstream video quality prediction using features extracted from the coded video stream. Several methods have been used and evaluated: Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Least Squares Support Vector Machines (LS-SVM), showing high correlation with both MOS and objective video assessment methods such as PSNR and PEVQ. The impact of temporal, spatial and quantization variations on perceptual video quality has also been addressed, together with the trade-off between these, and for this purpose a set of locally conducted subjective experiments was performed.
APA, Harvard, Vancouver, ISO, and other styles
44

Martínez, Brito Izacar Jesús. "Quantitative structure fate relationships for multimedia environmental analysis." Doctoral thesis, Universitat Rovira i Virgili, 2010. http://hdl.handle.net/10803/8590.

Full text
Abstract:
Key physicochemical properties for a wide spectrum of chemical pollutants are unknown. This thesis analyses the prospect of assessing the environmental distribution of chemicals directly from supervised learning algorithms using molecular descriptors, rather than from multimedia environmental models (MEMs) using several physicochemical properties estimated from QSARs. Dimensionless compartmental mass ratios of 468 validation chemicals were compared, in logarithmic units, between: a) SimpleBox 3, a Level III MEM, propagating random property values within statistical distributions of widely recommended QSARs; and b) Support Vector Regressions (SVRs), acting as Quantitative Structure-Fate Relationships (QSFRs), linking mass ratios to molecular weight and constituent counts (atoms, bonds, functional groups and rings) for training chemicals. The best predictions were obtained for test and validation chemicals optimally found to be within the domain of applicability of the QSFRs, evidenced by low MAE and high q² values (in air, MAE ≤ 0.54 and q² ≥ 0.92; in water, MAE ≤ 0.27 and q² ≥ 0.92).
APA, Harvard, Vancouver, ISO, and other styles
45

Zadeh, Saman Akbar. "Application of advanced algorithms and statistical techniques for weed-plant discrimination." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2020. https://ro.ecu.edu.au/theses/2352.

Full text
Abstract:
Precision agriculture requires automated systems for weed detection, as weeds compete with the crop for water, nutrients, and light. The purpose of this study is to investigate the use of machine learning methods to classify weeds and crops in agriculture. Statistical methods, support vector machines, and convolutional neural networks (CNNs) are introduced, investigated and optimized as classifiers to provide high accuracy at high vehicular speed for weed detection. Initially, Support Vector Machine (SVM) algorithms are developed for weed-crop discrimination and their accuracies are compared with a conventional data-aggregation method based on the evaluation of discrete Normalised Difference Vegetation Indices (NDVIs) at two different wavelengths. The results of this work show that the discrimination performance of the Gaussian-kernel SVM algorithm, with either raw reflected intensities or NDVI values used as inputs, provides better discrimination accuracy than the conventional discrete NDVI-based aggregation algorithm. Then, we investigate a fast statistical method for CNN parameter optimization, which can be applied in many CNN applications and provides more explainable results. This study specifically applies Taguchi-based experimental designs for network optimization in a basic network, a simplified Inception network and a simplified ResNet network, and conducts a comparative analysis to assess their respective performance and then to select the hyperparameters and networks that facilitate faster training and provide better accuracy. Results show that, for all investigated CNN architectures, there is a measurable improvement in accuracy in comparison with un-optimized CNNs, and that the Inception network yields the highest improvement (~6%) in accuracy compared to the simple CNN (~5%) and ResNet CNN (~2%) counterparts.
Aimed at achieving weed-crop classification in real-time at high speeds, while maintaining high accuracy, the algorithms are uploaded on both a small embedded NVIDIA Jetson TX1 board for real-time precision agricultural applications, and a larger high throughput GeForce GTX 1080Ti board for aerial crop analysis applications. Experimental results show that for a simplified CNN algorithm implemented on a Jetson TX1 board, an improvement in detection speed of thirty times (60 km/hr) can be achieved by using spectral reflectance data rather than imaging data. Furthermore, with an Inception algorithm implemented on a GeForce GTX 1080Ti board for aerial weed detection, an improvement in detection speed of 11 times (~2300 km/hr) can be achieved, while maintaining an adequate detection accuracy above 80%. These high speeds are attained by reducing the data size, choosing spectral components with high information contents at lower resolution, pre-processing efficiently, optimizing the deep learning networks through the use of simplified faster networks for feature detection and classification, and optimizing computational power with available power and embedded resources, to identify the best fit hardware platforms.
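The discrete NDVI inputs used for weed-crop discrimination above are computed from two reflectance bands. A minimal sketch follows; the function name and the small `eps` guard against division by zero are assumptions, not the study's code:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense vegetation; bare soil sits near 0."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Evaluated at two different wavelength pairs, as in the study, such indices (or the raw reflected intensities themselves) can then serve directly as SVM input features.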
APA, Harvard, Vancouver, ISO, and other styles
46

Ling, Hang Jung. "Écoulement intraventriculaire en échocardiographie Doppler avec réseaux de neurones fondés sur la physique." Electronic Thesis or Diss., Lyon, INSA, 2024. http://www.theses.fr/2024ISAL0087.

Full text
Abstract:
The heart, as the central organ of the cardiovascular system, is responsible for pumping blood to all the body's cells and tissues. Assessing cardiac health is crucial for the early detection and prevention of cardiovascular diseases. Echocardiography, due to its portability and affordability, is commonly used to evaluate the heart's efficiency during filling (diastolic function) and ejection (systolic function). While systolic function is typically assessed using parameters like the ejection fraction, diastolic function is often measured through mitral valve and annular velocities, which can sometimes result in inconsistent diagnoses. Intraventricular vector flow mapping (iVFM) offers an alternative approach by reconstructing vector blood flow from color Doppler acquisitions. This method allows for the evaluation of intracardiac blood flow patterns and vortex characteristics, providing potentially more accurate quantification of diastolic function. However, iVFM involves time-consuming preprocessing steps, such as left ventricular segmentation and aliasing correction. This thesis introduces deep learning (DL) techniques to automate these processes. First, 3D DL models were developed to achieve temporally consistent left ventricular segmentation. Next, DL-based methods were applied to address aliasing artifacts through segmentation and deep unfolding techniques. Finally, iVFM was performed using physics-informed neural networks (PINNs) and a physics-guided supervised method. The proposed neural network approaches demonstrated performance on par with the original iVFM technique, with the added benefit of the physics-guided method being independent of explicit boundary conditions. These findings underscore the potential application of PINNs in ultrafast color Doppler imaging, with the integration of fluid dynamics equations to enhance reconstruction accuracy. Automating the iVFM pipeline with neural networks enhances its reliability, paving the way for clinical applications and the exploration of new flow-based biomarkers.
APA, Harvard, Vancouver, ISO, and other styles
47

Hassani, Mujtaba. "CONSTRUCTION EQUIPMENT FUEL CONSUMPTION DURING IDLING : Characterization using multivariate data analysis at Volvo CE." Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49007.

Full text
Abstract:
Human activities have increased the concentration of CO2 in the atmosphere, causing global warming. Construction equipment is semi-stationary machinery that spends at least 30% of its lifetime idling. The majority of construction equipment is diesel powered and emits toxic emissions into the environment. In this work, idling is investigated by adopting several statistical regression models to quantify the fuel consumption of construction equipment during idling. The regression models studied in this work are: Multivariate Linear Regression (ML-R), Support Vector Machine Regression (SVM-R), Gaussian Process Regression (GP-R), Artificial Neural Network (ANN), Partial Least Squares Regression (PLS-R) and Principal Components Regression (PC-R). Findings show that pre-processing has a significant impact on the goodness of the prediction in the exploratory data analysis in this field. Moreover, through mean centering and application of min-max scaling, the accuracy of the models increased remarkably. ANN and GP-R had the highest accuracy (99%), PLS-R was the third most accurate model (98% accuracy), ML-R was the fourth-best model (97% accuracy), SVM-R was the fifth-best (73% accuracy) and the lowest accuracy was recorded for PC-R (83% accuracy). The second part of this project estimated the CO2 emission based on the fuel used, adopting the NONROAD2008 model.
APA, Harvard, Vancouver, ISO, and other styles
48

Bílý, Ondřej. "Moderní řečové příznaky používané při diagnóze chorob." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218971.

Full text
Abstract:
This work deals with the diagnosis of Parkinson's disease by analysing the speech signal. The work begins with a description of speech-signal production, followed by a description of speech-signal analysis, its preparation, and subsequent feature extraction. Parkinson's disease and the changes it causes in the speech signal are then described, followed by the features used for its diagnosis (FCR, VSA, VOT, etc.). Another part of the work deals with feature selection and reduction using learning algorithms (SVM, ANN, k-NN) and their subsequent evaluation. The last part of the thesis describes a program for computing the features, the feature-selection process, and the final evaluation of all results.
APA, Harvard, Vancouver, ISO, and other styles
49

Dočekal, Martin. "Porovnání klasifikačních metod." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403211.

Full text
Abstract:
This thesis deals with a comparison of classification methods. First, classification methods based on machine learning are described; then a classifier-comparison system is designed and implemented. The thesis also describes several classification tasks and datasets on which the designed system is tested. The classification tasks are evaluated according to standard metrics. The thesis further presents the design and implementation of a classifier based on the principle of evolutionary algorithms.
APA, Harvard, Vancouver, ISO, and other styles
50

Hsin-Li, Pan, and 潘信利. "Learning Vector Quantization Neural Networks to Medical Diagnosis Problems Research." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/38440289258863126419.

Full text
Abstract:
Master's thesis, Shu-Te University, Graduate School of Information Management, academic year 96. Hospital information management has gradually been computerized, and medical databases are now quite popular compared with traditional storage methods. Traditional manual methods are not applicable to processing large amounts of information. Moreover, medical diagnosis relies solely on physicians' past experience, and diseases involve many diverse factors. The aim of this research is to provide forecasting and classification technology using Artificial Neural Networks in order to help doctors improve diagnostic accuracy. Accordingly, the research method addresses classification problems on medical data sets by using Learning Vector Quantization (LVQ) to establish the prediction and classification parameters, with pre-processing to select meaningful attributes. Furthermore, the Taguchi Experimental Design Method (TEDM) is used to tune the core LVQ parameters in order to obtain a better classification rate and more efficient computation. Applying this method to the medical database with TEDM identified an ideal parameter portfolio. The experimental results show that, across a variety of disease classifications, the accuracy rate exceeds 90 per cent. Moreover, the ideal parameter portfolio found with TEDM reduces the number of repeated experiments and can be efficiently applied to other disease-classification tasks.
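The LVQ training rule underlying the abstract above can be sketched as follows. This is a generic LVQ1 illustration; the function names, learning-rate schedule and decay factor are assumptions, not the thesis's implementation or its Taguchi-tuned parameters:

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Basic LVQ1: for each sample, move the nearest (winning) prototype
    toward the sample if their labels match, away if they differ."""
    P = prototypes.copy().astype(float)
    for _ in range(epochs):
        for x, label in zip(X, y):
            d = np.linalg.norm(P - x, axis=1)   # distances to all prototypes
            j = int(np.argmin(d))               # winning prototype
            sign = 1.0 if proto_labels[j] == label else -1.0
            P[j] += sign * lr * (x - P[j])      # attract or repel
        lr *= 0.95                              # decay the learning rate
    return P

def predict_lvq(X, P, proto_labels):
    """Classify each sample by the label of its nearest prototype."""
    idx = np.argmin(np.linalg.norm(P[None, :, :] - X[:, None, :], axis=2), axis=1)
    return np.array([proto_labels[i] for i in idx])
```

In a study like the one above, the initial learning rate, its decay, the number of prototypes per class and the number of epochs are exactly the kind of core parameters a Taguchi experimental design would tune.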
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography