Academic literature on the topic 'Learning vector quantization neural network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Learning vector quantization neural network.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Learning vector quantization neural network"

1

Andriani, Siska, and Kotim Subandi. "Weather Forecast using Learning Vector Quantization Methods." Procedia of Social Sciences and Humanities 1 (March 2, 2021): 69–74. http://dx.doi.org/10.21070/pssh.v1i.22.

Abstract:
Weather forecasting is one of the important factors in daily life, as it can affect the activities carried out by the community. The study was conducted to optimize weather forecasts using artificial neural network methods. The artificial neural network used is the learning vector quantization (LVQ) method, which previous research suggests is suitable for prediction. The research models weather forecast optimization using the LVQ method, so that the model with the best accuracy can be used for weather forecasting. Based on the training carried out in this study, the LVQ method produces a best accuracy of 72%.
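None of the entries in this list spell out the LVQ update itself; as background, the classic LVQ1 rule moves the winning prototype toward a correctly classified sample and away from a misclassified one. A minimal sketch (the prototype initialization, learning-rate schedule, and data shapes are illustrative assumptions, not taken from any paper listed here):

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: pull the winning prototype toward a sample with the same
    label, push it away otherwise."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            k = int(np.argmin(np.linalg.norm(P - x, axis=1)))  # winner
            if proto_labels[k] == label:
                P[k] += lr * (x - P[k])   # attract
            else:
                P[k] -= lr * (x - P[k])   # repel
        lr *= 0.9                         # decay the learning rate
    return P

def lvq_predict(X, prototypes, proto_labels):
    """Label each point with the class of its nearest prototype."""
    return np.array([proto_labels[int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))]
                     for x in X])
```

An accuracy figure such as the 72% reported above would typically come from exactly this kind of nearest-prototype evaluation on held-out data.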
2

Begum, Afsana, Md Masiur Rahman, and Sohana Jahan. "Medical diagnosis using artificial neural networks." Mathematics in Applied Sciences and Engineering 5, no. 2 (2024): 149–64. http://dx.doi.org/10.5206/mase/17138.

Abstract:
Medical diagnosis using Artificial Neural Networks (ANN) and computer-aided diagnosis with deep learning is currently a very active research area in medical science. In recent years, neural network models have been broadly considered for medical diagnosis, since they are ideal for recognizing different kinds of diseases including autism, cancer, tumors, lung infections, etc. It is evident that early diagnosis of any disease is vital for successful treatment and improved survival rates. In this research, five neural networks, Multilayer neural network (MLNN), Probabilistic neural network (PNN), Learning vector quantization neural network (LVQNN), Generalized regression neural network (GRNN), and Radial basis function neural network (RBFNN), have been explored. These networks are applied to several benchmark datasets collected from the University of California Irvine (UCI) Machine Learning Repository. Results from numerical experiments indicate that each network excels at recognizing specific physical issues. In the majority of cases, both the Learning Vector Quantization Neural Network and the Probabilistic Neural Network demonstrate superior performance compared to the other networks.
3

Burrascano, P. "Learning vector quantization for the probabilistic neural network." IEEE Transactions on Neural Networks 2, no. 4 (1991): 458–61. http://dx.doi.org/10.1109/72.88165.

4

Kozukue, Wakae, and Hideyuki Miyaji. "Defect Identification Using Learning Vector Quantization Neural Network." Proceedings of the International Conference on Motion and Vibration Control 6.2 (2002): 1181–84. http://dx.doi.org/10.1299/jsmeintmovic.6.2.1181.

5

Yan, Hong. "Constrained Learning Vector Quantization." International Journal of Neural Systems 5, no. 2 (1994): 143–52. http://dx.doi.org/10.1142/s0129065794000165.

Abstract:
Kohonen’s learning vector quantization (LVQ) is an efficient neural-network-based technique for pattern recognition. The performance of the method depends on proper selection of the learning parameters. Over-training may cause a degradation in the recognition rate of the final classifier. In this paper we introduce constrained learning vector quantization (CLVQ). In this method the updated coefficients in each iteration are accepted only if the recognition performance of the classifier after updating is not decreased on the training samples compared with that before updating, a constraint widely used in many prototype editing procedures to simplify and optimize a nearest neighbor classifier (NNC). An efficient computer algorithm is developed to implement this constraint. The method is verified with experimental results. It is shown that CLVQ outperforms LVQ and may even require much less training time.
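The acceptance constraint described in this abstract, keeping an update only when training-set recognition does not degrade, can be sketched as a wrapper around an ordinary LVQ step. This is an illustrative reconstruction from the abstract, not Yan's actual algorithm, which uses a more efficient incremental implementation of the check:

```python
import numpy as np

def accuracy(P, proto_labels, X, y):
    """Nearest-prototype training-set accuracy."""
    pred = [proto_labels[int(np.argmin(np.linalg.norm(P - x, axis=1)))] for x in X]
    return float(np.mean(np.array(pred) == y))

def clvq_step(P, proto_labels, x, label, X, y, lr=0.05):
    """One constrained LVQ update: tentatively move the winning prototype,
    then roll back if training accuracy decreases."""
    before = accuracy(P, proto_labels, X, y)
    k = int(np.argmin(np.linalg.norm(P - x, axis=1)))
    trial = P.copy()
    sign = 1.0 if proto_labels[k] == label else -1.0
    trial[k] += sign * lr * (x - trial[k])
    return trial if accuracy(trial, proto_labels, X, y) >= before else P
```

By construction, a full sweep of `clvq_step` calls can never lower the training-set recognition rate, which is the property the paper exploits to avoid over-training.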
6

Soflaei, Masoumeh, Hongyu Guo, Ali Al-Bashabsheh, Yongyi Mao, and Richong Zhang. "Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 5810–17. http://dx.doi.org/10.1609/aaai.v34i04.6038.

Abstract:
We consider the problem of learning a neural network classifier. Under the information bottleneck (IB) principle, we associate with this classification problem a representation learning problem, which we call “IB learning”. We show that IB learning is, in fact, equivalent to a special class of the quantization problem. The classical results in rate-distortion theory then suggest that IB learning can benefit from a “vector quantization” approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted with some variational techniques, results in a novel learning framework, “Aggregated Learning”, for classification with neural network models. In this framework, several objects are jointly classified by a single neural network. The effectiveness of this framework is verified through extensive experiments on standard image recognition and text classification tasks.
7

Pham, D. T., and E. J. Bayro-Corrochano. "Neural Classifiers for Automated Visual Inspection." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 208, no. 2 (1994): 83–89. http://dx.doi.org/10.1243/pime_proc_1994_208_166_02.

Abstract:
This paper discusses the application of a back-propagation multi-layer perceptron and a learning vector quantization network to the classification of defects in valve stem seals for car engines. Both networks were trained with vectors containing descriptive attributes of known flaws. These attribute vectors (‘signatures’) were extracted from images of the seals captured by an industrial vision system. The paper describes the hardware and techniques used and the results obtained.
8

Yang, Degang, Guo Chen, Hui Wang, and Xiaofeng Liao. "Learning vector quantization neural network method for network intrusion detection." Wuhan University Journal of Natural Sciences 12, no. 1 (2007): 147–50. http://dx.doi.org/10.1007/s11859-006-0258-z.

9

Ding, Shuo, Xiao Heng Chang, and Qing Hui Wu. "A Study on the Application of Learning Vector Quantization Neural Network in Pattern Classification." Applied Mechanics and Materials 525 (February 2014): 657–60. http://dx.doi.org/10.4028/www.scientific.net/amm.525.657.

Abstract:
Standard back propagation (BP) neural networks have disadvantages such as slow convergence speed, local minima and difficulty in defining the network structure. In this paper, a learning vector quantization (LVQ) neural network classifier is established and then applied to the pattern classification of two-dimensional vectors on a plane. To test its classification ability, the classification results of the LVQ neural network and a BP neural network are compared with each other. The simulation results show that, compared with the classification method based on the BP neural network, the one based on the LVQ neural network has a shorter learning time. Its requirements for learning samples and for the number of competing layers are also lower. Therefore it is an effective classification method, powerful in the classification of two-dimensional vectors on a plane.
10

Abdulmuhsin, Kamel A., and Iftekhar A. Al-Ani. "Using of Learning Vector Quantization Network for Pan Evaporation Estimation." Tikrit Journal of Engineering Sciences 16, no. 2 (2009): 43–50. http://dx.doi.org/10.25130/tjes.16.2.07.

Abstract:
A modern technique is presented to study the evaporation process, which is considered an important component of the hydrological cycle. The Pan Evaporation depth is estimated from four meteorological factors, viz. temperature, relative humidity, sunshine, and wind speed. An unsupervised Artificial Neural Network has been proposed to accomplish the study goal, specifically a type called Learning Vector Quantization (LVQ). A step-by-step method is used to cope with the difficulties that are usually associated with the computation procedures inherent in this kind of network. Such a systematic approach may close the gap between the user's hesitation to make use of the capabilities of this type of neural network and the relative complexity of the computation procedures involved. The results reveal the possibility of using LVQ for Pan Evaporation depth estimation, where good agreement has been noticed between the outputs of the proposed network and the observed values of the Pan Evaporation depth, with a correlation coefficient of 0.986.

Dissertations / Theses on the topic "Learning vector quantization neural network"

1

Soflaei, Shahrbabak Masoumeh. "Aggregated Learning: An Information Theoretic Framework to Learning with Neural Networks." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41399.

Abstract:
Deep learning techniques have achieved profound success in many challenging real-world applications, including image recognition, speech recognition, and machine translation. This success has increased the demand for developing deep neural networks and more effective learning approaches. The aim of this thesis is to consider the problem of learning a neural network classifier and to propose a novel approach to solve this problem under the Information Bottleneck (IB) principle. Based on the IB principle, we associate with the classification problem a representation learning problem, which we call "IB learning". A careful investigation shows there is an unconventional quantization problem that is closely related to IB learning. We formulate this problem and call it "IB quantization". We show that IB learning is, in fact, equivalent to the IB quantization problem. The classical results in rate-distortion theory then suggest that IB learning can benefit from a vector quantization approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted with some variational techniques, results in a novel learning framework that we call "Aggregated Learning (AgrLearn)", for classification with neural network models. In this framework, several objects are jointly classified by a single neural network. In other words, AgrLearn can simultaneously optimize against multiple data samples, which differs from standard neural networks. Two variants are introduced, "deterministic AgrLearn (dAgrLearn)" and "probabilistic AgrLearn (pAgrLearn)". We verify the effectiveness of this framework through extensive experiments on standard image recognition tasks. We show the performance of this framework on a real-world natural language processing (NLP) task, sentiment analysis. We also compare the effectiveness of this framework with other available frameworks for the IB learning problem.
2

Clayton, Arnshea. "The Relative Importance of Input Encoding and Learning Methodology on Protein Secondary Structure Prediction." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/19.

Abstract:
In this thesis the relative importance of input encoding and learning algorithm on protein secondary structure prediction is explored. A novel input encoding, based on multidimensional scaling applied to a recently published amino acid substitution matrix, is developed and shown to be superior to an arbitrary input encoding. Both decimal valued and binary input encodings are compared. Two neural network learning algorithms, Resilient Propagation and Learning Vector Quantization, which have not previously been applied to the problem of protein secondary structure prediction, are examined. Input encoding is shown to have a greater impact on prediction accuracy than learning methodology with a binary input encoding providing the highest training and test set prediction accuracy.
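The input encoding described in this abstract starts from an amino acid substitution matrix and embeds each residue as a low-dimensional vector via multidimensional scaling. A generic classical-MDS sketch (the distance matrix and target dimension are placeholders; the thesis's exact preprocessing is not specified in the abstract):

```python
import numpy as np

def classical_mds(D, k=3):
    """Classical MDS: embed items with pairwise distance matrix D into k dims."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # take the top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Applying `classical_mds` to a distance matrix derived from a substitution matrix would yield one coordinate vector per amino acid, which could then be concatenated over a sliding window to form network inputs.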
3

Ramesh, Rohit. "Abnormality detection with deep learning." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118542/1/Rohit_Ramesh_Thesis.pdf.

Abstract:
This thesis is a step forward in developing the scientific basis for abnormality detection of individuals in crowded environments by utilizing a deep learning method. Such applications for monitoring human behavior in crowds are useful for public safety and security purposes.
4

Lundberg, Emil. "Adding temporal plasticity to a self-organizing incremental neural network using temporal activity diffusion." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180346.

Abstract:
Vector Quantization (VQ) is a classic optimization problem and a simple approach to pattern recognition. Applications include lossy data compression, clustering, and speech and speaker recognition. Although VQ has largely been replaced by time-aware techniques like Hidden Markov Models (HMMs) and Dynamic Time Warping (DTW) in some applications, such as speech and speaker recognition, VQ still retains some significance due to its much lower computational cost, especially for embedded systems. A recent study also demonstrates a multi-section VQ system which achieves performance rivaling that of DTW in an application to handwritten signature recognition, at a much lower computational cost. Adding sensitivity to temporal patterns to a VQ algorithm could help improve such results further. SOTPAR2 is such an extension of Neural Gas, an artificial neural network algorithm for VQ. SOTPAR2 uses a conceptually simple approach, based on adding lateral connections between network nodes and creating “temporal activity” that diffuses through adjacent nodes. The activity in turn biases the nearest-neighbor classifier toward network nodes with high activity, and the SOTPAR2 authors report improvements over Neural Gas in an application to time series prediction. This report presents an investigation of how this same extension affects the quantization and prediction performance of the self-organizing incremental neural network (SOINN) algorithm. SOINN is a VQ algorithm which automatically chooses a suitable codebook size and can also be used for clustering with arbitrary cluster shapes. This extension is found not to improve the performance of SOINN; in fact, it makes performance worse in all experiments attempted.
A discussion of this result is provided, along with a discussion of the impact of the algorithm parameters, and possible future work to improve the results is suggested.
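The "temporal activity" mechanism summarized in this abstract can be caricatured in a few lines: each node carries an activity value that is boosted when the node wins, decays over time, and diffuses along lateral connections, and the winner search subtracts a share of that activity from the distance. All parameter names and values below are illustrative assumptions, not SOTPAR2's or SOINN's actual equations:

```python
import numpy as np

def biased_winner(x, nodes, activity, bias=0.5):
    """Nearest-node search biased toward nodes with high temporal activity."""
    scores = np.linalg.norm(nodes - x, axis=1) - bias * activity
    return int(np.argmin(scores))

def diffuse_activity(activity, adjacency, winner, boost=1.0, decay=0.8, spread=0.1):
    """Decay all activity, boost the winner, and diffuse a share of the
    activity along the lateral-connection graph."""
    a = activity * decay
    a[winner] += boost
    return a + spread * adjacency @ a
```

With `bias=0`, `biased_winner` reduces to plain nearest-neighbor search, which matches the report's finding that the temporal bias is an optional addition rather than a core requirement.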
5

Filho, Luiz Soares de Andrade. "Projeto de classificadores de padrões baseados em protótipos usando evolução diferencial." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=14230.

Abstract:
In this Master's dissertation we introduce an evolutionary approach for the efficient design of prototype-based classifiers using differential evolution (DE). For this purpose we amalgamate ideas from the Learning Vector Quantization (LVQ) framework for supervised classification by Kohonen (KOHONEN, 2001) with the DE-based automatic clustering approach by Das et al. (DAS; ABRAHAM; KONAR, 2008) in order to evolve supervised classifiers. The proposed approach is able to determine both the optimal number of prototypes per class and the corresponding positions of these prototypes in the data space. By means of comprehensive computer simulations on benchmarking datasets, we show that the resulting classifier, named LVQ-DE, consistently outperforms state-of-the-art prototype-based classifiers, using a much smaller number of prototypes.
6

Cruz, Magnus Alencar da. "Avaliação de redes neurais competitivas em tarefas de quantização vetorial: um estudo comparativo." Universidade Federal do Ceará, 2007. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=2016.

Abstract:
The main goal of this master's thesis was to carry out a comparative study of the performance of unsupervised competitive neural network algorithms in vector quantization (VQ) tasks and related applications, such as cluster analysis and image compression. This study is mainly motivated by the relative scarcity of systematic comparisons between neural and non-neural algorithms for VQ in the specialized literature. A total of seven algorithms are evaluated, namely: K-means, WTA, FSCL, SOM, Neural-Gas, FuzzyCL and RPCL. Of particular interest is the problem of selecting an adequate number of neurons for a given vector quantization problem.
Since there is no widespread method that works satisfactorily for all applications, the remaining alternative is to evaluate the influence that each type of evaluation metric has on a specific algorithm. For example, the aforementioned vector quantization algorithms are widely used in clustering-related tasks. For this type of application, cluster validation is based on indices that quantify the degrees of compactness and separability among clusters, such as the Dunn index and the Davies-Bouldin (DB) index. In image compression tasks, however, a given vector quantization algorithm is evaluated in terms of the quality of the reconstructed information, so the most used evaluation metrics are the mean squared quantization error (MSQE) and the peak signal-to-noise ratio (PSNR). This work verifies empirically that, while the Dunn and DB indices favor architectures with many prototypes (Dunn) or with few prototypes (DB), the MSE and PSNR metrics always favor much larger numbers. None of the aforementioned evaluation metrics takes into account the number of parameters of the model. Thus, this thesis evaluates the feasibility of using Akaike's information criterion (AIC) and Rissanen's minimum description length (MDL) criterion to select the optimal number of prototypes. This type of evaluation metric indeed reveals itself useful in the search for the number of prototypes that simultaneously satisfies conflicting criteria, i.e. those favoring more compact and cohesive clusters (Dunn and DB indices) versus those searching for very low reconstruction errors (MSE and PSNR). Thus, the number of prototypes suggested by AIC and MDL is generally an intermediate value, i.e. neither as low as suggested by the Dunn and DB indices, nor as high as suggested by the MSE and PSNR metrics.
Another important conclusion is that sophisticated models, such as the SOM and Neural-Gas networks, do not necessarily have the best performance in clustering and VQ tasks. For example, the FSCL and FuzzyCL algorithms present better results in terms of the quality of the reconstructed information, with FSCL presenting the better cost-benefit ratio due to its lower computational cost. As a final remark, it is worth emphasizing that if a given algorithm has its parameters suitably tuned and its performance fairly evaluated, the differences in performance compared to other prototype-based algorithms are minimal, with computational cost being used to break ties.
7

Pahkasalo, Carolina, and André Sollander. "Adaptive Energy Management Strategies for Series Hybrid Electric Wheel Loaders." Thesis, Linköpings universitet, Fordonssystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166284.

Abstract:
An emerging technology is the hybridization of wheel loaders. Since wheel loaders commonly operate in repetitive cycles, it should be possible to use this information to develop an efficient energy management strategy that decreases fuel consumption. The purpose of this thesis is to evaluate if and how this can be done in a real-time online application. The strategy that is developed is based on pattern recognition and the Equivalent Consumption Minimization Strategy (ECMS), together called Adaptive ECMS (A-ECMS). Pattern recognition uses information about the repetitive cycles to predict the operating cycle, which can be done with Neural Network or Rule-Based methods. The prediction is then used in ECMS to compute the optimal power distribution between fuel and battery power. For a robust system, it is important to include stability mechanisms in ECMS to protect the machine, which can be done by adjusting the cost function that is minimized. The result of these implementations in a quasistatic simulation environment is an improvement in fuel consumption of 7.59 % compared to not utilizing the battery at all.
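At its core, ECMS reduces to a pointwise minimization: at each instant, choose the battery power that minimizes fuel power plus an equivalence factor s times battery power. A toy discretized sketch (the quadratic engine fuel model, the equivalence factor, and the power limits are invented for illustration and are not the thesis's values):

```python
import numpy as np

def ecms_split(p_demand, s=2.0, p_batt_max=30.0):
    """Toy ECMS: grid-search the battery power (kW, > 0 means discharge)
    that minimizes engine fuel power plus s-weighted battery power."""
    def fuel_power(p_eng):                     # invented convex engine model
        return 1.1 * p_eng + 0.01 * p_eng ** 2
    best_p, best_cost = 0.0, float("inf")
    for p_batt in np.linspace(-p_batt_max, p_batt_max, 241):
        p_eng = p_demand - p_batt              # power balance
        if p_eng < 0:                          # engine cannot absorb power
            continue
        cost = fuel_power(p_eng) + s * p_batt  # equivalent consumption
        if cost < best_cost:
            best_p, best_cost = p_batt, cost
    return best_p
```

In an A-ECMS scheme, the pattern-recognition stage would adapt `s` to the predicted operating cycle; here it is a fixed constant.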
8

Brosnan, Timothy Myers. "Neural network and vector quantization classifiers for recognition and inspection applications." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15378.

9

Khudhair, Ali Dheyaa. "Vector Quantization Using ODE Based Neural Network with Varying Vigilance Parameter." OpenSIUC, 2012. https://opensiuc.lib.siu.edu/dissertations/478.

Abstract:
The importance of Vector Quantization has been increasing, and it is becoming a vital element in the classification and clustering of different types of information, supporting the development of machine learning and decision making; however, the different techniques that implement Vector Quantization have always come up short in some respect. Many researchers have turned toward the idea of creating a Vector Quantization mechanism that is fast and can be used to classify data that is rapidly being generated from some source, and most such mechanisms depend on a specific style of neural network; this research is one of those attempts. One of the dilemmas that this technology faces is the compromise that has to be made between the accuracy of the results and the speed of the classification or quantization process. The complexity of the suggested algorithms also makes it very hard to implement and realize any of them in hardware that can be used as a fast online classifier able to keep up with the speed of the information being presented to the system; examples of such information sources are high-speed processors and computer network intrusion detection systems. This research focuses on creating a Vector Quantizer using neural networks. The neural network used in this study is a novel one with a unique feature: it is based solely on a set of ordinary differential equations. The input data are injected into those equations, and classification is based on finding the equilibrium points of the system in the presence of those input patterns. The elimination of conditional statements in this neural network means that the implementation and execution of the classification process have one single path that can accommodate any value.
A single execution path allows easier algorithm analysis and opens the possibility of realizing the network on a purely analog circuit whose operation speed can match the speed of incoming information and classify the data in real time. The details of this dynamical system are provided in this research, and the shortcomings that we faced and how we overcame them are explained in particular. Also, a drastic change in the way of looking at the speed vs. accuracy compromise has been made and presented in this research, aiming toward creating a technique that can produce accurate results at high speed.
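The abstract does not give the dissertation's actual ODE system, but the idea of classifying by equilibrium points can be sketched with a simple gradient flow whose stable equilibria sit at stored prototypes. Everything below (the dynamics, step size, iteration count, and prototypes) is an illustrative assumption:

```python
import numpy as np

def ode_classify(x0, prototypes, dt=0.05, steps=400):
    """Integrate dx/dt = -(x - p_nearest) with Euler steps: the state
    flows to an equilibrium at one of the stored prototypes, and the
    index of that equilibrium serves as the class label."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        k = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
        x += dt * (prototypes[k] - x)   # relax toward the nearest equilibrium
    return k, x
```

In this caricature the classification decision is read off from where the trajectory settles rather than from an explicit comparison rule, which mirrors the equilibrium-based classification described in the abstract, though the dissertation's branch-free formulation would differ.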
10

Kalmár, Marcus, and Joel Nilsson. "The art of forecasting – an analysis of predictive precision of machine learning models." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-280675.

Abstract:
Forecasting is used for decision making, and unreliable predictions can instill a false sense of confidence. Traditional time series modelling is a statistical art form rather than a science, and errors can occur due to limitations of human judgment. In minimizing the risk of falsely specifying a process, the practitioner can make use of machine learning models. In an effort to find out if there is a benefit in using models that require less human judgment, the machine learning models Random Forest and Neural Network have been used to model a VAR(1) time series. In addition, the classical time series models AR(1), AR(2), VAR(1) and VAR(2) have been used as a comparative foundation. The Random Forest and Neural Network are trained and ultimately the models are used to make predictions evaluated by RMSE. All models yield scattered forecast results except for the Random Forest, which steadily yields comparatively precise predictions. The study shows that there is a definitive benefit in using Random Forests to eliminate the risk of falsely specifying a process, and they do in fact provide better results than a correctly specified model.
APA, Harvard, Vancouver, ISO, and other styles
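The evaluation pipeline the thesis abstract describes, fitting a model to a simulated VAR(1) series and scoring one-step forecasts by RMSE, can be sketched for the classical VAR(1) case. The coefficient matrix, sample sizes, and noise level below are illustrative, not taken from the cited work:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.6, 0.2], [0.1, 0.5]])  # assumed VAR(1) coefficient matrix

# Simulate a bivariate VAR(1) process: y_t = A @ y_{t-1} + e_t
T = 500
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = A @ Y[t - 1] + rng.normal(0.0, 0.1, 2)

# Fit VAR(1) by least squares on the first 400 observations:
# regress y_t on y_{t-1} to recover an estimate of A.
Xtr, Ytr = Y[:399], Y[1:400]
A_hat = np.linalg.lstsq(Xtr, Ytr, rcond=None)[0].T

# One-step-ahead forecasts on the held-out tail, scored by RMSE.
pred = Y[400:-1] @ A_hat.T
rmse = np.sqrt(np.mean((Y[401:] - pred) ** 2))
```

With a correctly specified model the RMSE approaches the innovation standard deviation, which is the baseline the machine learning models in the thesis are compared against.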
More sources

Books on the topic "Learning vector quantization neural network"

1

Dhawan, Atam P., and United States National Aeronautics and Space Administration, eds. LVQ and backpropagation neural networks applied to NASA SSME data. National Aeronautics and Space Administration, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Learning vector quantization neural network"

1

Visa, Ari. "Stability Study of Learning Vector Quantization." In International Neural Network Conference. Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Baras, John S., and Anthony LaVigna. "Convergence of the Vectors in Kohonen’s Learning Vector Quantization." In International Neural Network Conference. Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bennani, Younès, Françoise Fogelman Soulie, and Patrick Gallinari. "Text-Dependent Speaker Identification Using Learning Vector Quantization." In International Neural Network Conference. Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_190.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Daoqiang, Songcan Chen, and Zhi-Hua Zhou. "Fuzzy-Kernel Learning Vector Quantization." In Advances in Neural Networks – ISNN 2004. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hammer, B., M. Strickert, and T. Villmann. "Learning Vector Quantization for Multimodal Data." In Artificial Neural Networks — ICANN 2002. Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_60.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hofmann, Daniela, and Barbara Hammer. "Kernel Robust Soft Learning Vector Quantization." In Artificial Neural Networks in Pattern Recognition. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33212-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Ning, Bernardete Ribeiro, Armando Vieira, João Duarte, and João Neves. "Weighted Learning Vector Quantization to Cost-Sensitive Learning." In Artificial Neural Networks – ICANN 2010. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15825-4_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bauckhage, C., R. Ramamurthy, and R. Sifa. "Hopfield Networks for Vector Quantization." In Artificial Neural Networks and Machine Learning – ICANN 2020. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61616-8_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pandya, Abhijit S., and Robert B. Macy. "Kohonen Networks and Learning Vector Quantization." In Pattern Recognition with Neural Networks in C++. CRC Press, 2021. http://dx.doi.org/10.1201/9780138744274-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhu, Xibin, Frank-Michael Schleif, and Barbara Hammer. "Patch Processing for Relational Learning Vector Quantization." In Advances in Neural Networks – ISNN 2012. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31346-2_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Learning vector quantization neural network"

1

Zheng, Lou, Fan Lei, Tao Weisong, and Xu Chao. "A nonlinear convolution neural network quantization method." In International Conference on Cloud Computing, Performance Computing, and Deep Learning, edited by Wanyang Dai and Xiangjie Kong. SPIE, 2024. http://dx.doi.org/10.1117/12.3051024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lange-Geisler, Mandy, Klaus Dohmen, and Thomas Villmann. "Learning of Probability Estimates for System and Network Reliability Analysis by Means of Matrix Learning Vector Quantization." In ESANN 2025. Ciaco - i6doc.com, 2025. https://doi.org/10.14428/esann/2025.es2025-67.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kozukue, Wakae, and Hideyuki Miyaji. "Structural Identification Using Learning Vector Quantization Neural Network." In ASME 2001 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/imece2001/ad-23713.

Full text
Abstract:
Abstract The Learning Vector Quantization (LVQ) neural network is applied to the defect identification problem for structures, which is important when constructing the mathematical model of a structure. In this study the eigenmodes of a plate obtained from FEM and the location of a defect contained in that plate are used as the training data for the neural network, and the position of the defect is identified by presenting unlearned input data to the trained network. As a result, better accuracy is obtained than with the backpropagation neural network commonly used in various studies.
APA, Harvard, Vancouver, ISO, and other styles
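The LVQ classifier used in this and several neighbouring entries follows Kohonen's basic LVQ1 update: the winning prototype is pulled toward a same-class input and pushed away from a different-class input. A minimal NumPy sketch, with all function names, learning rate, and epoch count chosen for illustration rather than taken from any cited work:

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=50):
    """Basic LVQ1: the nearest prototype is pulled toward inputs of its
    own class and pushed away from inputs of a different class."""
    W = np.asarray(prototypes, dtype=float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winner
            sign = 1.0 if proto_labels[j] == label else -1.0
            W[j] += sign * lr * (x - W[j])
    return W

def lvq_predict(W, proto_labels, X):
    """Classify each input by the label of its nearest prototype."""
    return np.array([proto_labels[int(np.argmin(np.linalg.norm(W - x, axis=1)))]
                     for x in X])
```

Practical implementations usually also decay `lr` over epochs; the constant rate here keeps the sketch short.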
4

Brinkrolf, Johannes, and Barbara Hammer. "Federated Learning Vector Quantization." In ESANN 2021 - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Ciaco - i6doc.com, 2021. http://dx.doi.org/10.14428/esann/2021.es2021-141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kohonen, T. "Improved versions of learning vector quantization." In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Baras, J. S., and A. LaVigna. "Convergence of Kohonen's learning vector quantization." In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ahalt, Jung, and Krishnamurthy. "Radar target identification using the learning vector quantization neural network." In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118421.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ravichandran, Jensun, Thomas Villmann, and Marika Kaden. "RecLVQ: Recurrent Learning Vector Quantization." In ESANN 2021 - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Ciaco - i6doc.com, 2021. http://dx.doi.org/10.14428/esann/2021.es2021-90.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shi, Qinzhong, Ichiro Hagiwara, and Toshiaki Sekine. "Structural Damage Detection and Identification Using Learning Vector Quantization Neural Network." In ASME 1999 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/detc99/movic-8401.

Full text
Abstract:
Abstract This research deals with structural damage detection using experimentally measured modal parameters, such as modal frequencies and modal shapes. Changes of local structural parameters, induced by damage, will affect the local stiffness and cause changes in the modal frequencies and modal shapes of the structure. Using these observable values to detect damage in the structure is feasible and practical to implement. A Learning Vector Quantization (LVQ) neural network based pattern classifier is used to detect the location of damage, and a method of reducing the density of the input vector to the neural network is proposed to increase the accuracy of detection. Several numerical examples show that the proposed method effectively increases the rate of damage detection. Finally, a practical application example of damage detection for a turbine blade is used to demonstrate and verify the developed approach.
APA, Harvard, Vancouver, ISO, and other styles
10

Damarla, Seshu, and Madhusree Kundu. "Classification of Tea Samples using Learning Vector Quantization Neural Network." In 2020 IEEE Applied Signal Processing Conference (ASPCON). IEEE, 2020. http://dx.doi.org/10.1109/aspcon49795.2020.9276662.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Learning vector quantization neural network"

1

Gabe V. Garcia. Eddy Current Signature Classification of Steam Generator Tube Defects Using A Learning Vector Quantization Neural Network. Office of Scientific and Technical Information (OSTI), 2005. http://dx.doi.org/10.2172/836575.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Miles, Gaines E., Yael Edan, F. Tom Turpin, et al. Expert Sensor for Site Specification Application of Agricultural Chemicals. United States Department of Agriculture, 1995. http://dx.doi.org/10.32747/1995.7570567.bard.

Full text
Abstract:
In this work multispectral reflectance images are used in conjunction with a neural network classifier for the purpose of detecting and classifying weeds under real field conditions. Multispectral reflectance images which contained different combinations of weeds and crops were taken under actual field conditions. This multispectral reflectance information was used to develop algorithms that could segment the plants from the background as well as classify them into weeds or crops. In order to segment the plants from the background, the multispectral reflectance of plants and background was studied and a relationship was derived. It was found that using a ratio of two wavelength reflectance images (750nm and 670nm) it was possible to segment the plants from the background. Once this was accomplished it was then possible to classify the segmented images into weed or crop by use of the neural network. The neural network developed for this work is a modification of the standard learning vector quantization algorithm. This neural network was modified by replacing the time-varying adaptation gain with a constant adaptation gain and a binary reinforcement function. This improved accuracy and training time as well as introducing several new properties such as hill climbing and momentum addition. The network was trained and tested with different wavelength combinations in order to find the best results. Finally, the results of the classifier were evaluated using a pixel-based method and a block-based method. In the pixel-based method every single pixel is evaluated to test whether it was classified correctly or not; the best weed classification result was 81%, with an associated crop classification accuracy of 57%. In the block-based classification method, the image was divided into blocks and each block was evaluated to determine whether it contained weeds or not. Different block sizes and thresholds were tested. The best results for this method were 97% for a block size of 8 inches and a pixel threshold of 60. A simulation model was developed to 1) quantify the effectiveness of a site-specific sprayer, and 2) evaluate the influence of different design parameters on the efficiency of the site-specific sprayer. In each iteration of this model, infested areas (weed patches) in the field were randomly generated and the amount of herbicide required to spray these areas was calculated. The effectiveness of the sprayer was estimated for different stain sizes, nozzle types (conic and flat), nozzle sizes and stain detection levels of the identification system. Simulation results indicated that the flat nozzle is much more effective than the conic nozzle and that its relative efficiency is greater for small nozzle sizes. By using a site-specific sprayer, the average ratio between the sprayed areas and the stain areas is about 1.1 to 1.8, which can save up to 92% of herbicides, especially when the proportion of stain areas is small.
APA, Harvard, Vancouver, ISO, and other styles
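The modification described in this abstract, replacing Kohonen's time-varying adaptation gain with a constant gain and a binary reinforcement function, amounts to a small change in the LVQ update step. The abstract does not give the exact form used, so the sketch below is one plausible reading; the function name `lvq_update` and the `gain` value are hypothetical:

```python
import numpy as np

def lvq_update(W, proto_labels, x, label, gain=0.05):
    """One LVQ update step with binary reinforcement: r = +1 when the
    winning prototype has the right class, r = -1 otherwise, using a
    constant adaptation gain instead of Kohonen's decaying alpha(t)."""
    j = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winning prototype
    r = 1.0 if proto_labels[j] == label else -1.0      # binary reinforcement
    W[j] += r * gain * (x - W[j])                      # constant gain
    return W
```

Keeping the gain constant means the network keeps adapting at the same rate throughout training, which is consistent with the hill-climbing behaviour the abstract mentions.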
3

Grossberg, Stephen. Neural Network Models of Vector Coding, Learning, and Trajectory Formation During Planned and Reactive Arm and Eye Movements. Defense Technical Information Center, 1989. http://dx.doi.org/10.21236/ada206737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Puttanapong, Nattapong, Arturo M. Martinez Jr, Mildred Addawe, Joseph Bulan, Ron Lester Durante, and Marymell Martillan. Predicting Poverty Using Geospatial Data in Thailand. Asian Development Bank, 2020. http://dx.doi.org/10.22617/wps200434-2.

Full text
Abstract:
This study examines an alternative approach in estimating poverty by investigating whether readily available geospatial data can accurately predict the spatial distribution of poverty in Thailand. It also compares the predictive performance of various econometric and machine learning methods such as generalized least squares, neural network, random forest, and support vector regression. Results suggest that intensity of night lights and other variables that approximate population density are highly associated with the proportion of population living in poverty. The random forest technique yielded the highest level of prediction accuracy among the methods considered, perhaps due to its capability to fit complex association structures even with small and medium-sized datasets.
APA, Harvard, Vancouver, ISO, and other styles
5

Emma, Olsson. Kolinlagring med biokol : Att nyttja biokol och hydrokol som kolsänka i östra Mellansverige. Linköping University Electronic Press, 2025. https://doi.org/10.3384/9789180759496.

Full text
Abstract:
Pest inventory of a field is a way of knowing when the thresholds for pest control are reached. It is of increasing interest to use machine learning to automate this process; however, many challenges arise with detection of small insects both in traps and on plants. This thesis investigates the prospects of developing an automatic warning system for notifying a user when certain pests are detected in a trap. For this, a sliding-window detector based on a histogram-of-oriented-gradients support vector machine was implemented. Trap detection with neural network models and a size-check function were tested for narrowing the detections down to pests of a certain size. The results indicate that, with further refinement and more training images, this approach might hold potential for fungus gnats and rape beetles. Further, this thesis also investigates the detection performance of Mask R-CNN and YOLOv5 on different insects in fields for the purpose of automating the data gathering process. The models showed promise for detection of rape beetles. YOLOv5 also showed promise as a multi-class detector of different insects, with sizes ranging from small rape beetles to larger bumblebees.
APA, Harvard, Vancouver, ISO, and other styles