
Dissertations / Theses on the topic 'Linear programming-based discriminant analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 24 dissertations / theses for your research on the topic 'Linear programming-based discriminant analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Wilgenbus, Erich Feodor. "The file fragment classification problem: a combined neural network and linear programming discriminant model approach." Thesis, North-West University, 2013. http://hdl.handle.net/10394/10215.

Abstract:
The increased use of digital media to store legal, as well as illegal, data has created the need for specialized tools that can monitor, control and even recover this data. An important task in computer forensics and security is to identify the true file type to which a computer file or computer file fragment belongs. File type identification is traditionally done by means of metadata, such as file extensions and file header and footer signatures. As a result, traditional metadata-based file object type identification techniques work well in cases where the required metadata is available and unaltered. However, traditional approaches are not reliable when the integrity of metadata is not guaranteed or metadata is unavailable. As an alternative, any pattern in the content of a file object can be used to determine the associated file type. This is called content-based file object type identification. Supervised learning techniques can be used to infer a file object type classifier by exploiting some unique pattern that underlies a file type's common file structure. This study builds on existing literature regarding the use of supervised learning techniques for content-based file object type identification, and explores the combined use of multilayer perceptron neural network classifiers and linear programming-based discriminant classifiers as a solution to the multiple class file fragment type identification problem. The purpose of this study was to investigate and compare the use of a single multilayer perceptron neural network classifier, a single linear programming-based discriminant classifier and a combined ensemble of these classifiers in the field of file type identification. The ability of each individual classifier and the ensemble of these classifiers to accurately predict the file type to which a file fragment belongs was tested empirically. The study found that both a multilayer perceptron neural network and a linear programming-based discriminant classifier (used in a round robin) seemed to perform well in solving the multiple class file fragment type identification problem. The results of combining multilayer perceptron neural network classifiers and linear programming-based discriminant classifiers in an ensemble were not better than those of the single optimized classifiers.
MSc (Computer Science), North-West University, Potchefstroom Campus, 2013
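The linear programming-based discriminant classifiers studied here descend from the Freed–Glover family of LP discriminant models. As a rough illustration of that family (not the thesis's own code), the sketch below solves a two-class "minimize the sum of deviations" LP with SciPy on synthetic stand-in data; the data, variable names and the normalization constraint are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
XA = rng.normal(0.0, 1.0, size=(30, 4))   # fragments of hypothetical "type A"
XB = rng.normal(1.5, 1.0, size=(30, 4))   # fragments of hypothetical "type B"
nA, nB, p = len(XA), len(XB), XA.shape[1]
n = nA + nB

# Variables: w (p weights), b (cutoff), d (n non-negative deviations).
# Objective: minimize the total deviation sum(d).
c = np.concatenate([np.zeros(p + 1), np.ones(n)])

# Class A should score w.x >= b and class B should score w.x <= b,
# each constraint relaxed by its own deviation variable.
rows = []
for i, x in enumerate(XA):                 # -w.x + b - d_i <= 0
    r = np.zeros(p + 1 + n); r[:p] = -x; r[p] = 1.0; r[p + 1 + i] = -1.0
    rows.append(r)
for j, x in enumerate(XB):                 #  w.x - b - d_j <= 0
    r = np.zeros(p + 1 + n); r[:p] = x; r[p] = -1.0; r[p + 1 + nA + j] = -1.0
    rows.append(r)

# Normalization w.(mean_A - mean_B) = 1 rules out the trivial solution w = 0.
A_eq = np.zeros((1, p + 1 + n)); A_eq[0, :p] = XA.mean(0) - XB.mean(0)

res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(n),
              A_eq=A_eq, b_eq=[1.0],
              bounds=[(None, None)] * (p + 1) + [(0, None)] * n)
w, b = res.x[:p], res.x[p]
print("type A correctly scored:", (XA @ w >= b).mean())
print("type B correctly scored:", (XB @ w <= b).mean())
```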
2

Zaeri, Naser. "Computation and memory efficient face recognition using binarized eigenphases and component-based linear discriminant analysis for wide range applications." Thesis, University of Surrey, 2007. http://epubs.surrey.ac.uk/844078/.

Abstract:
Face recognition has many important applications in many sectors, in particular commercial and law enforcement. This thesis presents two novel methods which make face recognition more practical. In the first method, we propose an attractive solution for efficient face recognition systems that utilize low-memory devices. The new technique applies principal component analysis to the binarized phase spectrum of the Fourier transform of the covariance matrix constructed from the MPEG-7 Fourier Feature Descriptor vectors of the face images. Most of the algorithms proposed for face recognition are computationally exhaustive and hence cannot be used on devices constrained by limited memory; our method may therefore play an important role in this area. The second method presented in this thesis proposes a new approach for efficient face representation and recognition by finding the best location for component-based linear discriminant analysis. In this regard, the face image is decomposed into a number of components of a certain size. The proposed scheme then finds the best representation of the face image in the most efficient way, taking into consideration both the recognition rate and the processing time. Note that the effect of variation in a face image, when it is taken as a whole, is reduced when it is divided into components; as a result, the performance of the system is enhanced. This method should find applications in systems requiring very high recognition and verification rates. Further, we demonstrate a solution to the problem of face occlusion using this method. The experimental results show that both proposed methods enhance the performance of the face recognition system and achieve a substantial saving in computation time when compared to other known methods. It will be shown that the two proposed methods are very attractive for a wide range of face recognition applications.
3

Umunoza, Gasana Emelyne. "Misclassification Probabilities through Edgeworth-type Expansion for the Distribution of the Maximum Likelihood based Discriminant Function." Licentiate thesis, Linköpings universitet, Tillämpad matematik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175873.

Abstract:
This thesis covers misclassification probabilities via an Edgeworth-type expansion of the maximum likelihood based discriminant function. When deriving misclassification errors, first the expectation and variance in the population are assumed to be known, with the variance the same across populations; thereafter, we consider the case where those parameters are unknown. Cumulants of the discriminant function for discriminating between two multivariate normal populations are derived. Approximate probabilities of the misclassification errors are established via an Edgeworth-type expansion using a standard normal distribution.
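For orientation, the known-parameter baseline behind this analysis is classical: for two multivariate normal populations with a common covariance, the linear discriminant function misclassifies with probability Φ(−Δ/2), where Δ is the Mahalanobis distance between the means. The sketch below computes only this baseline with illustrative numbers; the thesis's Edgeworth-type corrections for estimated parameters are not reproduced.

```python
import numpy as np
from scipy.stats import norm

# Two multivariate normal populations with a common covariance (toy numbers).
mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

diff = mu1 - mu2
delta = float(np.sqrt(diff @ np.linalg.solve(Sigma, diff)))  # Mahalanobis distance

def discriminant(x):
    """Linear discriminant: L(x) > 0 assigns x to population 1."""
    return diff @ np.linalg.solve(Sigma, x - (mu1 + mu2) / 2)

print("Delta:", delta)
print("misclassification probability Phi(-Delta/2):", norm.cdf(-delta / 2))
print("L at the midpoint (should be 0):", discriminant((mu1 + mu2) / 2))
```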
4

Phan, Duy Nhat. "Algorithmes basés sur la programmation DC et DCA pour l’apprentissage avec la parcimonie et l’apprentissage stochastique en grande dimension." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0235/document.

Abstract:
These days, with the increasing abundance of high-dimensional data, high-dimensional classification problems have been highlighted as a challenge in the machine learning community and have attracted a great deal of attention from researchers in the field. In recent years, sparse and stochastic learning techniques have proven to be useful for this kind of problem. In this thesis, we focus on developing optimization approaches for solving some classes of optimization problems in these two topics. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as among the most powerful tools in optimization. The thesis is composed of three parts. The first part tackles the issue of variable selection. The second part studies the problem of group variable selection. The final part of the thesis concerns stochastic learning. In the first part, we start with variable selection in Fisher's discriminant problem (Chapter 2) and the optimal scoring problem (Chapter 3), which are two different approaches to supervised classification in the high-dimensional setting, in which the number of features is much larger than the number of observations. Continuing this study, we study the structure of the sparse covariance matrix estimation problem and propose four appropriate DCA-based algorithms (Chapter 4). Two applications in finance and classification are conducted to illustrate the efficiency of our methods. The second part studies the L_p,0 regularization for group variable selection (Chapter 5). Using a DC approximation of the L_p,0 norm, we show that the approximate problem, with suitable parameters, is equivalent to the original problem. Considering two equivalent reformulations of the approximate problem, we develop DCA-based algorithms to solve them. Regarding applications, we implement the proposed algorithms for group feature selection in the optimal scoring problem and in the estimation problem of multiple covariance matrices. In the third part of the thesis, we introduce a stochastic DCA for large-scale parameter estimation problems (Chapter 6) in which the objective function is a large sum of nonconvex components. As an application, we propose a special stochastic DCA for the log-linear model incorporating latent variables.
5

Einestam, Ragnar, and Karl Casserfelt. "PiEye in the Wild: Exploring Eye Contact Detection for Small Inexpensive Hardware." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20696.

Abstract:
Eye contact detection sensors have the possibility of inferring user attention, which can be utilized by a system in a multitude of different ways, including supporting human-computer interaction and measuring human attention patterns. In this thesis we attempt to build a versatile eye contact sensor using a Raspberry Pi that is suited for real world practical usage. In order to ensure practicality, we constructed a set of criteria for the system based on previous implementations. To meet these criteria, we opted to use an appearance-based machine learning method where we train a classifier with training images in order to infer if users look at the camera or not. Our aim was to investigate how well we could detect eye contacts on the Raspberry Pi in terms of accuracy, speed and range. After extensive testing on combinations of four different feature extraction methods, we found that Linear Discriminant Analysis compression of pixel data provided the best overall accuracy, but Principal Component Analysis compression performed the best when tested on images from the same dataset as the training data. When investigating the speed of the system, we found that down-scaling input images had a huge effect on the speed, but also lowered the accuracy and range. While we managed to mitigate the effects the scale had on the accuracy, the range of the system is still relative to the scale of input images and by extension speed.
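As a rough illustration of the two compression schemes compared above (not the PiEye code), the following scikit-learn sketch reduces stand-in "pixel" data with LDA and with PCA and scores a downstream classifier; the data, dimensions and the linear SVM are assumptions of this sketch.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in "pixel" data: two classes (eye contact vs. no eye contact).
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 256))          # 400 images, 16x16 pixels flattened
y = rng.integers(0, 2, size=400)
X[y == 1, :8] += 1.0                     # give class 1 a detectable signature

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# LDA compresses to n_classes - 1 dimensions; PCA keeps k leading components.
for name, reducer in [("LDA", LinearDiscriminantAnalysis(n_components=1)),
                      ("PCA", PCA(n_components=20))]:
    clf = make_pipeline(reducer, SVC(kernel="linear")).fit(Xtr, ytr)
    print(name, "compression, test accuracy:", clf.score(Xte, yte))
```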
6

Spinnato, Juliette. "Modèles de covariance pour l'analyse et la classification de signaux électroencéphalogrammes." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4727/document.

Abstract:
The present thesis falls within the framework of analyzing and classifying electroencephalogram (EEG) signals using discriminant analysis. These multi-sensor signals, which are by nature highly correlated spatially and temporally, are considered in this work in the time-frequency domain. In particular, we focus on low-frequency evoked-related potential-type signals (ERPs) that are well described in the wavelet domain. Thereafter, we consider signals represented by multi-scale coefficients that have a matrix structure electrodes × coefficients. Moreover, EEG signals are seen as a mixture between the signal of interest that we want to extract and spontaneous activity (also called "background noise"), which is predominant. The main problem here is to distinguish signals from different experimental conditions (classes). In the binary case, we focus on the probabilistic approach to discriminant analysis, and Gaussian mixtures are used, describing in each class the signals in terms of fixed (mean) and random components. The latter, characterized by its covariance matrix, allows modeling of different sources of variability. The estimation of this matrix (and of its inverse) is essential for the implementation of the discriminant analysis and can deteriorate with high-dimensional data and/or small learning samples, which is the application framework of this thesis. We are interested in alternatives that are based on specific covariance model(s) and that allow the number of parameters to estimate to be reduced.
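One standard covariance model of the kind considered here is shrinkage estimation, which keeps the inverse covariance well conditioned when features outnumber training trials. A minimal sketch with scikit-learn's Ledoit-Wolf estimator on stand-in data follows; it illustrates the general idea, not the specific models developed in the thesis.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, LedoitWolf

rng = np.random.default_rng(2)
n, p = 50, 120          # few trials, many electrode-by-coefficient features
X = rng.normal(size=(n, p))

emp = EmpiricalCovariance().fit(X)
lw = LedoitWolf().fit(X)

# With n < p the sample covariance is singular, so its inverse -- needed by
# the discriminant function -- does not exist; the shrunk estimate is usable.
print("rank of sample covariance:", np.linalg.matrix_rank(emp.covariance_), "of", p)
print("Ledoit-Wolf shrinkage weight:", round(float(lw.shrinkage_), 3))
print("condition number of shrunk covariance:", np.linalg.cond(lw.covariance_))
# lw.precision_ holds the well-conditioned inverse for plug-in use.
```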
7

Marinósson, Sigurður Freyr. "Stability analysis of nonlinear systems with linear programming: a Lyapunov functions based approach." [S.l.]: [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=982323697.

8

LoPinto, Frank Anthony. "An Agent-Based Distributed Decision Support System Framework for Mediated Negotiation." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/27401.

Abstract:
Implementing an e-market for limited supply perishable asset (LiSPA) products is a problem at the intersection of online purchasing and distributed decision support systems (DistDSS). In this dissertation, we introduce and define LiSPA products, provide real-world examples, develop a framework for a distributed system to implement an e-market for LiSPA products, and provide proof-of-concept for the two major components of the framework. The DistDSS framework requires customers to instantiate agents that learn their preferences and evaluate products on their behalf. Accurately eliciting and modeling customer preferences in a quick and easy manner is a major hurdle for implementing this agent-based system. A methodology is developed for this problem using conjoint analysis and neural networks. The framework also contains a model component that is addressed in this work. The model component is presented as a mediator of customer negotiation that uses the agent-based preference models mentioned above and employs a linear programming model to maximize overall satisfaction of the total market.
Ph. D.
9

Pal, Anamitra. "PMU-Based Applications for Improved Monitoring and Protection of Power Systems." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/51093.

Abstract:
Monitoring and protection of power systems is a task that has manifold objectives. Amongst others, it involves performing data mining, optimizing available resources, assessing system stresses, and doing data conditioning. The role of PMUs in fulfilling these four objectives forms the basis of this dissertation. Classification and regression trees (CART) built using phasor data have been extensively used in power systems. The splits in CART are based on a single attribute or a combination of variables chosen by CART itself rather than the user. But as PMU data consist of complex numbers, both attributes should be considered simultaneously when making critical decisions. An algorithm is proposed here that expresses high dimensional, multivariate data as a single attribute in order to successfully perform splits in CART. In order to reap maximum benefits from placement of PMUs in the power grid, their locations must be selected judiciously. A gradual PMU placement scheme is developed here that ensures observability as well as protects critical parts of the system. In order to circumvent the computational burden of the optimization, this scheme is combined with a topology-based system partitioning technique to make it applicable to virtually any sized system. A power system is a dynamic being, and its health needs to be monitored at all times. Two metrics are proposed here to monitor the stress of a power system in real time. The angle difference between buses located across the network and the voltage sensitivity of buses lying in the middle are found to accurately reflect the static and dynamic stress of the system. The results indicate that by setting appropriate alert/alarm limits based on these two metrics, a more secure power system operation can be realized. A PMU-only linear state estimator is intrinsically superior to its predecessors with respect to performance and reliability. However, ensuring the quality of the data stream that leaves this estimator is crucial. A methodology for performing synchrophasor data conditioning and validation that fits neatly into the existing linear state estimation formulation is developed here. The results indicate that the proposed methodology provides a computationally simple, elegant solution to the synchrophasor data quality problem.
Ph. D.
10

Nguyen, Ngoc Anh. "Explicit robust constrained control for linear systems: analysis, implementation and design based on optimization." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLC012/document.

Abstract:
Piecewise affine (PWA) feedback control laws have received significant attention due to their relevance for the control of constrained systems and hybrid systems, and equally for the approximation of nonlinear control. However, they are associated with serious implementation issues. Motivated by the interest in this class of controllers, this thesis is mostly related to their analysis and design. The first part of this thesis aims to compute the robustness and fragility margins for a given PWA control law and a linear discrete-time system. More precisely, the robustness margin is defined as the set of linear time-varying systems such that the given PWA control law keeps the trajectories inside a given feasible set. From a different perspective, the fragility margin contains all the admissible variations of the control law coefficients such that the positive invariance of the given feasible set is still guaranteed. It will be shown that if the given feasible set is a polytope, then so are these robustness/fragility margins. The second part of this thesis focuses on the inverse optimality problem for the class of PWA controllers. Namely, the goal is to construct an optimization problem whose optimal solution is equivalent to the given PWA function. The methodology is based on convex lifting: an auxiliary 1-dimensional variable which enhances the convexity characterization of the recovered optimization problem. Accordingly, if the given PWA function is continuous, the optimal solution to this reconstructed optimization problem will be shown to be unique. Otherwise, if the continuity of this given PWA function is not fulfilled, this function will be shown to be one optimal solution to the recovered problem. In view of applications in linear model predictive control (MPC), it will be shown that any continuous PWA control law can be obtained from a linear MPC problem with a prediction horizon at most equal to 2 steps. Aside from the theoretical meaning, this result can also help facilitate the implementation of PWA control laws by avoiding storing the state space partition. Another utility of convex liftings, shown in the last part of this thesis, is to serve as a control Lyapunov function. Accordingly, convex lifting is deployed in robust control design for linear systems affected by bounded additive disturbances and polytopic uncertainties. Both implicit and explicit controllers can be obtained. This method can also guarantee recursive feasibility and robust stability. However, this control Lyapunov function is only defined over the maximal λ-contractive set for a given 0 ≤ λ < 1, which is known to be smaller than the maximal controllable set. Therefore, an extension of the above method to the N-step controllable set is presented. This method is based on a cascade of convex liftings where an auxiliary variable is used to emulate a Lyapunov function. Namely, this variable will be shown to be non-negative, to strictly decrease for the first N steps and to stay at 0 afterwards. Accordingly, robust stability is guaranteed.
11

Dantas, Régis Façanha. "Modelo de Risco e Decisão de Crédito Baseado em Estrutura de Capital com Informação Assimétrica." Universidade Federal do Ceará, 2006. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=1293.

Abstract:
This work researches the theory of enterprise financing: financial structure, the risk that the borrower (enterprise) fails to repay the loan, and bank credit. In view of the optimal capital structure, credit analysis examines factors that may lead to default in the repayment of a loan. As for risk management, the general kinds of risks are described, particularly credit risk, and the credit concession models are evaluated. The risk models use the financial statements of enterprises, which can be viewed as a signal, under the concept of asymmetric information; thus, the signal leads to a Nash equilibrium in this credit market. In the development of the statistical model, a logit model is used because of the problems with the functional form of the linear probability model, whose residuals are heteroscedastic and not normally distributed. Discriminant analysis, probit and logit are tested concurrently for comparison. Another important point in this work is the decision model, which incorporates interest rate scenarios to qualify the cutoff (acceptance limits) used in decision making. Factor analysis is used in the treatment and configuration of the independent variables, a tool not observed in the risk-modelling references of this work; it groups the independent variables so as to better capture the effects of the various economic-financial indicators. Having this purpose in mind, a statistical model was developed using logit regression with factor analysis on the variables and integer linear programming. This project aims at evaluating the models in use and proposing the adoption of new models, for the allowance for doubtful accounts, with the objective of measuring the risk related to customer financing and loan activities.
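As a rough sketch of the statistical half of such a proposal (not the thesis's model), the following fits a logit score of default probability and scans for a profit-maximizing cutoff; the data, unit economics and cutoff rule are assumptions, and the thesis's integer linear programming step with interest-rate scenarios is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))                      # financial-ratio factors
true_w = np.array([1.0, -0.8, 0.5, 0.0, 0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_w - 1.0))))   # 1 = default

logit = LogisticRegression(max_iter=1000).fit(X, y)
score = logit.predict_proba(X)[:, 1]                # estimated default probability

# Accept a loan when the default score is below the cutoff; pick the cutoff
# that maximizes profit under assumed unit economics.
margin_good, loss_default = 0.2, 1.0
cutoffs = np.linspace(0.05, 0.95, 19)
profit = [margin_good * ((score < c) & (y == 0)).sum()
          - loss_default * ((score < c) & (y == 1)).sum() for c in cutoffs]
best = cutoffs[int(np.argmax(profit))]
print("profit-maximizing cutoff:", round(float(best), 2))
```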
12

Sahlin, Jakob. "Line Loss Prediction Model Design at Svenska kraftnät: Line Loss Prediction Based on Regression Analysis on Line Loss Rates and Optimisation Modelling on Nordic Exchange Flows." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-193675.

Abstract:
Forecasting and estimating transmission line losses is a vital task in the daily operation and planning of the Swedish power system. The aim of this thesis is to design a new line loss prediction model at Svenska kraftnät (Svk), which provides an hourly forecast of the transmission line losses for the next day in the Swedish bidding areas (SE1-SE4). The final goal is to reduce the additional cost related to inaccurate predictions. The developed model is based on regression analysis of historical line losses and estimated exchange flows between the adjacent bidding areas computed by linear programming. Simulation results for 2015 show that it is, with rather simple estimates and assumptions, possible to increase the prediction accuracy by up to 27% compared with the existing method and to reduce the related costs in a similar way. The study also shows that future modelling has the potential to increase the precision even further and recommends a neural network approach as the next step.
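A minimal sketch of the regression stage described above: predict next-day hourly loss rates from estimated exchange flows and hour-of-day terms. All data, features and coefficients here are illustrative assumptions, not Svk's inputs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
hours = np.tile(np.arange(24), 30)                  # 30 historical days
flow = rng.normal(1000, 200, size=hours.size)       # estimated exchange flow (MW)
loss_rate = (0.02 + 1e-5 * flow                     # synthetic "measured" rates
             + 0.002 * np.sin(2 * np.pi * hours / 24)
             + rng.normal(0, 0.001, size=hours.size))

def features(flow, hours):
    """Flow plus harmonic hour-of-day terms."""
    return np.column_stack([flow,
                            np.sin(2 * np.pi * hours / 24),
                            np.cos(2 * np.pi * hours / 24)])

model = LinearRegression().fit(features(flow, hours), loss_rate)

tomorrow_flow = rng.normal(1000, 200, size=24)      # e.g. from an LP flow stage
forecast = model.predict(features(tomorrow_flow, np.arange(24)))
print("next-day hourly loss-rate forecast:", forecast.round(4))
```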
13

Jain, Sumit. "Exploiting contacts for interactive control of animated human characters." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44817.

Abstract:
One of the common research goals in disciplines such as computer graphics and robotics is to understand the subtleties of human motion and develop tools for recreating natural and meaningful motion. Physical simulation of virtual human characters is a promising approach since it provides a testbed for developing and testing control strategies required to execute various human behaviors. Designing generic control algorithms for simulating a wide range of human activities, which can robustly adapt to varying physical environments, has remained a primary challenge. This dissertation introduces methods for generic and robust control of virtual characters in an interactive physical environment. Our approach is to use the information of the physical contacts between the character and her environment in the control design. We leverage high-level knowledge of the kinematics goals and the interaction with the surroundings to develop active control strategies that robustly adapt to variations in the physical scene. For synthesizing intentional motion requiring long-term planning, we exploit properties of the physical model for creating efficient and robust controllers in an interactive framework. The control design leverages the reference motion capture data and the contact information with the environment for interactive long-term planning. Finally, we propose a compact soft contact model for handling contacts for rigid body virtual characters. This model aims at improving the robustness of existing control methods without adding any complexity to the control design and opens up possibilities for new control algorithms to synthesize agile human motion.
14

Huang, Yi-Chiang (黃奕強). "Probabilistic Linear Discriminant Analysis-Based Face Recognition using Factor Analysis." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/53835311445793311667.

Abstract:
Master's thesis, National Cheng Kung University, Department of Computer Science and Information Engineering (ROC academic year 102).
Recently, there has been significant progress in the study of face recognition; consequently, many face recognition applications have appeared. In order to implement face recognition in a real-time system, it is required to reduce the effect of variations for better performance. In my research, I focus on overcoming the problems of facial expression variations and illumination variations, and take Probabilistic Linear Discriminant Analysis as the core of the system. The concept is to model the complex distribution caused by those variations with the Probabilistic Linear Discriminant Analysis model. In fact, there will be a representation of images that is constant for the same subject, regardless of pose, illumination, and any other variations. We use these generative models to interpret the generative procedures of the data, and then take the most likely matching likelihood result to determine the individual matches. We investigate performance using the FERET, ORL, and Yale databases.
15

Chiang, Yueh-Hsuan. "Lighting Condition Class-Based Locally Linear Discriminant Analysis for Face Recognition." 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1707200522292600.

16

Chiang, Yueh-Hsuan (江岳軒). "Lighting Condition Class-Based Locally Linear Discriminant Analysis for Face Recognition." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/17645384797295834053.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering (ROC academic year 93).
We propose a novel method of face recognition under varying lighting conditions. Face images under different lighting conditions are non-linearly separable; image variation due to different lighting conditions is much more significant than that due to different personal identities. The basic idea of our approach is to find a set of lighting-condition-specific transformations which best separate the face images under varying lighting conditions. The proposed method has several steps: the first is to find the optimal set of lighting condition classes which best describes the lighting variation, and then we apply a novel soft classification of lighting condition to each training image. With the soft classification result, a set of lighting-condition-specific linear transformations is found to complete the recognition task. By virtue of soft classification and linear transformations, our approach not only avoids overfitting but also has low computational cost. With our method, face images under varying lighting conditions can be well separated. The proposed method has been tested on several well-known databases, and the experimental results show that the performance of our approach is better than those of conventional methods.
17

Wang, Deng-Shiang (王登祥). "Hybrid Linear Feature Extraction Based on Class-Mean and Covariance Discriminant Analysis." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/16464798957842580069.

Abstract:
Master's thesis, National Cheng Kung University, Department of Computer Science and Information Engineering, Master's and PhD program (ROC academic year 92).
In the past decades, discriminant analysis feature extraction (DAFE) has been successfully applied to a variety of applications for the purpose of data dimension reduction. Although the DAFE method is easy to use, ineffective feature extraction often occurs due to the weakness of its criterion. In this study, attention is focused on the problems caused by a design based only on class-mean discriminant information and its overemphasis upon relatively large distances between classes. We propose a hybrid linear feature extraction that uses both class-mean and covariance discriminant information simultaneously by combining two existing feature extraction methods, the approximate pairwise accuracy criterion (aPAC) and the common mean feature extraction (CMFE). By incorporating a weighting function into the criterion of DAFE, the aPAC can mitigate the problem of an overemphasis upon relatively large distances. A suboptimality problem emerges from a direct combination of aPAC and CMFE due to the difficulty in fusing their criteria. To overcome the problem, a parametric multiclass error estimation is developed as an intermediary for the combination of aPAC and CMFE. Based on the new parametric multiclass error estimation method, we have also developed an iterative gradient descent algorithm as fine-tuning for a feature set of a predetermined size. Experiments have shown that our proposed methods can take advantage of the complementary information provided by aPAC and CMFE, leading to satisfactory performance.
18

Chang, Shu-Yao (張書銚). "A Human Iris Recognition System Based on Direct Linear Discriminant Analysis and the Nearest Feature Classifiers." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/10993447523537954203.

Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering (ROC academic year 92).
Biometric recognition systems perform personal identification with physiological characteristics, which usually include faces, irises, retinas, hand textures, and fingerprints. Irises are not easy to copy and do not change over a lifetime; moreover, everyone's irises are different. Accordingly, irises have a high degree of uniqueness and stability, and they are well suited to biometric recognition. In this thesis, we present a human iris recognition system with a high recognition rate. The iris recognition system consists of three major processing phases. First, the system captures images of human eyes from a web camera and obtains iris images from them. We further manipulate the iris images using digital image processing techniques, so that the resulting iris images are suited to recognition. Second, the system builds feature vectors from the iris images. Before extraction of feature vectors, we must unwrap the iris images; in this phase, the problem of rotation invariance is solved. We then adopt direct linear discriminant analysis to extract feature vectors such that the distance between feature vectors of different classes is largest while the distance between those in the same class is smallest. Finally, the system employs the nearest feature classifiers to discriminate the feature vectors. To verify the effectiveness of the proposed methods, we realize a human iris recognition system. The experimental results reveal that the recognition rate achieves 96.47% in the case of fewer sampling feature vectors, whereas it can attain 98.50% if more sampling feature vectors are added to each class.
19

Li, Cheng-Hsuan (李政軒). "A Clustering Algorithm Based on Fuzzy-Type Linear Discriminant Analysis and Spatial-Contextual Support Vector Machines." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/78256999129224272289.

Abstract:
PhD dissertation, National Chiao Tung University, Institute of Electrical and Control Engineering (ROC academic year 100).
Statistical learning seeks to develop computer algorithms that recognize complex patterns and make decisions based on empirical data automatically. Two major issues are clustering and classification. Clustering organizes patterns into sensible clusters so that patterns in the same cluster are similar in some sense, whereas classification identifies the categories to which new patterns belong based on an available training set of data containing patterns of known categories. This thesis introduces a fuzzy-based clustering algorithm and a spatial-contextual classifier. The fuzzy-based clustering defines within- and between-cluster scatter matrices of a fuzzy-type linear discriminant analysis, and the clustering results are based on the Fisher criterion: the proposed clustering algorithm minimizes the within-cluster information and simultaneously maximizes the between-cluster information. For the classification part, a spatial-contextual term is used to modify the decision function and constraints of a support vector machine. Experimental results show that the proposed methods achieve good clustering and classification performance on famous real data sets.
20

Lai, Chun-Yen (賴君彥). "The study of Entropy Based Classification and Linear Discriminant Analysis on TW50 and mid-cap 100 the selection for the portfolio." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2496wj.

Abstract:
Master's thesis, Ling Tung University, Executive Master of Business Administration in-service program (ROC academic year 105).
There are many factors that affect the stock market: fundamentals, technicals, economics, chips, politics, and so on. This study considers the influence of politics before and after a change of ruling party in order to find the best portfolio among the largest companies in the Taiwan stock market. First, LDA (Linear Discriminant Analysis) is used to develop the model on the TW-50 data, with the Mid-Cap 100 used as testing data. Then, entropy-based classification is used to find the governing factors among the selected attributes. Afterwards, the core attributes are used to refit the model on the TW-50 data and to analyze the training sample. In the two aforementioned assessments, we also calculated weighted and average allocation methods. Four cases are drawn from the above and a rational analysis is presented.
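The entropy-based screening step mentioned above amounts to ranking candidate attributes by information gain against the class label. A small self-contained sketch follows; the attributes and data are placeholders, not the TW-50 variables.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(feature, labels):
    """Entropy reduction from splitting on a discrete-valued attribute."""
    gain = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        gain -= mask.mean() * entropy(labels[mask])
    return gain

rng = np.random.default_rng(5)
y = rng.integers(0, 2, size=200)                         # up / down next period
informative = np.where(rng.random(200) < 0.8, y, 1 - y)  # tracks y 80% of the time
noise = rng.integers(0, 3, size=200)                     # unrelated attribute

print("gain(informative attribute):", round(information_gain(informative, y), 3))
print("gain(noise attribute):      ", round(information_gain(noise, y), 3))
```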
21

Wei, Yu-Hua (魏佑樺). "The Study of Entropy-based Classification and Linear Discriminant Analysis on Computer and Electronic Components Industry of Stock Market for Portfolio and Return Rate." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/gj9rv7.

Abstract:
Master's thesis, Ling Tung University, Master's program in Information Management (ROC academic year 107).
The computer device and electronic component industry is a midstream segment of the technology industry and accounts for a considerable proportion of Taiwan's technology electronics sector. This study therefore uses computer peripheral and electronic component stocks as an example, predicting and analyzing stock prices and trends on the basis of technical analysis. The study uses entropy-based classification as preprocessing to select the influential financial variables of the companies that affect the decision. Linear Discriminant Analysis (LDA) is then used as the mathematical-statistical method, with the sample data fed into it; the resulting linear discriminants provide information that is easier for decision makers to use. The analysis, which combines entropy-based classification with Linear Discriminant Analysis (LDA), aims to find the accuracy rate and the return on investment (ROI).
22

Chen, Hong-Wei (陳泓瑋). "The Study of Hyperspectral Imaging on Paddy Rice Image Classification through Particle Swarm Optimization with Density-Based Spatial Clustering of Applications with Noise and Linear Discriminant Analysis." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/27h8zk.

Abstract:
Master's thesis, Ling Tung University, Master's program in Information Management (ROC academic year 104).
In the past, image classification has generally used supervised learning classifiers with multi-spectral images, which is the setting reconsidered in this study. However, supervised learning requires considerable manpower, material and time for data collection. On the other hand, because of the low spatial and spectral resolution of multi-spectral imagery, spectrally similar surface objects cannot be accurately distinguished. If unsupervised learning on hyperspectral image information can be analyzed and judged accurately, it can be adopted as a substitute to reduce the time spent. Hyperspectral images carry a wealth of spectral information, so how to filter it for image classification is an important issue. This study focuses on how to select important spectral information from hyperspectral imaging. Paddy field images are classified with supervised linear discriminant analysis and an unsupervised density-based clustering algorithm. Principal component analysis is used as pre-processing, and the study designs the following four parallel cases: (a) multi-spectral and hyperspectral imagery with linear discriminant analysis; (b) multi-spectral and hyperspectral imagery with a density-based clustering algorithm; (c) multi-spectral and hyperspectral principal component analysis with linear discriminant analysis; (d) multi-spectral and hyperspectral principal components with a density-based clustering algorithm. The results are compared with each other via error matrices, and theme maps are drawn.
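As a rough parallel of cases (c) and (d) above (not the thesis's experiments), the following scikit-learn sketch applies PCA preprocessing and then a supervised LDA and an unsupervised DBSCAN on synthetic stand-in spectra; dimensions and DBSCAN parameters are tuned only to this toy data.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_pixels, n_bands = 600, 100
y = rng.integers(0, 2, size=n_pixels)                # paddy vs. non-paddy pixels
X = rng.normal(size=(n_pixels, n_bands)) + 3.0 * y[:, None]  # class-shifted spectra

Z = PCA(n_components=5).fit_transform(X)             # spectral pre-processing

# (c) supervised: linear discriminant analysis on the principal components.
Ztr, Zte, ytr, yte = train_test_split(Z, y, random_state=0)
lda = LinearDiscriminantAnalysis().fit(Ztr, ytr)
print("LDA test accuracy:", lda.score(Zte, yte))

# (d) unsupervised: density-based clustering on the same components.
labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(Z)
print("DBSCAN cluster labels found:", sorted(set(labels)))
```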
23

Marinósson, Sigurður Freyr. "Stability analysis of nonlinear systems with linear programming: a Lyapunov functions based approach." 2002. http://d-nb.info/982323697/34.

24

Wang, Xiaohong. "Storey-Based Stability Analysis for Multi-Storey Unbraced Frames Subjected to Variable Loading." Thesis, 2008. http://hdl.handle.net/10012/3823.

Abstract:
For decades, structural engineers have been using various conventional design approaches for assessing the strength and stability of framed structures for various loads. Today, engineers are still designing without some critical information to ensure that their stability assessment yields a safe design for the life of the structure with consideration for extreme loads. Presented in this thesis is new critical information provided from the study of stability analysis and design of steel framed structures accounting for extreme loads associated with load patterns that may be experienced during their lifetime. It is conducted in five main parts. A literature survey is first carried out, reviewing previous research on frame stability analysis, including the consideration of initial geometric imperfections, and evaluating research on the analysis and design of increasingly used cold-formed steel (CFS) storage racks. Secondly, the elastic buckling analysis of single-storey unbraced steel frames subjected to variable loading is extended to multi-storey unbraced steel frames. The formulations and procedures are developed for multi-storey unbraced steel frames subjected to variable loading using the storey-based buckling method. Numerical examples are presented as comparisons to the conventional proportional loading approach and to demonstrate the effect of connection rigidity on the maximum and minimum frame-buckling loads. Thirdly, the lateral stiffness of axially loaded columns in unbraced frames accounting for initial geometric imperfections is derived based on storey-based buckling. A practical method of evaluating the column effective length factor with explicit accounting for initial geometric imperfections is developed and examined using numerical examples. The fourth part is an investigation of the stability of multi-storey unbraced steel frames under variable loading accounting for initial geometric imperfections. Finally, the stability of CFS storage racks is studied. The effective length factor of CFS storage racks, accounting for the semi-rigid nature of the beam-to-column connections of such structures, is evaluated based on experimental data. A parametric study on maximum and minimum frame-buckling loads with and without accounting for initial geometric imperfections is conducted. The proposed stability analysis of multi-storey unbraced frames subjected to variable loading takes into consideration the volatility of live loads during the life span of structures and the buckling characteristics of the frames under any possible load pattern. From the proposed method, the maximum and minimum frame-buckling loads together with their associated load patterns provide critical information to clearly define the stability capacities of frames under extreme loads. This critical information concerning the stability of structures is generally not available from a conventional proportional loading analysis. The study ends with an appropriate set of conclusions.