Academic literature on the topic 'FEATURE OPTIMIZATION METHODS'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'FEATURE OPTIMIZATION METHODS.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Dissertations / Theses on the topic "FEATURE OPTIMIZATION METHODS"

1

Lin, Lei. "Optimization methods for inventive design." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD012/document.

Full text
Abstract:
This thesis deals with inventive problems for which the solutions produced by optimization methods do not meet the objectives of the problems to be solved. For their resolution, the problems thus defined exploit a problem model that extends the classical TRIZ model into a canonical form called the "generalized system of contradictions." This research implements a resolution process based on the simulation-optimization-invention loop, allowing both optimization and invention methods to be used. More precisely, it models the extraction of generalized contradictions from simulation data as combinatorial optimization problems and proposes algorithms that yield all the solutions to these problems.
APA, Harvard, Vancouver, ISO, and other styles
2

Zanco, Philip. "Analysis of Optimization Methods in Multisteerable Filter Design." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2227.

Full text
Abstract:
The purpose of this thesis is to study and investigate a practical and efficient implementation of corner orientation detection using multisteerable filters. First, the practical theory involved in applying multisteerable filters to corner orientation estimation is presented. Methods to improve the efficiency with which multisteerable corner filters are applied to images are then investigated. Prior research in this area presented an optimization equation for determining the best match of corner orientations in images; however, little research has been done on optimization techniques for solving this equation. Optimization techniques for finding the maximum response of a similarity function, which measures how similar a corner feature is to a multi-oriented corner template, are also explored and compared in this research.
APA, Harvard, Vancouver, ISO, and other styles
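The orientation-estimation step the abstract describes reduces to maximizing a similarity response over the orientation angle. A minimal sketch of one such optimization strategy, using a hypothetical second-order steered response (a trigonometric polynomial with its peak placed at 1.2 rad; the response shape and coefficients are illustrative assumptions, not from the thesis):

```python
import numpy as np

# Hypothetical steered response: a trigonometric polynomial in theta whose
# maximum marks the corner orientation (peak placed at 1.2 rad).
def response(theta):
    return 1.0 + 0.8 * np.cos(theta - 1.2) + 0.3 * np.cos(2.0 * (theta - 1.2))

# Coarse grid to bracket the global maximum: the response is periodic and
# can be multimodal, so a purely local method is unsafe on its own.
grid = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
k = int(np.argmax(response(grid)))
lo, hi = grid[k] - 0.1, grid[k] + 0.1

# Golden-section search then refines the bracketed maximum.
invphi = (np.sqrt(5.0) - 1.0) / 2.0
for _ in range(40):
    a = hi - invphi * (hi - lo)
    b = lo + invphi * (hi - lo)
    if response(a) < response(b):
        lo = a
    else:
        hi = b

theta_hat = 0.5 * (lo + hi)
print(theta_hat)
```

The grid-then-refine structure mirrors the trade-off the thesis studies: cheap global bracketing followed by a fast local method.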
3

Monrousseau, Thomas. "Développement du système d'analyse des données recueillies par les capteurs et choix du groupement de capteurs optimal pour le suivi de la cuisson des aliments dans un four." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0054.

Full text
Abstract:
In a world where all household appliances are becoming connected and intelligent, French manufacturers identified the need to create innovative ovens capable of monitoring the core cooking state of fish and meat without a contact sensor. This thesis is set in that context and is divided into two main parts. The first is a feature-selection phase over a set of measurements from specific laboratory sensors, enabling a supervised classification algorithm over three cooking states: under-baked, well-baked, and over-baked. A selection method based on fuzzy logic was applied to greatly reduce the number of variables to monitor.
The second part concerns on-line monitoring of the cooking state by several methods: a classification approach over ten core cooking states, the resolution of a discretized heat equation, and the development of a soft sensor based on artificial neural networks trained on cooking experiments, which reconstructs the core-temperature signal of the food from measurements available on-line. These algorithms were implemented on the microcontroller of a prototype version of a new oven in order to be tested and validated in real use cases.
APA, Harvard, Vancouver, ISO, and other styles
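One of the monitoring methods the abstract mentions is the resolution of a discretized heat equation. A minimal 1-D explicit finite-difference sketch of that idea, simulating the temperature through a food item heated from both sides; all physical values (geometry, diffusivity, boundary temperature) are illustrative assumptions, not figures from the thesis:

```python
import numpy as np

# 1-D explicit finite-difference heat equation: simulate the temperature
# profile through a slab heated from both sides.
n, dx, dt, a = 21, 1e-3, 0.05, 1.4e-7  # nodes, grid step (m), time step (s), diffusivity (m^2/s)
r = a * dt / dx**2                     # explicit scheme is stable for r <= 0.5
assert r <= 0.5

T = np.full(n, 20.0)                   # initial temperature, deg C
for _ in range(12000):                 # 600 s of simulated heating
    T[0] = T[-1] = 180.0               # oven-side boundary condition
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(round(float(T[n // 2]), 1))      # core temperature after 10 minutes
```

The interior update is the standard three-point Laplacian stencil; the stability bound on r is why the time step must shrink quadratically with the grid step.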
4

Xiong, Xuehan. "Supervised Descent Method." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/652.

Full text
Abstract:
In this dissertation, we focus on solving Nonlinear Least Squares problems using a supervised approach. In particular, we developed a Supervised Descent Method (SDM), performed a thorough theoretical analysis, and demonstrated its effectiveness on optimizing analytic functions and on four real-world applications: Inverse Kinematics, Rigid Tracking, Face Alignment (frontal and multi-view), and 3D Object Pose Estimation. In Rigid Tracking, SDM was able to take advantage of more robust features, such as HoG and SIFT; these non-differentiable image features were not considered in previous work, which relied on gradient-based methods for optimization. In Inverse Kinematics, where we minimize a non-convex function, SDM achieved significantly better convergence than gradient-based approaches. In Face Alignment, SDM achieved state-of-the-art results. Moreover, it was extremely computationally efficient, which makes it applicable to many mobile applications. In addition, we provided a unified view of several popular sequential prediction methods, including SDM, and reformulated them as a sequence of function compositions. Finally, we suggested some future research directions on SDM and sequential prediction.
APA, Harvard, Vancouver, ISO, and other styles
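The core idea of SDM, learning a cascade of linear maps from feature residuals to parameter updates instead of differentiating the features, can be sketched on a toy nonlinear least squares problem. The scalar observation function h, the noise level, and all sizes here are hypothetical choices for illustration, not details from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: recover x from the observation y = h(x); h is a hypothetical
# nonlinear scalar "feature" function standing in for e.g. HoG descriptors.
def h(x):
    return np.sin(x)

# Training data: ground-truth states and perturbed starting points.
x_true = rng.uniform(-1.0, 1.0, size=(500, 1))
x_start = x_true + rng.normal(0.0, 0.2, size=x_true.shape)

# Learn a cascade of linear maps R_k from feature residuals to updates.
maps = []
xk = x_start.copy()
for k in range(5):
    dx = x_true - xk                    # desired parameter update
    dphi = h(x_true) - h(xk)            # observed feature residual
    Rk, *_ = np.linalg.lstsq(dphi, dx, rcond=None)  # fit dx ~ dphi @ R_k
    maps.append(Rk)
    xk = xk + dphi @ Rk                 # apply the learned descent step

# At test time only y = h(x_test) is observed; apply the learned cascade.
x_test = rng.uniform(-1.0, 1.0, size=(100, 1))
y_test = h(x_test)
x_est = x_test + rng.normal(0.0, 0.2, size=x_test.shape)
for Rk in maps:
    x_est = x_est + (y_test - h(x_est)) @ Rk

print(float(np.mean(np.abs(x_est - x_test))))  # mean error after 5 learned steps
```

No derivative of h is ever taken, which is exactly what lets SDM work with non-differentiable features.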
5

Lösch, Felix. "Optimization of variability in software product lines: a semi-automatic method for visualization, analysis, and restructuring of variability in software product lines." Berlin: Logos-Verlag, 2008. http://d-nb.info/992075904/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bai, Bing. "A Study of Adaptive Random Features Models in Machine Learning based on Metropolis Sampling." Thesis, KTH, Numerisk analys, NA, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293323.

Full text
Abstract:
Artificial neural network (ANN) is a machine learning approach in which the parameters, i.e., frequency parameters and amplitude parameters, are learnt during the training process. The random features model is a special case of an ANN: its structure is the same as an ANN's, but its parameters are learnt differently. In a random features model, the amplitude parameters are learnt during training while the frequency parameters are sampled from some distribution. If the frequency distribution is well chosen, both models can approximate data well. Adaptive random Fourier features with Metropolis sampling is an enhanced random Fourier features model that selects an appropriate frequency distribution adaptively. This thesis studies Rectified Linear Unit and sigmoid features and combines them with the adaptive idea to generate two further adaptive random features models. The results show that, using the particular set of hyper-parameters, the adaptive random Rectified Linear Unit features model can also approximate the data relatively well, though the adaptive random Fourier features model performs slightly better.
APA, Harvard, Vancouver, ISO, and other styles
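The non-adaptive baseline this thesis builds on samples frequencies from a fixed distribution and trains only the amplitudes; the adaptive variant would replace the fixed Gaussian sampling of omega with a Metropolis walk over the frequencies. A minimal sketch of the baseline, where the target function, frequency distribution, and feature count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical target to approximate on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 400)
y = np.sin(3.0 * x)

K = 100                                  # number of random features
omega = rng.normal(0.0, 2.0, K)          # frequencies: sampled, never trained
b = rng.uniform(0.0, 2.0 * np.pi, K)     # random phases

def features(x):
    return np.cos(np.outer(x, omega) + b)

# Only the amplitudes beta are trained (ridge-regularized least squares).
Phi = features(x)
beta = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(K), Phi.T @ y)

y_hat = Phi @ beta
print(float(np.sqrt(np.mean((y_hat - y) ** 2))))  # RMSE of the fit
```

Because the amplitude fit is a convex least squares problem, all the difficulty sits in choosing the frequency distribution, which is what the Metropolis-based adaptation addresses.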
7

Sasse, Hugh Granville. "Enhancing numerical modelling efficiency for electromagnetic simulation of physical layer components." Thesis, De Montfort University, 2010. http://hdl.handle.net/2086/4406.

Full text
Abstract:
The purpose of this thesis is to present solutions to overcome several key difficulties that limit the application of numerical modelling in communication cable design and analysis. In particular, specific limiting factors are that simulations are time consuming, and the process of comparison requires skill and is poorly defined and understood. When much of the process of design consists of optimisation of performance within a well defined domain, the use of artificial intelligence techniques may reduce or remove the need for human interaction in the design process. The automation of human processes allows round-the-clock operation at a faster throughput. Achieving a speedup would permit greater exploration of the possible designs, improving understanding of the domain. This thesis presents work that relates to three facets of the efficiency of numerical modelling: minimizing simulation execution time, controlling optimization processes and quantifying comparisons of results. These topics are of interest because simulation times for most problems of interest run into tens of hours. The design process for most systems being modelled may be considered an optimisation process in so far as the design is improved based upon a comparison of the test results with a specification. Development of software to automate this process permits the improvements to continue outside working hours, and produces decisions unaffected by the psychological state of a human operator. Improved performance of simulation tools would facilitate exploration of more variations on a design, which would improve understanding of the problem domain, promoting a virtuous circle of design. The minimization of execution time was achieved through the development of a Parallel TLM Solver which did not use specialized hardware or a dedicated network. 
Its design was novel because it was intended to operate on a network of heterogeneous machines in a manner which was fault tolerant, and it included a means to reduce the vulnerability of simulated data without encryption. Optimisation processes were controlled by genetic algorithms and particle swarm optimisation, which were novel applications in communication cable design. The work extended the range of cable parameters, reducing conductor diameters for twisted pair cables and reducing the optical coverage of screens for a given shielding effectiveness. Work on the comparison of results introduced 'Colour maps' as a way of displaying three scalar variables over a two-dimensional surface, and comparisons were quantified by extending 1D Feature Selective Validation (FSV) to two dimensions, using an ellipse-shaped filter, in such a way that it could be extended to higher dimensions. In so doing, some problems with FSV were detected, and suggestions for overcoming these are presented, such as handling the special case of zero-valued DC signals. A re-description of Feature Selective Validation using Jacobians and tensors is proposed, in order to facilitate its implementation in higher-dimensional spaces.
APA, Harvard, Vancouver, ISO, and other styles
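Particle swarm optimisation, one of the two optimisers applied to cable design above, follows a simple velocity-update rule mixing inertia, a pull toward each particle's personal best, and a pull toward the global best. A minimal sketch on a stand-in objective (the sphere function; in the thesis the objective would be a full electromagnetic simulation, and the parameter values here are conventional defaults, not the thesis's settings):

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in objective: minimize the sphere function in 3 dimensions.
def objective(x):
    return float(np.sum(x ** 2))

n_particles, dim = 20, 3
pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration weights
for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Velocity update: inertia + cognitive pull + social pull.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(objective(gbest))
```

PSO needs only objective evaluations, no gradients, which is why it suits expensive black-box simulations like the TLM runs described above.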
8

YADAV, JYOTI. "A STUDY OF FEATURE OPTIMIZATION METHODS FOR LUNG CANCER DETECTION." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19156.

Full text
Abstract:
Lung cancer remains one of the world's deadliest diseases, and early diagnosis can prevent a large number of deaths. Classifiers play an important role in detecting lung cancer by means of machine learning algorithms together with CAD-based image processing techniques. For a classifier to be accurate, a good feature set extracted from the images is needed: image features capture the information relevant for identifying the disease and are the key parameters determining the results. Typically, features are extracted with techniques such as GLCM, or datasets already provide features of lung cancer images obtained with such techniques. Because the number of image features is large, dimension, storage, speed, time, and performance all have an impact on the results for different classifier models. An optimization method such as feature selection is one solution, identifying the relevant features in datasets that contain pre-computed or extracted features. The lung cancer database used here has 32 case records with 57 unique characteristics; it was compiled by Hong and Young and is indexed in the University of California Irvine repository. The experimental materials include extracted medical information and X-ray data. The data describe three categories of problematic lung malignancies, each labelled with an integer value ranging from 0 to 3. A new strategy for identifying effective features of lung cancer, implemented in MATLAB 2022a and employing a Genetic Algorithm, is proposed in this work. Using a simplified 8-feature SVM classifier and a four-feature KNN, 100% accuracy is achieved. The new method is compared to an existing hyper-heuristic method for feature selection, and through its higher precision, the proposed technique performs better.
As a result, the proposed approach is recommended for determining effective disease symptoms.
APA, Harvard, Vancouver, ISO, and other styles
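A genetic algorithm over feature subsets, as used in this thesis together with SVM and KNN classifiers, can be sketched with a bitmask encoding. Here the data are synthetic and the fitness is the holdout accuracy of a plain 1-NN classifier; both are stand-ins for the thesis's dataset and classifiers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in dataset: 3 informative features, 9 pure noise.
n, d = 120, 12
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

def fitness(mask, X, y):
    """Holdout accuracy of a 1-NN classifier restricted to the masked features."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    train, test = slice(0, 80), slice(80, None)
    dists = np.linalg.norm(Xs[test, None, :] - Xs[None, train, :], axis=2)
    pred = y[train][np.argmin(dists, axis=1)]
    return float(np.mean(pred == y[test]))

# GA over feature bitmasks: tournament selection, uniform crossover,
# bit-flip mutation, with the best individual carried over (elitism).
pop = rng.random((20, d)) < 0.5
for gen in range(30):
    scores = np.array([fitness(ind, X, y) for ind in pop])
    new_pop = [pop[np.argmax(scores)]]
    while len(new_pop) < len(pop):
        i, j = rng.choice(len(pop), 2, replace=False)
        p1 = pop[i] if scores[i] >= scores[j] else pop[j]
        i, j = rng.choice(len(pop), 2, replace=False)
        p2 = pop[i] if scores[i] >= scores[j] else pop[j]
        cross = rng.random(d) < 0.5        # uniform crossover
        child = np.where(cross, p1, p2)
        child ^= rng.random(d) < 0.05      # bit-flip mutation
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
print(best.astype(int), fitness(best, X, y))
```

The wrapper structure, where the classifier's own accuracy is the fitness, is what distinguishes this from filter-style feature scoring.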
9

Salehipour, Amir. "Combinatorial optimization methods for the (alpha,beta)-k Feature Set Problem." Thesis, 2019. http://hdl.handle.net/1959.13/1400399.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD). This PhD thesis proposes novel and efficient combinatorial optimization-based solution methods for the (alpha,beta)-k Feature Set Problem, a combinatorial optimization-based feature selection approach proposed in 2004 with several applications in computational biology and bioinformatics. The (alpha,beta)-k Feature Set Problem aims to select a minimum-cost set of features such that similarities between entities of the same class and differences between entities of different classes are maximized. The solution methods developed in this research include heuristic and exact methods. While the research focuses on exact methods, we also developed mathematical properties, heuristics, and problem-driven local searches, and applied them at certain stages of the exact methods in order to guide exact solvers and deliver high-quality solutions. The motivation stems from the computational difficulty exact solvers face in providing good-quality solutions for the (alpha,beta)-k Feature Set Problem. Our proposed heuristics deliver very good-quality solutions, including optimal ones, in a reasonable amount of time.
The major contributions of the presented research are: 1) investigating and exploring, for the first time, mathematical properties and characteristics of the (alpha,beta)-k Feature Set Problem, and utilizing them to design and develop algorithms and methods for solving large instances of the problem; 2) extending the basic modeling, algorithms, and solution methods to the weighted variant of the problem (where features have a cost); and 3) developing algorithms and solution methods capable of solving large instances in a reasonable amount of time (prior to this research, many of those instances posed a computational challenge for exact solvers). To this end, we demonstrated the usefulness of the developed algorithms and methods by applying them to three sets comprising 346 instances, including real-world, weighted, and randomly generated instances, and obtaining high-quality solutions in a short time. To the best of our knowledge, the algorithms developed in this research have obtained the best results for the (alpha,beta)-k Feature Set Problem. In particular, they outperform state-of-the-art algorithms and exact solvers, and perform very competitively on large instances: they always deliver feasible solutions and obtain new best solutions for a majority of large instances in a reasonable amount of time.
APA, Harvard, Vancouver, ISO, and other styles
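For intuition, the alpha-cover part of the (alpha,beta)-k Feature Set Problem, requiring every between-class pair of entities to be distinguished by at least alpha selected features, admits a simple greedy heuristic. This is a sketch on toy binary data; the thesis's actual methods are exact, combinatorial-optimization-based, and far more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy binary feature matrix: 8 entities in two classes, 10 features.
X = rng.integers(0, 2, size=(8, 10))
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Alpha-cover requirement: each between-class pair must be distinguished
# (feature values differ) by at least alpha selected features.
alpha = 1
pairs = [(i, j) for i in range(8) for j in range(8)
         if labels[i] != labels[j] and i < j]

selected = []
coverage = {p: 0 for p in pairs}
while any(c < alpha for c in coverage.values()):
    # Greedy: pick the feature distinguishing the most uncovered pairs.
    gains = []
    for f in range(10):
        gain = sum(1 for (i, j) in pairs
                   if coverage[(i, j)] < alpha and X[i, f] != X[j, f])
        gains.append(gain if f not in selected else -1)
    best = int(np.argmax(gains))
    if gains[best] <= 0:
        break  # remaining pairs cannot be distinguished by any feature
    selected.append(best)
    for (i, j) in pairs:
        if X[i, best] != X[j, best]:
            coverage[(i, j)] += 1

print(selected)
```

This is the classic greedy set-cover scheme; the exact methods in the thesis instead solve the full problem to optimality, which greedy cannot guarantee.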
10

Tayal, Aditya. "Effective and Efficient Optimization Methods for Kernel Based Classification Problems." Thesis, 2014. http://hdl.handle.net/10012/8334.

Full text
Abstract:
Kernel methods are a popular choice for solving a number of problems in statistical machine learning. In this thesis, we propose new methods for two important kernel based classification problems: 1) learning from highly unbalanced large-scale datasets and 2) selecting a relevant subset of input features for a given kernel specification. The first problem is known as the rare class problem, which is characterized by a highly skewed or unbalanced class distribution. Unbalanced datasets can introduce significant bias in standard classification methods. In addition, due to the growth of data in recent years, large datasets with millions of observations have become commonplace. We propose an approach that addresses both the bias and the computational complexity of rare class problems by optimizing the area under the receiver operating characteristic curve and by using a rare class only kernel representation, respectively. We justify the proposed approach theoretically and computationally. Theoretically, we establish an upper bound on the difference between selecting a hypothesis from a reproducing kernel Hilbert space and from a hypothesis space which can be represented using a subset of kernel functions. This bound shows that for a fixed number of kernel functions, it is optimal to first include functions corresponding to rare class samples. We also discuss the connection of a subset kernel representation with the Nystrom method for a general class of regularized loss minimization methods. Computationally, we illustrate that the rare class representation produces statistically equivalent test error results on highly unbalanced datasets compared to using the full kernel representation, but with significantly better time and space complexity. Finally, we extend the method to rare class ordinal ranking, and apply it to a recent public competition problem in health informatics. The second problem studied in the thesis is known in the literature as the feature selection problem.
Embedding feature selection in kernel classification leads to a non-convex optimization problem. We specify a primal formulation and solve the problem using a second-order trust region algorithm. To improve efficiency, we use the two-block Gauss-Seidel method, breaking the problem into a convex support vector machine subproblem and a non-convex feature selection subproblem. We reduce possibility of saddle point convergence and improve solution quality by sharing an explicit functional margin variable between block iterates. We illustrate how our algorithm improves upon state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
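The combination described above, a rare-class-only kernel representation with AUC optimization, can be sketched as follows: kernel columns are kept only for rare-class samples, and a pairwise hinge surrogate of AUC is minimized by gradient descent. The data, kernel bandwidth, step sizes, and surrogate loss here are illustrative assumptions, not the thesis's formulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Unbalanced toy data: 200 majority samples (class 0), 15 rare (class 1).
X0 = rng.normal(0.0, 1.0, size=(200, 2))
X1 = rng.normal(2.0, 1.0, size=(15, 2))

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Rare-class-only representation: kernel columns only for rare samples,
# so the hypothesis is f(x) = sum_j alpha_j k(x, x1_j).
K0 = rbf(X0, X1)                 # majority samples vs rare centers
K1 = rbf(X1, X1)                 # rare samples vs rare centers

alpha = np.zeros(len(X1))
lr, lam = 0.1, 1e-3
for _ in range(200):
    # Pairwise hinge surrogate of AUC on score differences f(x+) - f(x-).
    diff = (K1 @ alpha)[:, None] - (K0 @ alpha)[None, :]   # shape (15, 200)
    active = diff < 1.0
    grad = lam * alpha
    grad -= (K1.T @ active.sum(axis=1) - K0.T @ active.sum(axis=0)) / active.size
    alpha -= lr * grad

# Empirical AUC: fraction of (rare, majority) pairs ranked correctly.
auc = float(np.mean((K1 @ alpha)[:, None] > (K0 @ alpha)[None, :]))
print(auc)
```

Note the space saving the representation buys: the kernel matrices have only as many columns as there are rare samples, rather than one per training point.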
