Theses on the topic "Evolutionary programming (Computer science) Mathematical optimization"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

See the top 25 dissertations (graduate and doctoral theses) for research on the topic "Evolutionary programming (Computer science) Mathematical optimization".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf file and read the online abstract (summary) of the work, if it is included in the metadata.

Browse theses from many scientific areas and compile a correct bibliography.

1

Service, Travis. "Co-optimization: a generalization of coevolution". Diss., Rolla, Mo. : Missouri University of Science and Technology, 2008. http://scholarsmine.mst.edu/thesis/pdf/Service_09007dcc804e2264.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2008.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed April 26, 2008) Includes bibliographical references (p. 65-68).
APA, Harvard, Vancouver, ISO, and other styles
2

Gadiraju, Sriphani Raju. "Modified selection mechanisms designed to help evolution strategies cope with noisy response surfaces". Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-07022003-164112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Muthuswamy, Shanthi. "Discrete particle swarm optimization algorithms for orienteering and team orienteering problems". Diss., Online access via UMI:, 2009.

Search for the full text
APA, Harvard, Vancouver, ISO, and other styles
4

Khan, Salman A. "Design and analysis of evolutionary and swarm intelligence techniques for topology design of distributed local area networks". Pretoria: [S.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-09272009-153908/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wong, Yin-cheung Eugene, and 黃彥璋. "A hybrid evolutionary algorithm for optimization of maritime logistics operations". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44526763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Doddapaneni, Srinivas P. "Automatic dynamic decomposition of programs on distributed memory machines". Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/8158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Abdel, Raheem Mohamed. "Electimize: a new evolutionary algorithm for optimization with applications in construction engineering". Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4833.

Full text
Abstract:
Optimization is considered an essential step in reinforcing the efficiency of performance and economic feasibility of construction projects. In the past few decades, evolutionary algorithms (EAs) have been widely utilized to solve various types of construction-related optimization problems due to their efficiency in finding good solutions in relatively short time periods. However, in many cases, these existing evolutionary algorithms failed to identify the optimal solution to several optimization problems. As such, it is deemed necessary to develop new approaches that help identify better-quality solutions. This doctoral research presents the development of a new evolutionary algorithm, named "Electimize," that is based on the simulation of the flow of electric current in the branches of an electric circuit. The main motive in this research is to provide the construction industry with a robust optimization tool that overcomes some of the shortcomings of existing EAs. In solving optimization problems using Electimize, a number of wires (solution strings) composed of a number of segments are fabricated randomly. Each segment corresponds to a decision variable in the objective function. The wires are virtually connected in parallel to a source of electricity to represent an electric circuit. The electric current passing through each wire is calculated by substituting the values of the segments into the objective function. The quality of the wire is based on its global resistance, which is calculated using Ohm's law.
The main objectives of this research are to 1) develop an optimization methodology that is capable of evaluating the quality of decision variable values in the solution string independently; 2) devise internal optimization mechanisms that enable the algorithm to search the solution space extensively and avoid convergence toward local optima; and 3) provide the construction industry with a reliable optimization tool that is capable of solving different classes of NP-hard optimization problems. First, internal processes are designed, modeled, and tested to enable the individual assessment of the quality of each decision variable value available in the solution space. The main principle in assessing the quality of each decision variable value individually is to use the segment resistance (local resistance) as an indicator of quality. This is accomplished by conducting a sensitivity analysis that records the change in the resistance of a control wire when a certain decision variable value is substituted into the corresponding segment of the control wire. The calculated local resistances of all segments of a wire are then normalized to ensure that their summation equals the global wire resistance and that Kirchhoff's rule is not violated. A benchmark NP-hard cash flow management problem from the literature is attempted to test and validate the performance of the developed approach. Not only was Electimize able to identify the optimal solution for the problem, but it also identified ten alternative optimal solutions, outperforming the existing algorithms. Second, the internal processes for the sensitivity analysis are designed to allow for an extensive search of the solution space through the generation of new wires: every time a decision variable value is substituted into the control wire to assess its quality, a new wire that might have a better quality is generated. To further test the capabilities of Electimize in searching the solution space, Electimize was applied to a multimodal 9-city travelling salesman problem (TSP) that had been previously designed and solved mathematically. The problem has 27 alternative optimal solutions, and Electimize succeeded in identifying 21 of them in a limited time period. Moreover, Electimize was applied to a 16-city benchmark TSP (Ulysses16) and was able to identify the optimal tour and its alternative. Further, additional parameters are incorporated to 1) allow for the extensive search of the solution space, 2) prevent convergence towards local optima, and 3) increase the rate of convergence towards the global optima. These parameters fall into two categories: 1) resistance-related parameters, and 2) solution exploration parameters. The resistance-related parameters are a) the conductor resistivity, b) its cross-sectional area, and c) the length of each segment; their main role is to provide the algorithm with additional gauging parameters that help guide it towards the global optima. The solution exploration parameters are a) the heat factor and b) the criterion for selecting the control wire; their main role is to allow an extensive search of the solution space in order to facilitate the identification of all the available alternative optimal solutions, prevent premature convergence towards local optima, and increase the rate of convergence towards the global optima. Two TSP instances (Bayg29 and ATT48) are attempted, and the results obtained illustrate that Electimize outperforms other EAs with respect to the quality of the solutions obtained. Third, to test the capabilities of Electimize as a reliable optimization tool for construction optimization problems, three benchmark NP-hard construction optimization problems are attempted.
The first problem is the cash flow management problem mentioned earlier. The second is the time-cost tradeoff problem (TCTP), used as an example of static optimization. The third is a site layout planning problem (SLPP), representing dynamic optimization. When Electimize was applied to the TCTP, it succeeded in identifying the optimal solution of the problem in a single iteration using thirty solution strings, compared to the hundreds of iterations and solution strings used by EAs to solve the same problem. Electimize was also successful in solving the SLPP and outperformed the existing algorithm used to solve the problem by identifying a better optimal solution.
The main contributions of this research are 1) developing a new approach and algorithm for optimization based on the simulation of the phenomenon of electrical conduction, 2) devising processes that enable assessing the quality of decision variable values independently, 3) formulating methodologies that allow for the extensive search of the solution space and the identification of alternative optimal solutions, and 4) providing a robust optimization tool for decision makers and construction planners.
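The wire-evaluation step the abstract describes can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the source voltage, the toy objective, the segment domains, and the assumption that higher resistance marks a better wire are all invented for the example.

```python
import random

V = 10.0  # assumed voltage of the source feeding the parallel circuit

def objective(segments):
    # toy objective function: sum of squares over the decision variables
    # (+1 keeps the "current" nonzero so Ohm's law stays well-defined)
    return sum(s * s for s in segments) + 1.0

def make_wire(n_segments, domain):
    # fabricate a wire (solution string): one random value per segment
    return [random.choice(list(domain)) for _ in range(n_segments)]

def evaluate(wire):
    # current through the wire = objective value at the wire's segments;
    # global resistance then follows Ohm's law, R = V / I
    current = objective(wire)
    return V / current

# fabricate thirty wires connected in parallel and pick the best one
wires = [make_wire(3, range(-5, 6)) for _ in range(30)]
best = max(wires, key=evaluate)  # modeling assumption: higher R is better
```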
ID: 030422839; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (Ph.D.)--University of Central Florida, 2011.; Includes bibliographical references (p. 263-268).
Ph.D.
Doctorate
Civil, Environmental and Construction Engineering
Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
8

Garrett, Aaron Dozier Gerry V. "Neural enhancement for multiobjective optimization". Auburn, Ala., 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SPRING/Computer_Science_and_Software_Engineering/Dissertation/Garrett_Aaron_55.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ngatchou, Patrick. "Intelligent techniques for optimization and estimation /". Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/5827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rohling, Gregory Allen. "Multiple Objective Evolutionary Algorithms for Independent, Computationally Expensive Objectives". Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4835.

Full text
Abstract:
This research augments current Multiple Objective Evolutionary Algorithms (MOEAs) with methods that dramatically reduce the time required to evolve toward a region of interest in objective space. MOEAs are superior to other optimization techniques when the search space is of high dimension and contains many local minima and maxima. Likewise, MOEAs are most interesting when applied to non-intuitive complex systems. But these systems are often computationally expensive to evaluate, and when they require independent computations to evaluate each objective, the computational expense grows with each additional objective. This research has developed methods that reduce the time required for evolution by reducing the number of objective evaluations, while still evolving solutions that are Pareto optimal. To date, all other MOEAs require the evaluation of all objectives before a fitness value can be assigned to an individual. The original contributions of this thesis are: 1. Development of a hierarchical search space description that allows association of crossover and mutation settings with elements of the genotypic description. 2. Development of a method for parallel evaluation of individuals that removes the need for synchronization delays. 3. Dynamic evolution of thresholds for objectives to allow partial evaluation of objectives for individuals. 4. Dynamic objective orderings to minimize the time spent on unnecessary objective evaluations. 5. Application of MOEAs to the computationally expensive flare pattern design domain. 6. Application of MOEAs to the optimization of fielded missile warning receiver algorithms. 7. Development of a new method of using MOEAs for automatic design of pattern recognition systems.
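Contribution 3 above, assigning fitness from a partial evaluation that aborts once an objective exceeds its threshold, can be sketched as follows. This is an illustrative reconstruction of the idea; the function names, toy objectives, and threshold semantics are assumptions, not the thesis's implementation.

```python
def partial_evaluate(individual, objectives, thresholds):
    """Evaluate objectives in order, stopping as soon as one exceeds its
    threshold; later (possibly expensive) objectives are never computed."""
    values = []
    for f, limit in zip(objectives, thresholds):
        v = f(individual)
        values.append(v)
        if v > limit:  # outside the region of interest: abort early
            break
    return values

# toy objectives: a cheap one first, a more expensive one second
objectives = [lambda x: abs(x), lambda x: x ** 4]
thresholds = [5.0, 1000.0]

partial_evaluate(2, objectives, thresholds)   # both objectives evaluated
partial_evaluate(10, objectives, thresholds)  # aborts after the first
```

Ordering the objectives so that cheap or highly discriminating ones come first (contribution 4) maximizes how often the early abort fires.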
APA, Harvard, Vancouver, ISO, and other styles
11

Ozogur-Akyuz, Sureyya. "A Mathematical Contribution Of Statistical Learning And Continuous Optimization Using Infinite And Semi-infinite Programming To Computational Statistics". PhD thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12610381/index.pdf.

Full text
Abstract:
A subfield of artificial intelligence, machine learning (ML) is concerned with the development of algorithms that allow computers to "learn". ML is the process of training a system with a large number of examples, extracting rules, and finding patterns in order to make predictions on new data points (examples). The most common machine learning schemes are supervised, semi-supervised, unsupervised, and reinforcement learning. These schemes apply to natural language processing, search engines, medical diagnosis, bioinformatics, credit fraud detection, stock market analysis, classification of DNA sequences, and speech and handwriting recognition in computer vision, to name just a few. In this thesis, we focus on Support Vector Machines (SVMs), one of the most powerful methods currently in machine learning. As a first motivation, we develop a model selection tool induced into SVM in order to solve a particular problem of computational biology, namely the prediction of eukaryotic pro-peptide cleavage sites, applied to real data collected from the NCBI data bank. Based on our biological example, a generalized model selection method is employed as a generalization for all kinds of learning problems. In ML algorithms, one of the crucial issues is the representation of the data. Discrete geometric structures and, especially, linear separability of the data play an important role in ML. If the data is not linearly separable, a kernel function transforms the nonlinear data into a higher-dimensional space in which the nonlinear data are linearly separable. As the data become heterogeneous and large-scale, single kernel methods become insufficient to classify nonlinear data. Convex combinations of kernels were developed to classify this kind of data [8]. Nevertheless, the selection of finite combinations of kernels is limited to a finite choice. In order to overcome this discrepancy, we propose a novel method of "infinite" kernel combinations for learning problems with the help of infinite and semi-infinite programming, regarding all elements in the kernel space. This makes it possible to study variations of combinations of kernels when considering heterogeneous data in real-world applications. Combination of kernels can be done, e.g., along a homotopy parameter or a more specific parameter. Looking at all infinitesimally fine convex combinations of the kernels from the infinite kernel set, the margin is maximized subject to an infinite number of constraints with a compact index set and an additional (Riemann-Stieltjes) integral constraint due to the combinations. After a parametrization in the space of probability measures, the problem becomes semi-infinite. We analyze the regularity conditions which satisfy the Reduction Ansatz and discuss the type of distribution functions within the structure of the constraints and our bilevel optimization problem. Finally, we adapt well-known numerical methods of semi-infinite programming to our new kernel machine. We improve the discretization method for our specific model and propose two new algorithms. We prove the convergence of the numerical methods and analyze the conditions and assumptions of these convergence theorems, such as optimality and convergence.
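The finite convex combination of kernels that the abstract generalizes can be illustrated as follows. This is a toy sketch: the two base kernels, the weight, and the parameter values are arbitrary choices, not those of the thesis.

```python
import math

def k_linear(x, y):
    # linear kernel: plain inner product
    return sum(a * b for a, b in zip(x, y))

def k_rbf(x, y, gamma=0.5):
    # Gaussian (RBF) kernel
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def k_combined(x, y, lam=0.3):
    # a convex combination lam*k1 + (1-lam)*k2 of two positive
    # semi-definite kernels is again a valid kernel; the thesis extends
    # this from finitely many weights to combinations over an infinite
    # kernel set, via semi-infinite programming
    return lam * k_linear(x, y) + (1 - lam) * k_rbf(x, y)
```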
APA, Harvard, Vancouver, ISO, and other styles
12

Roda, Fabio. "Intégration d'exigences de haut niveau dans les problèmes d'optimisation : théorie et applications". PhD thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00817782.

Full text
Abstract:
We use Systems Engineering and Mathematical Programming together to integrate high-level requirements into optimization problems. We apply this method to three different types of system. (1) Information Systems (IS), i.e., the networks of hardware, software, and user resources used in a company, must provide the basis for the projects launched to meet business needs. Information Systems must be able to evolve as one technology replaces another. We propose an operational model and a mathematical programming formulation that formalizes a prioritization problem arising in the context of the technological evolution of an information system. (2) Recommender Systems (RS) are a type of search engine whose goal is to provide personalized recommendations. We consider the problem of designing Recommender Systems so as to provide good, interesting, and accurate suggestions. (3) Transporting hazardous materials raises several problems related to the ecological consequences of possible incidents. The transport system must ensure transport for the safe disposal of hazardous waste in such a way that the risk of possible incidents is distributed fairly across the population. We consider two different notions of fairness and integrate them into mathematical programming formulations.
APA, Harvard, Vancouver, ISO, and other styles
13

Carman, Benjamin Andrew. "Repairing Redistricting: Using an Integer Linear Programming Model to Optimize Fairness in Congressional Districts". Ohio University Honors Tutorial College / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1619177994406176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Vanden, Berghen Frank. "Constrained, non-linear, derivative-free, parallel optimization of continuous, high computing load, noisy objective functions". Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211177.

Full text
Abstract:
The main result is a new original algorithm: CONDOR ("COnstrained, Non-linear, Direct, parallel Optimization using trust Region method for high-computing load, noisy functions"). The aim of this algorithm is to find the minimum x* of an objective function F(x) (x is a vector whose dimension is between 1 and 150) using the least number of function evaluations of F(x). It is assumed that the dominant computing cost of the optimization process is the time needed to evaluate the objective function F(x) (one evaluation can range from 2 minutes to 2 days). The algorithm will try to minimize the number of evaluations of F(x), at the cost of a huge amount of routine work. CONDOR is a derivative-free optimization tool (i.e., the derivatives of F(x) are not required). The only information needed about the objective function is a simple method (written in Fortran, C++, ...) or a program (a Unix, Windows, or Solaris executable) that can evaluate the objective function F(x) at a given point x. The algorithm has been specially developed to be very robust against noise in the evaluation of the objective function F(x). These hypotheses are very general, so the algorithm can be applied to a vast number of situations. CONDOR is able to use several CPUs in a cluster of computers. Different computer architectures can be mixed together and used simultaneously to deliver a huge computing power. The optimizer makes simultaneous evaluations of the objective function F(x) on the available CPUs to speed up the optimization process. The experimental results are very encouraging and validate the quality of the approach: CONDOR outperforms many commercial, high-end optimizers, and it might be the fastest optimizer in its category (fastest in terms of number of function evaluations). When several CPUs are used, the performance of CONDOR is currently unmatched (May 2004).
CONDOR has been used during the METHOD project to optimize the shape of the blades inside a centrifugal compressor (METHOD stands for Achievement Of Maximum Efficiency For Process Centrifugal Compressors THrough New Techniques Of Design). In this project, the objective function is based on a 3D CFD (computational fluid dynamics) code which simulates the flow of the gas inside the compressor.
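The parallel-evaluation idea, farming out simultaneous evaluations of an expensive F(x) to the available CPUs, can be sketched as follows. This is only an illustration with a cheap stand-in objective; CONDOR's actual trust-region sampling and model-building are far more involved.

```python
from concurrent.futures import ThreadPoolExecutor

def F(x):
    # stand-in for the expensive, possibly noisy objective F(x)
    # (in CONDOR's setting one evaluation may take minutes to days)
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def evaluate_batch(points, workers=4):
    # evaluate several trial points simultaneously, one per worker,
    # instead of waiting for each evaluation in turn
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(F, points))

candidates = [(0.0, 0.0), (1.0, -2.0), (2.0, 0.0)]
values = evaluate_batch(candidates)
best = candidates[values.index(min(values))]  # best point in the batch
```

For truly CPU-bound objectives a process pool (or a cluster scheduler, as CONDOR uses) would replace the thread pool; the batching pattern is the same.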
Doctorate in applied sciences
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
15

Benacer, Rachid. "Contribution à l'étude des algorithmes de l'optimisation non convexe et non différentiable". PhD thesis, Grenoble 1, 1986. http://tel.archives-ouvertes.fr/tel-00320986.

Full text
Abstract:
A theoretical and algorithmic study of nonconvex, nondifferentiable optimization problems of the following types: maximize f(x) over C; minimize f(x)-g(x) over C; minimize f(x) for x in C with g(x) positive, where f and g are convex functions defined on R^n and C is a nonempty compact convex subset of R^n. We study first-order necessary optimality conditions, duality, subgradient methods that converge to locally optimal solutions, and algorithms that yield globally optimal solutions. Some numerical results and applications of the presented algorithms are given.
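For the difference-of-convex case (minimize f(x)-g(x)), a subgradient-style iteration in the spirit of the methods studied can be sketched in one dimension. The step size, the toy functions, and the stopping rule are illustrative assumptions, not the thesis's algorithms.

```python
def dc_step(x, f_prime, g_sub, lr=0.1):
    # one subgradient step on h(x) = f(x) - g(x):
    # move against f'(x) minus a subgradient of g at x
    return x - lr * (f_prime(x) - g_sub(x))

# toy instance: f(x) = x^2 (convex), g(x) = |x| (convex, nondifferentiable)
f_prime = lambda x: 2.0 * x
g_sub = lambda x: float((x > 0) - (x < 0))  # subgradient of |x| away from 0

x = 3.0
for _ in range(200):
    x = dc_step(x, f_prime, g_sub)
# x approaches 0.5, a local minimizer of h(x) = x^2 - |x|
```

As the abstract notes, such methods converge only to locally optimal solutions; the global algorithms it mentions require additional machinery.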
APA, Harvard, Vancouver, ISO, and other styles
16

Martins, Alexandre Xavier. "Métaheuristiques et modélisation du problème de routage et affectation de longueurs d'ondes pour les réseaux de communications optiques". PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00864176.

Full text
Abstract:
Our work addresses the Routing and Wavelength Assignment (RWA) problem in WDM optical networks, independently of the underlying physical topology. The problem has been identified as NP-hard, and several approaches, both exact and approximate, exist. We first provide a literature review presenting several mathematical formulations of the problem as well as various ways of obtaining lower bounds and heuristics. We consider the min-RWA problem, in which a given set of requests must be satisfied with as few wavelengths as possible. We present a methodology based on a Variable Neighborhood Descent (VND) local search, called VND-BFD, whose main objective is to remove wavelengths. We also present a hybrid method, VND-BT. We then propose a new approach, also based on VND, which rearranges requests among the available wavelengths. When it reaches a local optimum, a perturbation procedure is applied, following a scheme similar to Iterated Local Search (ILS). Four variants are defined according to the strategies applied in VND and ILS: VNDr-ILSp, VNDe-ILSp, VNDr-ILS5p, and VNDe-ILS5p. Experimental results show that this new approach performs better, in particular the VNDe-ILS5p variant. The method is competitive with the best methods in the literature, as VNDe-ILS5p improved a large share of the best known solutions on standard min-RWA instances. Finally, we also consider the max-RWA problem, in which the number of satisfied requests must be maximized for a given number of wavelengths.
We propose compact models together with improvements designed to speed up resolution by integer programming solvers. After describing existing column-generation models, we propose a new column-generation model, PG-MAX-IS-IRC, which yields upper bounds of the same quality in a much shorter time.
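As a baseline for the wavelength-assignment side of the problem, a greedy first-fit heuristic assigns each already-routed request the lowest wavelength that is free on every link of its path. This is not the thesis's VND/ILS method, and the data layout is an illustrative assumption; it only shows the wavelength-continuity constraint the min-RWA objective counts against.

```python
def first_fit_wavelengths(paths):
    # paths: one list of link identifiers per already-routed request
    used = {}            # link -> set of wavelengths already occupied
    assignment = []
    for path in paths:
        w = 0
        # wavelength continuity: the same w must be free on every link
        while any(w in used.get(link, set()) for link in path):
            w += 1
        for link in path:
            used.setdefault(link, set()).add(w)
        assignment.append(w)
    return assignment

# two requests share link "a-b", so the second one needs a new wavelength
first_fit_wavelengths([["a-b", "b-c"], ["a-b"], ["c-d"]])
```

Metaheuristics such as the VND/ILS variants above improve on this kind of greedy assignment by rearranging requests among wavelengths to free entire wavelengths.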
APA, Harvard, Vancouver, ISO, and other styles
17

"An evolutionary approach to multi-objective optimization problems". 2002. http://library.cuhk.edu.hk/record=b6073476.

Full text
Abstract:
Zhong-Yao Zhu.
"6th August 2002."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (p. 227-239).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
Abstracts in English and Chinese.
APA, Harvard, Vancouver, ISO, and other styles
18

"Stochastic algorithms for optimal placements of flexible objects". 1999. http://library.cuhk.edu.hk/record=b6073193.

Full text
Abstract:
by Cheung, Shing Kwong.
Thesis (Ph.D.)--Chinese University of Hong Kong, 1999.
Includes bibliographical references (p. 137-143).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
Abstracts in English and Chinese.
APA, Harvard, Vancouver, ISO, and other styles
19

"Integration of constraint programming and linear programming techniques for constraint satisfaction problem and general constrained optimization problem". 2001. http://library.cuhk.edu.hk/record=b5890598.

Full text
Abstract:
Wong Siu Ham.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 131-138).
Abstracts in English and Chinese.
Abstract --- p.ii
Acknowledgments --- p.vi
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Motivation for Integration --- p.2
Chapter 1.2 --- Thesis Overview --- p.4
Chapter 2 --- Preliminaries --- p.5
Chapter 2.1 --- Constraint Programming --- p.5
Chapter 2.1.1 --- Constraint Satisfaction Problems (CSP's) --- p.6
Chapter 2.1.2 --- Satisfiability (SAT) Problems --- p.10
Chapter 2.1.3 --- Systematic Search --- p.11
Chapter 2.1.4 --- Local Search --- p.13
Chapter 2.2 --- Linear Programming --- p.17
Chapter 2.2.1 --- Linear Programming Problems --- p.17
Chapter 2.2.2 --- Simplex Method --- p.19
Chapter 2.2.3 --- Mixed Integer Programming Problems --- p.27
Chapter 3 --- Integration of Constraint Programming and Linear Programming --- p.29
Chapter 3.1 --- Problem Definition --- p.29
Chapter 3.2 --- Related works --- p.30
Chapter 3.2.1 --- Illustrating the Performances --- p.30
Chapter 3.2.2 --- Improving the Searching --- p.33
Chapter 3.2.3 --- Improving the representation --- p.36
Chapter 4 --- A Scheme of Integration for Solving Constraint Satisfaction Problem --- p.37
Chapter 4.1 --- Integrated Algorithm --- p.38
Chapter 4.1.1 --- Overview of the Integrated Solver --- p.38
Chapter 4.1.2 --- The LP Engine --- p.44
Chapter 4.1.3 --- The CP Solver --- p.45
Chapter 4.1.4 --- Proof of Soundness and Completeness --- p.46
Chapter 4.1.5 --- Compared with Previous Work --- p.46
Chapter 4.2 --- Benchmarking Results --- p.48
Chapter 4.2.1 --- Comparison with CLP solvers --- p.48
Chapter 4.2.2 --- Magic Squares --- p.51
Chapter 4.2.3 --- Random CSP's --- p.52
Chapter 5 --- A Scheme of Integration for Solving General Constrained Optimization Problem --- p.68
Chapter 5.1 --- Integrated Optimization Algorithm --- p.69
Chapter 5.1.1 --- Overview of the Integrated Optimizer --- p.69
Chapter 5.1.2 --- The CP Solver --- p.74
Chapter 5.1.3 --- The LP Engine --- p.75
Chapter 5.1.4 --- Proof of the Optimization --- p.77
Chapter 5.2 --- Benchmarking Results --- p.77
Chapter 5.2.1 --- Weighted Magic Square --- p.77
Chapter 5.2.2 --- Template design problem --- p.78
Chapter 5.2.3 --- Random GCOP's --- p.79
Chapter 6 --- Conclusions and Future Work --- p.97
Chapter 6.1 --- Conclusions --- p.97
Chapter 6.2 --- Future work --- p.98
Chapter 6.2.1 --- Detection of implicit equalities --- p.98
Chapter 6.2.2 --- Dynamical variable selection --- p.99
Chapter 6.2.3 --- Analysis on help of linear constraints --- p.99
Chapter 6.2.4 --- Local Search and Linear Programming --- p.99
Appendix --- p.101
Proof of Soundness and Completeness --- p.101
Proof of the optimization --- p.126
Bibliography --- p.130
APA, Harvard, Vancouver, ISO, and other styles
20

"A value estimation approach to Iri-Imai's method for constrained convex optimization". 2002. http://library.cuhk.edu.hk/record=b5891236.

Full text
Abstract:
Lam Sze Wan.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (leaves 93-95).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Background --- p.4
Chapter 3 --- Review of Iri-Imai Algorithm for Convex Programming Problems --- p.10
Chapter 3.1 --- Iri-Imai Algorithm for Convex Programming --- p.11
Chapter 3.2 --- Numerical Results --- p.14
Chapter 3.2.1 --- Linear Programming Problems --- p.15
Chapter 3.2.2 --- Convex Quadratic Programming Problems with Linear Inequality Constraints --- p.17
Chapter 3.2.3 --- Convex Quadratic Programming Problems with Convex Quadratic Inequality Constraints --- p.18
Chapter 3.2.4 --- Summary of Numerical Results --- p.21
Chapter 3.3 --- Chapter Summary --- p.22
Chapter 4 --- Value Estimation Approach to Iri-Imai Method for Constrained Optimization --- p.23
Chapter 4.1 --- Value Estimation Function Method --- p.24
Chapter 4.1.1 --- Formulation and Properties --- p.24
Chapter 4.1.2 --- Value Estimation Approach to Iri-Imai Method --- p.33
Chapter 4.2 --- "A New Smooth Multiplicative Barrier Function Φθ+,u" --- p.35
Chapter 4.2.1 --- Formulation and Properties --- p.35
Chapter 4.2.2 --- "Value Estimation Approach to Iri-Imai Method by Using Φθ+,u" --- p.41
Chapter 4.3 --- Convergence Analysis --- p.43
Chapter 4.4 --- Numerical Results --- p.46
Chapter 4.4.1 --- Numerical Results Based on Algorithm 4.1 --- p.46
Chapter 4.4.2 --- Numerical Results Based on Algorithm 4.2 --- p.50
Chapter 4.4.3 --- Summary of Numerical Results --- p.59
Chapter 4.5 --- Chapter Summary --- p.60
Chapter 5 --- Extension of Value Estimation Approach to Iri-Imai Method for More General Constrained Optimization --- p.61
Chapter 5.1 --- Extension of Iri-Imai Algorithm 3.1 for More General Constrained Optimization --- p.62
Chapter 5.1.1 --- Formulation and Properties --- p.62
Chapter 5.1.2 --- Extension of Iri-Imai Algorithm 3.1 --- p.63
Chapter 5.2 --- Extension of Value Estimation Approach to Iri-Imai Algorithm 4.1 for More General Constrained Optimization --- p.64
Chapter 5.2.1 --- Formulation and Properties --- p.64
Chapter 5.2.2 --- Value Estimation Approach to Iri-Imai Method --- p.67
Chapter 5.3 --- Extension of Value Estimation Approach to Iri-Imai Algorithm 4.2 for More General Constrained Optimization --- p.69
Chapter 5.3.1 --- Formulation and Properties --- p.69
Chapter 5.3.2 --- Value Estimation Approach to Iri-Imai Method --- p.71
Chapter 5.4 --- Numerical Results --- p.72
Chapter 5.4.1 --- Numerical Results Based on Algorithm 5.1 --- p.73
Chapter 5.4.2 --- Numerical Results Based on Algorithm 5.2 --- p.76
Chapter 5.4.3 --- Numerical Results Based on Algorithm 5.3 --- p.78
Chapter 5.4.4 --- Summary of Numerical Results --- p.86
Chapter 5.5 --- Chapter Summary --- p.87
Chapter 6 --- Conclusion --- p.88
Bibliography --- p.93
Chapter A --- Search Directions --- p.96
Chapter A.1 --- Newton's Method --- p.97
Chapter A.1.1 --- Golden Section Method --- p.99
Chapter A.2 --- Gradients and Hessian Matrices --- p.100
Chapter A.2.1 --- Gradient of Φθ(x) --- p.100
Chapter A.2.2 --- Hessian Matrix of Φθ(x) --- p.101
Chapter A.2.3 --- Gradient of φθ(x) --- p.101
Chapter A.2.4 --- Hessian Matrix of φθ(x) --- p.102
Chapter A.2.5 --- Gradient and Hessian Matrix of Φθ(x) in Terms of ∇xφθ(x) and ∇2xxφθ(x) --- p.102
Chapter A.2.6 --- Gradient of φθ+,u(x) --- p.102
Chapter A.2.7 --- Hessian Matrix of φθ+,u(x) --- p.103
Chapter A.2.8 --- Gradient and Hessian Matrix of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇2xxφθ+,u(x) --- p.103
Chapter A.3 --- Newton's Directions --- p.103
Chapter A.3.1 --- Newton Direction of Φθ(x) in Terms of ∇xφθ(x) and ∇2xxφθ(x) --- p.104
Chapter A.3.2 --- Newton Direction of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇2xxφθ+,u(x) --- p.104
Chapter A.4 --- Feasible Descent Directions for the Minimization Problems (Pθ) and (Pθ+) --- p.105
Chapter A.4.1 --- Feasible Descent Direction for the Minimization Problems (Pθ) --- p.105
Chapter A.4.2 --- Feasible Descent Direction for the Minimization Problems (Pθ+) --- p.107
Chapter B --- Randomly Generated Test Problems for Positive Definite Quadratic Programming --- p.109
Chapter B.1 --- Convex Quadratic Programming Problems with Linear Constraints --- p.110
Chapter B.1.1 --- General Description of Test Problems --- p.110
Chapter B.1.2 --- The Objective Function --- p.112
Chapter B.1.3 --- The Linear Constraints --- p.113
Chapter B.2 --- Convex Quadratic Programming Problems with Quadratic Inequality Constraints --- p.116
Chapter B.2.1 --- The Quadratic Constraints --- p.117
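The appendix entries above (A.1, A.1.1) pair Newton's method with a golden-section line search. As an illustration only, and not the thesis's own code, a minimal golden-section search for minimizing a unimodal function on an interval might look like this:

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search:
    keep two interior points at the golden ratio and discard the
    subinterval that cannot contain the minimizer."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            # minimizer lies in [a, d]; reuse c as the new right interior point
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # minimizer lies in [c, b]; reuse d as the new left interior point
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Illustrative use on a simple quadratic line-search objective
t_star = golden_section(lambda t: (t - 2.0) ** 2, 0.0, 5.0)
```

In a damped Newton scheme, `f` would be the merit function evaluated along the Newton direction and `[a, b]` a bracket on the step length.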
APA, Harvard, Vancouver, ISO, and other styles
21

Kalbat, Abdulrahman Younis Ali. "Distributed and Large-Scale Optimization". Thesis, 2016. https://doi.org/10.7916/D8D79B7V.

Full text
Abstract (summary):
This dissertation is motivated by the pressing need to solve real-world large-scale optimization problems, with the main objective of developing scalable algorithms capable of solving such problems efficiently. Large-scale optimization problems appear naturally in complex systems such as power networks and distributed control systems, which are the main systems of interest in this work. The dissertation addresses four problems in the theory and application of large-scale optimization, explained below:

Chapter 2: In this chapter, a fast and parallelizable algorithm is developed for arbitrary decomposable semidefinite programs (SDPs). Based on the alternating direction method of multipliers, we design a numerical algorithm with guaranteed convergence under very mild assumptions. We show that each iteration of this algorithm has a simple closed-form solution, consisting of matrix multiplications and eigenvalue decompositions performed by individual agents, together with information exchanges between neighboring agents. The cheap iterations of the proposed algorithm enable solving a wide spectrum of real-world large-scale conic optimization problems that can be reformulated as SDPs.

Chapter 3: Motivated by the application of sparse SDPs to power networks, the objective of this chapter is to design a fast and parallelizable algorithm for solving the SDP relaxation of a large-scale optimal power flow (OPF) problem. OPF is a fundamental problem used for the operation and planning of power networks; it is non-convex and NP-hard in the worst case. The proposed algorithm would enable real-time power network management and improve the system's reliability. In particular, it supports the realization of the Smart Grid by allowing optimal decisions to be made very quickly in response to the stochastic nature of renewable energy. The proposed algorithm is evaluated on IEEE benchmark systems.

Chapter 4: The design of an optimal distributed controller using an efficient computational method is one of the most fundamental problems in the area of control systems, and it remains open due to its NP-hardness in the worst case. In this chapter, we first study the infinite-horizon optimal distributed control (ODC) problem for deterministic systems and then generalize the results to a stochastic ODC problem for stochastic systems. Our approach rests on formulating each of these problems as a rank-constrained optimization from which an SDP relaxation can be derived. We show that both problems admit sparse SDP relaxations with solutions of rank at most 3. Since a rank-1 SDP matrix can be mapped back into a globally optimal controller, the rank-3 solution may be deployed to retrieve a near-global controller. We also propose a computationally cheap SDP relaxation for each problem and develop effective heuristic methods to recover a near-optimal controller from the low-rank SDP solution. The design of several near-optimal structured controllers with global optimality degrees above 99% is demonstrated.

Chapter 5: The frequency control problem in power networks aims to keep the global frequency of the system within a tight range by adjusting the output of generators in response to uncertain and stochastic demand. The intermittent nature of distributed power generation in the smart grid makes traditional decentralized frequency controllers less efficient and demands distributed controllers able to deal with the uncertainty introduced by non-dispatchable supplies (such as renewable energy), fluctuating loads, and measurement noise. Motivated by this need, we study the frequency control problem using the results developed in Chapter 4. In particular, we formulate the problem and then conduct a case study on the IEEE 39-bus New England system. The objective is to design a near-globally optimal distributed frequency controller for the New England test system by optimally adjusting the mechanical power input to each generator, based on real-time measurements received from neighboring generators through a user-defined communication topology.
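The Chapter 2 abstract describes ADMM iterations built from matrix multiplications and eigenvalue decompositions. As a hedged illustration of that style of iteration, and not the dissertation's algorithm, the sketch below runs ADMM on a toy matrix-nearness problem whose positive-semidefinite (PSD) constraint is handled by an eigendecomposition-based projection; `M`, `d`, and `rho` are illustrative inputs:

```python
import numpy as np

def project_psd(A):
    """Project a symmetric matrix onto the PSD cone via eigendecomposition --
    the kind of closed-form per-iteration step the abstract describes."""
    w, V = np.linalg.eigh((A + A.T) / 2.0)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

def admm_psd_nearness(M, d, rho=1.0, iters=500):
    """ADMM for the toy problem  min ||X - M||_F^2  s.t.  X PSD, diag(X) = d.
    Splitting: X carries the quadratic cost and the fixed diagonal,
    Z carries the PSD constraint, U is the scaled dual variable."""
    n = M.shape[0]
    X = np.zeros((n, n)); Z = np.zeros((n, n)); U = np.zeros((n, n))
    for _ in range(iters):
        # X-update: entrywise closed form, then enforce the fixed diagonal
        X = (2.0 * M + rho * (Z - U)) / (2.0 + rho)
        np.fill_diagonal(X, d)
        Z = project_psd(X + U)   # Z-update: projection onto the PSD cone
        U += X - Z               # dual ascent on the consensus constraint X = Z
    return Z
```

For M = [[1, 2], [2, 1]] with unit diagonal, the iterates approach the all-ones matrix, the nearest unit-diagonal PSD matrix to M. A decomposable SDP would distribute such X-, Z-, and U-updates across agents, each handling its own block.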
APA, Harvard, Vancouver, ISO, and other styles
22

"Improved recurrent neural networks for convex optimization". Thesis, 2008. http://library.cuhk.edu.hk/record=b6074683.

Full text
Abstract (summary):
Constrained optimization problems arise widely in scientific research and engineering applications. In the past two decades, solving optimization problems using recurrent neural networks has been extensively investigated, owing to the advantages of massively parallel operation and rapid convergence. In real applications, neural networks with simple architectures and good performance are desired. However, most existing neural networks have limitations and disadvantages in their convergence conditions or architectural complexity. This thesis concentrates on the analysis and design of recurrent neural networks with simplified architectures for solving more general convex optimization problems. Improved recurrent neural networks are proposed for solving smooth and non-smooth convex optimization problems and are applied to selected applications.
In Part I, we first propose a one-layer recurrent neural network for solving linear programming problems. Compared with other neural networks for linear programming, the proposed network has a simpler architecture and better convergence properties. Second, a one-layer recurrent neural network is proposed for solving quadratic programming problems. Its global convergence can be guaranteed provided only that the objective function is convex on the equality-constraint set, not necessarily convex everywhere. Compared with other neural networks for quadratic programming, such as the Lagrangian network and the projection neural network, the proposed network has a simpler architecture, with as many neurons as the optimization problem has decision variables. Third, combining the projection and penalty-parameter methods, a one-layer recurrent neural network is proposed for solving general convex optimization problems with linear constraints.
In Part II, improved recurrent neural networks are proposed for solving non-smooth convex optimization problems. We first propose a one-layer recurrent neural network for non-smooth convex programming problems with only equality constraints. This network simplifies the Lagrangian network and extends it to non-smooth convex optimization. Then, a two-layer recurrent neural network is proposed for non-smooth convex optimization subject to linear equality and bound constraints.
In Part III, selected applications of the proposed neural networks are discussed. The k-winners-take-all (kWTA) operation is first converted to equivalent linear and quadratic optimization problems, and two kWTA network models are tailored to perform the operation. The proposed networks are then applied to other problems, such as linear assignment, support vector machine learning, and curve fitting.
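The kWTA operation mentioned above can indeed be posed as a small constrained optimization and solved by simulating continuous-time network dynamics. The sketch below is a generic primal-dual projection dynamics for one plausible kWTA quadratic program, not the thesis's specific network models; the parameters `eps`, `h`, and `steps` are illustrative choices:

```python
import numpy as np

def kwta_network(u, k, eps=0.2, h=0.01, steps=30000):
    """Euler simulation of primal-dual projection dynamics for the
    k-winners-take-all quadratic program:
        min (eps/2)||x||^2 - u.x   s.t.  sum(x) = k,  0 <= x <= 1.
    When eps is smaller than the gap between the k-th and (k+1)-th
    largest inputs, the equilibrium is binary: x_i = 1 for the k winners."""
    n = len(u)
    x = np.full(n, 0.5)   # primal state: one neuron per input
    y = 0.0               # dual state: enforces sum(x) = k
    for _ in range(steps):
        # projected gradient flow on x; clip is the activation onto [0, 1]
        x_new = x + h * (-x + np.clip(x - (eps * x - u + y), 0.0, 1.0))
        y += h * (x.sum() - k)   # dual integrator on the equality constraint
        x = x_new
    return x
```

Running it on u = [0.3, 0.9, 0.5, 0.8] with k = 2 drives the neurons for 0.9 and 0.8 toward 1 and the others toward 0.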
Liu, Qingshan.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3606.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 133-145).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
APA, Harvard, Vancouver, ISO, and other styles
23

Sun, Ju. "When Are Nonconvex Optimization Problems Not Scary?" Thesis, 2016. https://doi.org/10.7916/D8251J7H.

Full text
Abstract (summary):
Nonconvex optimization is NP-hard, even when the goal is only to compute a local minimizer. In applied disciplines, however, nonconvex problems abound, and simple algorithms, such as gradient descent and alternating direction, are often surprisingly effective. The ability of simple algorithms to find high-quality solutions to practical nonconvex problems remains largely mysterious. This thesis focuses on a class of nonconvex optimization problems which CAN be solved to global optimality with polynomial-time algorithms. This class covers natural nonconvex formulations of central problems in signal processing, machine learning, and statistical estimation, such as sparse dictionary learning (DL), generalized phase retrieval (GPR), and orthogonal tensor decomposition. For each of the listed problems, the nonconvex formulation and optimization lead to novel and often improved computational guarantees. This class of nonconvex problems has two distinctive features: (i) all local minimizers are also global, so obtaining any local minimizer solves the optimization problem; (ii) around each saddle point or local maximizer, the function has a negative directional curvature; in other words, around these points the Hessian matrix has a negative eigenvalue. We call smooth functions with these two properties (qualitative) X functions, and derive concrete quantities and strategies to help verify the properties, particularly for functions with random inputs or parameters. As practical examples, we establish that certain natural nonconvex formulations for complete DL and GPR are X functions with concrete parameters. Optimizing X functions amounts to finding any local minimizer. With generic initializations, typical iterative methods at best guarantee convergence to a critical point, which might be a saddle point or local maximizer.
Interestingly, the X structure allows a number of iterative methods to escape from saddle points and local maximizers and efficiently find a local minimizer, without special initializations. We choose to describe and analyze the second-order trust-region method (TRM), which seems to yield the strongest computational guarantees. Intuitively, second-order methods can exploit the Hessian to extract negative-curvature directions around saddle points and local maximizers, and hence are able to escape them. We state the TRM in a Riemannian optimization framework to cater to practical manifold-constrained problems. For DL and GPR, we show that under technical conditions the TRM algorithm finds a global minimizer in a polynomial number of steps, from arbitrary initializations.
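The escape mechanism described above, using negative curvature at saddle points, can be illustrated on a toy function with the "X function" structure. The sketch below is not the thesis's Riemannian trust-region method; it is a minimal second-order escape step on the illustrative function f(x, y) = (x^2 - 1)^2/4 + y^2/2, which has a strict saddle at the origin and global minima at (+/-1, 0):

```python
import numpy as np

def f(p):
    x, y = p
    return 0.25 * (x**2 - 1.0)**2 + 0.5 * y**2

def grad(p):
    x, y = p
    return np.array([x**3 - x, y])

def hess(p):
    x, y = p
    return np.array([[3.0 * x**2 - 1.0, 0.0], [0.0, 1.0]])

def second_order_descent(p0, step=0.1, iters=300):
    """Gradient descent plus a negative-curvature escape step: when the
    gradient vanishes but the Hessian has a negative eigenvalue (a strict
    saddle), step along that eigenvector instead of stopping."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        g = grad(p)
        if np.linalg.norm(g) < 1e-8:
            w, V = np.linalg.eigh(hess(p))
            if w[0] < -1e-8:
                p = p + step * V[:, 0]   # escape along negative curvature
            else:
                break                    # second-order stationary: done
        else:
            p = p - step * g             # plain gradient step
    return p

p = second_order_descent([0.0, 0.0])  # start exactly at the saddle
```

Plain gradient descent started at the origin never moves (the gradient is zero there); the eigenvector step breaks the symmetry and the iterates then converge to a global minimizer.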
APA, Harvard, Vancouver, ISO, and other styles
24

McNeany, Scott Edward. "Characterizing software components using evolutionary testing and path-guided analysis". 2013. http://hdl.handle.net/1805/3775.

Full text
Abstract (summary):
Indiana University-Purdue University Indianapolis (IUPUI)
Evolutionary testing (ET) techniques (e.g., mutation, crossover, and natural selection) have been applied successfully to many areas of software engineering, such as error/fault identification, data mining, and software cost estimation. Previous research has also applied ET techniques to performance testing, but only as far as finding the best- and worst-case execution times. Although such performance testing is beneficial, it provides little insight into the performance characteristics of complex functions with multiple branches. This thesis therefore makes two contributions to the performance testing of software systems. First, it demonstrates how ET and genetic algorithms (GAs), which are search heuristics for solving optimization problems using mutation, crossover, and natural selection, can be combined with a constraint solver to target specific paths in the software. Second, it demonstrates how such an approach can identify local minimum and maximum execution times, providing a more detailed characterization of software performance. Results from applying our approach to example software applications show that it can characterize different execution paths in relatively short amounts of time. This thesis also examines a modified exhaustive approach that can be plugged in when the constraint solver cannot provide the information needed to target specific paths.
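As a rough illustration of the GA side of evolutionary testing, and not the thesis's tool (which additionally integrates a constraint solver for path targeting), the sketch below evolves integer inputs to maximize an instrumented "work" counter of a toy function whose branches have very different costs; all names and parameters here are invented:

```python
import random

def program_under_test(x):
    """Toy function with branches of very different cost; 'work' stands in
    for an instrumented execution-time measurement."""
    work = 0
    if x % 3 == 0:              # expensive branch: cost depends on the input
        for _ in range(x % 50):
            work += 1
    else:                       # cheap branch
        work += 1
    return work

def evolve(pop_size=30, gens=40, lo=0, hi=1000, seed=0):
    """Tiny GA: elitist selection of the top half, averaging crossover,
    small integer mutation. Searches for inputs maximizing the work counter."""
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=program_under_test, reverse=True)
        survivors = pop[: pop_size // 2]            # selection (best kept)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a + b) // 2                    # crossover: average parents
            if rng.random() < 0.3:                  # mutation: small jitter
                child = min(hi, max(lo, child + rng.randint(-10, 10)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=program_under_test)
```

Because the best individual always survives, the discovered worst-case cost is monotone over generations; steering the search toward a *specific* branch, as the thesis does, is where the constraint solver would come in.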
APA, Harvard, Vancouver, ISO, and other styles
25

Schachtebeck, Michael. "Delay Management in Public Transportation: Capacities, Robustness, and Integration". Doctoral thesis, 2009. http://hdl.handle.net/11858/00-1735-0000-0006-B3CE-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Go to the bibliography