Theses on the topic "Evolutionary programming (Computer science) Mathematical optimization"
Cite a source in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 25 dissertations (master's and doctoral theses) for your research on the topic "Evolutionary programming (Computer science) Mathematical optimization".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read the abstract (summary) of the work online, if it is included in the metadata.
Browse theses from a wide variety of scholarly disciplines and compile an accurate bibliography.
Service, Travis. "Co-optimization: a generalization of coevolution". Diss., Rolla, Mo. : Missouri University of Science and Technology, 2008. http://scholarsmine.mst.edu/thesis/pdf/Service_09007dcc804e2264.pdf.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed April 26, 2008). Includes bibliographical references (p. 65-68).
Gadiraju, Sriphani Raju. "Modified selection mechanisms designed to help evolution strategies cope with noisy response surfaces". Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-07022003-164112.
Muthuswamy, Shanthi. "Discrete particle swarm optimization algorithms for orienteering and team orienteering problems". Diss., Online access via UMI:, 2009.
Khan, Salman A. "Design and analysis of evolutionary and swarm intelligence techniques for topology design of distributed local area networks". Pretoria: [S.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-09272009-153908/.
Wong, Yin-cheung Eugene, and 黃彥璋. "A hybrid evolutionary algorithm for optimization of maritime logistics operations". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44526763.
Doddapaneni, Srinivas P. "Automatic dynamic decomposition of programs on distributed memory machines". Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/8158.
Abdel Raheem, Mohamed. "Electimize: a new evolutionary algorithm for optimization with applications in construction engineering". Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4833.
ID: 030422839; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (Ph.D.)--University of Central Florida, 2011; Includes bibliographical references (p. 263-268).
Ph.D.
Doctorate
Civil, Environmental and Construction Engineering
Engineering and Computer Science
Garrett, Aaron. "Neural enhancement for multiobjective optimization". Auburn, Ala., 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SPRING/Computer_Science_and_Software_Engineering/Dissertation/Garrett_Aaron_55.pdf.
Ngatchou, Patrick. "Intelligent techniques for optimization and estimation /". Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/5827.
Rohling, Gregory Allen. "Multiple Objective Evolutionary Algorithms for Independent, Computationally Expensive Objectives". Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4835.
Ozogur-akyuz, Sureyya. "A Mathematical Contribution Of Statistical Learning And Continuous Optimization Using Infinite And Semi-infinite Programming To Computational Statistics". Phd thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12610381/index.pdf.
Testo completolearn&rdquo
. ML is the process of training a system with large number of examples, extracting rules and finding patterns in order to make predictions on new data points (examples). The most common machine learning schemes are supervised, semi-supervised, unsupervised and reinforcement learning. These schemes apply to natural language processing, search engines, medical diagnosis, bioinformatics, detecting credit fraud, stock market analysis, classification of DNA sequences, speech and hand writing recognition in computer vision, to encounter just a few. In this thesis, we focus on Support Vector Machines (SVMs) which is one of the most powerful methods currently in machine learning. As a first motivation, we develop a model selection tool induced into SVM in order to solve a particular problem of computational biology which is prediction of eukaryotic pro-peptide cleavage site applied on the real data collected from NCBI data bank. Based on our biological example, a generalized model selection method is employed as a generalization for all kinds of learning problems. In ML algorithms, one of the crucial issues is the representation of the data. Discrete geometric structures and, especially, linear separability of the data play an important role in ML. If the data is not linearly separable, a kernel function transforms the nonlinear data into a higher-dimensional space in which the nonlinear data are linearly separable. As the data become heterogeneous and large-scale, single kernel methods become insufficient to classify nonlinear data. Convex combinations of kernels were developed to classify this kind of data [8]. Nevertheless, selection of the finite combinations of kernels are limited up to a finite choice. In order to overcome this discrepancy, we propose a novel method of &ldquo
infinite&rdquo
kernel combinations for learning problems with the help of infinite and semi-infinite programming regarding all elements in kernel space. This will provide to study variations of combinations of kernels when considering heterogeneous data in real-world applications. Combination of kernels can be done, e.g., along a homotopy parameter or a more specific parameter. Looking at all infinitesimally fine convex combinations of the kernels from the infinite kernel set, the margin is maximized subject to an infinite number of constraints with a compact index set and an additional (Riemann-Stieltjes) integral constraint due to the combinations. After a parametrization in the space of probability measures, it becomes semi-infinite. We analyze the regularity conditions which satisfy the Reduction Ansatz and discuss the type of distribution functions within the structure of the constraints and our bilevel optimization problem. Finally, we adapted well known numerical methods of semiinfinite programming to our new kernel machine. We improved the discretization method for our specific model and proposed two new algorithms. We proved the convergence of the numerical methods and we analyzed the conditions and assumptions of these convergence theorems such as optimality and convergence.
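The finite convex-combination baseline that this abstract contrasts with its "infinite" approach can be sketched in a few lines. The snippet below is a minimal illustration, not the thesis's method: the base kernels, weights, and data are all hypothetical choices. It only demonstrates the key property that a convex combination of positive semidefinite kernels is again a valid (PSD) kernel.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, c=1.0):
    # Inhomogeneous polynomial kernel matrix
    return (X @ Y.T + c) ** degree

def combined_gram(X, weights=(0.7, 0.3)):
    # Convex combination sum_i beta_i K_i with beta_i >= 0, sum_i beta_i = 1
    assert abs(sum(weights) - 1.0) < 1e-9
    return weights[0] * rbf_kernel(X, X) + weights[1] * poly_kernel(X, X)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
K = combined_gram(X)
# Each base Gram matrix is PSD, so the convex combination is PSD as well
eigs = np.linalg.eigvalsh(K)
```

Such a matrix `K` could be passed to any SVM solver that accepts precomputed kernels; the thesis's contribution is to optimize over *all* such combinations (an infinite, measure-valued family) rather than a fixed finite set of weights.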
Roda, Fabio. "Intégration d'exigences de haut niveau dans les problèmes d'optimisation : théorie et applications". Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00817782.
Carman, Benjamin Andrew. "Repairing Redistricting: Using an Integer Linear Programming Model to Optimize Fairness in Congressional Districts". Ohio University Honors Tutorial College / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1619177994406176.
Vanden Berghen, Frank. "Constrained, non-linear, derivative-free, parallel optimization of continuous, high computing load, noisy objective functions". Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211177.
Testo completoDoctorat en sciences appliquées
Benacer, Rachid. "Contribution à l'étude des algorithmes de l'optimisation non convexe et non différentiable". Phd thesis, Grenoble 1, 1986. http://tel.archives-ouvertes.fr/tel-00320986.
Martins, Alexandre Xavier. "Métaheuristiques et modélisation du problème de routage et affectation de longueurs d'ondes pour les réseaux de communications optiques". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00864176.
"An evolutionary approach to multi-objective optimization problems". 2002. http://library.cuhk.edu.hk/record=b6073476.
Testo completo"6th August 2002."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (p. 227-239).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
Abstracts in English and Chinese.
"Stochastic algorithms for optimal placements of flexible objects". 1999. http://library.cuhk.edu.hk/record=b6073193.
Thesis (Ph.D.)--Chinese University of Hong Kong, 1999.
Includes bibliographical references (p. 137-143).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
Abstracts in English and Chinese.
"Integration of constraint programming and linear programming techniques for constraint satisfaction problem and general constrained optimization problem". 2001. http://library.cuhk.edu.hk/record=b5890598.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 131-138).
Abstracts in English and Chinese.
Abstract --- p.ii
Acknowledgments --- p.vi
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Motivation for Integration --- p.2
Chapter 1.2 --- Thesis Overview --- p.4
Chapter 2 --- Preliminaries --- p.5
Chapter 2.1 --- Constraint Programming --- p.5
Chapter 2.1.1 --- Constraint Satisfaction Problems (CSP's) --- p.6
Chapter 2.1.2 --- Satisfiability (SAT) Problems --- p.10
Chapter 2.1.3 --- Systematic Search --- p.11
Chapter 2.1.4 --- Local Search --- p.13
Chapter 2.2 --- Linear Programming --- p.17
Chapter 2.2.1 --- Linear Programming Problems --- p.17
Chapter 2.2.2 --- Simplex Method --- p.19
Chapter 2.2.3 --- Mixed Integer Programming Problems --- p.27
Chapter 3 --- Integration of Constraint Programming and Linear Programming --- p.29
Chapter 3.1 --- Problem Definition --- p.29
Chapter 3.2 --- Related works --- p.30
Chapter 3.2.1 --- Illustrating the Performances --- p.30
Chapter 3.2.2 --- Improving the Searching --- p.33
Chapter 3.2.3 --- Improving the Representation --- p.36
Chapter 4 --- A Scheme of Integration for Solving Constraint Satisfaction Problem --- p.37
Chapter 4.1 --- Integrated Algorithm --- p.38
Chapter 4.1.1 --- Overview of the Integrated Solver --- p.38
Chapter 4.1.2 --- The LP Engine --- p.44
Chapter 4.1.3 --- The CP Solver --- p.45
Chapter 4.1.4 --- Proof of Soundness and Completeness --- p.46
Chapter 4.1.5 --- Compared with Previous Work --- p.46
Chapter 4.2 --- Benchmarking Results --- p.48
Chapter 4.2.1 --- Comparison with CLP solvers --- p.48
Chapter 4.2.2 --- Magic Squares --- p.51
Chapter 4.2.3 --- Random CSP's --- p.52
Chapter 5 --- A Scheme of Integration for Solving General Constrained Optimization Problem --- p.68
Chapter 5.1 --- Integrated Optimization Algorithm --- p.69
Chapter 5.1.1 --- Overview of the Integrated Optimizer --- p.69
Chapter 5.1.2 --- The CP Solver --- p.74
Chapter 5.1.3 --- The LP Engine --- p.75
Chapter 5.1.4 --- Proof of the Optimization --- p.77
Chapter 5.2 --- Benchmarking Results --- p.77
Chapter 5.2.1 --- Weighted Magic Square --- p.77
Chapter 5.2.2 --- Template design problem --- p.78
Chapter 5.2.3 --- Random GCOP's --- p.79
Chapter 6 --- Conclusions and Future Work --- p.97
Chapter 6.1 --- Conclusions --- p.97
Chapter 6.2 --- Future work --- p.98
Chapter 6.2.1 --- Detection of implicit equalities --- p.98
Chapter 6.2.2 --- Dynamical variable selection --- p.99
Chapter 6.2.3 --- Analysis on help of linear constraints --- p.99
Chapter 6.2.4 --- Local Search and Linear Programming --- p.99
Appendix --- p.101
Proof of Soundness and Completeness --- p.101
Proof of the optimization --- p.126
Bibliography --- p.130
"A value estimation approach to Iri-Imai's method for constrained convex optimization". 2002. http://library.cuhk.edu.hk/record=b5891236.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (leaves 93-95).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Background --- p.4
Chapter 3 --- Review of Iri-Imai Algorithm for Convex Programming Problems --- p.10
Chapter 3.1 --- Iri-Imai Algorithm for Convex Programming --- p.11
Chapter 3.2 --- Numerical Results --- p.14
Chapter 3.2.1 --- Linear Programming Problems --- p.15
Chapter 3.2.2 --- Convex Quadratic Programming Problems with Linear Inequality Constraints --- p.17
Chapter 3.2.3 --- Convex Quadratic Programming Problems with Convex Quadratic Inequality Constraints --- p.18
Chapter 3.2.4 --- Summary of Numerical Results --- p.21
Chapter 3.3 --- Chapter Summary --- p.22
Chapter 4 --- Value Estimation Approach to Iri-Imai Method for Constrained Optimization --- p.23
Chapter 4.1 --- Value Estimation Function Method --- p.24
Chapter 4.1.1 --- Formulation and Properties --- p.24
Chapter 4.1.2 --- Value Estimation Approach to Iri-Imai Method --- p.33
Chapter 4.2 --- "A New Smooth Multiplicative Barrier Function Φθ+,u" --- p.35
Chapter 4.2.1 --- Formulation and Properties --- p.35
Chapter 4.2.2 --- "Value Estimation Approach to Iri-Imai Method by Using Φθ+,u" --- p.41
Chapter 4.3 --- Convergence Analysis --- p.43
Chapter 4.4 --- Numerical Results --- p.46
Chapter 4.4.1 --- Numerical Results Based on Algorithm 4.1 --- p.46
Chapter 4.4.2 --- Numerical Results Based on Algorithm 4.2 --- p.50
Chapter 4.4.3 --- Summary of Numerical Results --- p.59
Chapter 4.5 --- Chapter Summary --- p.60
Chapter 5 --- Extension of Value Estimation Approach to Iri-Imai Method for More General Constrained Optimization --- p.61
Chapter 5.1 --- Extension of Iri-Imai Algorithm 3.1 for More General Constrained Optimization --- p.62
Chapter 5.1.1 --- Formulation and Properties --- p.62
Chapter 5.1.2 --- Extension of Iri-Imai Algorithm 3.1 --- p.63
Chapter 5.2 --- Extension of Value Estimation Approach to Iri-Imai Algorithm 4.1 for More General Constrained Optimization --- p.64
Chapter 5.2.1 --- Formulation and Properties --- p.64
Chapter 5.2.2 --- Value Estimation Approach to Iri-Imai Method --- p.67
Chapter 5.3 --- Extension of Value Estimation Approach to Iri-Imai Algorithm 4.2 for More General Constrained Optimization --- p.69
Chapter 5.3.1 --- Formulation and Properties --- p.69
Chapter 5.3.2 --- Value Estimation Approach to Iri-Imai Method --- p.71
Chapter 5.4 --- Numerical Results --- p.72
Chapter 5.4.1 --- Numerical Results Based on Algorithm 5.1 --- p.73
Chapter 5.4.2 --- Numerical Results Based on Algorithm 5.2 --- p.76
Chapter 5.4.3 --- Numerical Results Based on Algorithm 5.3 --- p.78
Chapter 5.4.4 --- Summary of Numerical Results --- p.86
Chapter 5.5 --- Chapter Summary --- p.87
Chapter 6 --- Conclusion --- p.88
Bibliography --- p.93
Chapter A --- Search Directions --- p.96
Chapter A.1 --- Newton's Method --- p.97
Chapter A.1.1 --- Golden Section Method --- p.99
Chapter A.2 --- Gradients and Hessian Matrices --- p.100
Chapter A.2.1 --- Gradient of Φθ(x) --- p.100
Chapter A.2.2 --- Hessian Matrix of Φθ(x) --- p.101
Chapter A.2.3 --- Gradient of φθ(x) --- p.101
Chapter A.2.4 --- Hessian Matrix of φθ(x) --- p.102
Chapter A.2.5 --- Gradient and Hessian Matrix of Φθ(x) in Terms of ∇xφθ(x) and ∇2xxφθ(x) --- p.102
Chapter A.2.6 --- "Gradient of φθ+,u(x)" --- p.102
Chapter A.2.7 --- "Hessian Matrix of φθ+,u(x)" --- p.103
Chapter A.2.8 --- "Gradient and Hessian Matrix of Φθ+,u(x) in Terms of ∇xφθ+,u(x)and ∇2xxφθ+,u(x)" --- p.103
Chapter A.3 --- Newton's Directions --- p.103
Chapter A.3.1 --- Newton Direction of Φθ(x) in Terms of ∇xφθ(x) and ∇2xxφθ(x) --- p.104
Chapter A.3.2 --- "Newton Direction of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇2xxφθ+,u(x)" --- p.104
Chapter A.4 --- Feasible Descent Directions for the Minimization Problems (Pθ) and (Pθ+) --- p.105
Chapter A.4.1 --- Feasible Descent Direction for the Minimization Problems (Pθ) --- p.105
Chapter A.4.2 --- Feasible Descent Direction for the Minimization Problems (Pθ+) --- p.107
Chapter B --- Randomly Generated Test Problems for Positive Definite Quadratic Programming --- p.109
Chapter B.1 --- Convex Quadratic Programming Problems with Linear Constraints --- p.110
Chapter B.1.1 --- General Description of Test Problems --- p.110
Chapter B.1.2 --- The Objective Function --- p.112
Chapter B.1.3 --- The Linear Constraints --- p.113
Chapter B.2 --- Convex Quadratic Programming Problems with Quadratic Inequality Constraints --- p.116
Chapter B.2.1 --- The Quadratic Constraints --- p.117
Kalbat, Abdulrahman Younis Ali. "Distributed and Large-Scale Optimization". Thesis, 2016. https://doi.org/10.7916/D8D79B7V.
"Improved recurrent neural networks for convex optimization". Thesis, 2008. http://library.cuhk.edu.hk/record=b6074683.
In Part I, we first propose a one-layer recurrent neural network for solving linear programming problems. Compared with other neural networks for linear programming, the proposed neural network has a simpler architecture and better convergence properties. Second, a one-layer recurrent neural network is proposed for solving quadratic programming problems. The global convergence of the neural network can be guaranteed provided only that the objective function of the programming problem is convex on the equality constraints, not necessarily convex everywhere. Compared with other neural networks for quadratic programming, such as the Lagrangian network and the projection neural network, the proposed neural network has a simpler architecture, whose number of neurons matches the dimension of the optimization problem. Third, combining the projection and penalty parameter methods, a one-layer recurrent neural network is proposed for solving general convex optimization problems with linear constraints.
In Part II, some improved recurrent neural networks are proposed for solving non-smooth convex optimization problems. We first propose a one-layer recurrent neural network for solving non-smooth convex programming problems with only equality constraints. This neural network simplifies the Lagrangian network and extends it to non-smooth convex optimization problems. Then, a two-layer recurrent neural network is proposed for non-smooth convex optimization subject to linear equality and bound constraints.
In Part III, some selected applications of the proposed neural networks are discussed. The k-winners-take-all (kWTA) operation is first converted to equivalent linear and quadratic optimization problems, and two kWTA network models are tailored to perform the kWTA operation. Then, the proposed neural networks are applied to some other problems, such as linear assignment, support vector machine learning, and curve fitting.
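The projection-type dynamics that such recurrent networks use can be illustrated on a tiny box-constrained quadratic program. The sketch below is not the thesis's model; it simulates the classical projection-network ODE dx/dt = P(x − α(Qx + c)) − x with a forward-Euler step, where P is projection onto the box (all problem data and step sizes here are illustrative).

```python
import numpy as np

# Box-constrained QP: minimize 0.5 x'Qx + c'x  subject to  l <= x <= u
Q = np.diag([1.0, 2.0])
c = np.array([-2.0, 3.0])
l, u = 0.0, 1.0

def project(x):
    # Projection onto the box [l, u]^n (plays the role of the network's activation)
    return np.clip(x, l, u)

# Discretized recurrent dynamics: x <- x + h * (P(x - alpha*(Qx + c)) - x)
alpha, h = 0.1, 0.5
x = np.zeros(2)
for _ in range(500):
    x = x + h * (project(x - alpha * (Q @ x + c)) - x)

# For diagonal Q the optimum is the clipped unconstrained minimizer -c/diag(Q)
x_star = project(-c / np.diag(Q))
```

The equilibrium of the dynamics satisfies x = P(x − α(Qx + c)), which is exactly the KKT fixed-point condition for the QP; the state converges to [1, 0] here, matching `x_star`.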
Liu, Qingshan.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3606.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 133-145).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
Sun, Ju. "When Are Nonconvex Optimization Problems Not Scary?" Thesis, 2016. https://doi.org/10.7916/D8251J7H.
McNeany, Scott Edward. "Characterizing software components using evolutionary testing and path-guided analysis". 2013. http://hdl.handle.net/1805/3775.
Evolutionary testing (ET) techniques (e.g., mutation, crossover, and natural selection) have been applied successfully to many areas of software engineering, such as error/fault identification, data mining, and software cost estimation. Previous research has also applied ET techniques to performance testing. Its application to performance testing, however, only goes as far as finding the best- and worst-case execution times. Although such performance testing is beneficial, it provides little insight into the performance characteristics of complex functions with multiple branches. This thesis therefore provides two contributions towards performance testing of software systems. First, this thesis demonstrates how ET and genetic algorithms (GAs), which are search heuristic mechanisms for solving optimization problems using mutation, crossover, and natural selection, can be combined with a constraint solver to target specific paths in the software. Second, this thesis demonstrates how such an approach can identify local minimum and maximum execution times, which provides a more detailed characterization of software performance. The results from applying our approach to example software applications show that it is able to characterize different execution paths in relatively short amounts of time. This thesis also examines a modified exhaustive approach which can be plugged in when the constraint solver cannot properly provide the information needed to target specific paths.
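The GA side of this idea can be caricatured in a few lines: evolve inputs that maximize a cost proxy for a branchy function. The sketch below is purely illustrative, not the thesis's tooling; the target function, its cost measure, and all GA parameters are hypothetical, and the constraint-solver integration is omitted.

```python
import random

def target(x):
    # Hypothetical branchy function; cost counts how often the expensive branch runs
    cost = 0
    for bit in x:
        if bit:
            cost += 1  # stand-in for an expensive execution path
    return cost

def ga_max_cost(n_bits=32, pop_size=30, gens=60, seed=1):
    # Simple elitist GA: truncation selection, one-point crossover, point mutation
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=target, reverse=True)
        parents = scored[: pop_size // 2]        # keep the fitter half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)            # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    best = max(pop, key=target)
    return best, target(best)

best, best_cost = ga_max_cost()
```

In the thesis's setting the fitness would be a measured execution time and a constraint solver would steer inputs onto a chosen path; here selection pressure alone drives the population toward high-cost inputs.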
Schachtebeck, Michael. "Delay Management in Public Transportation: Capacities, Robustness, and Integration". Doctoral thesis, 2009. http://hdl.handle.net/11858/00-1735-0000-0006-B3CE-4.