Dissertations / Theses on the topic 'Simulated Annealing Optimization'

Consult the top 50 dissertations / theses for your research on the topic 'Simulated Annealing Optimization.'

1

Sakhavat, Tamim, Haithem Grissa, and Ziyad Abdalrahman. "Simulated Annealing : Simulated Annealing for Large Scale Optimization in Wireless Communications." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-24606.

Abstract:
In this thesis a simulated annealing algorithm is employed as an optimization tool for a large-scale optimization problem in wireless communication. In this application there are 100 candidate positions for transmit antennas and 100 for receivers, and a channel between each pair of positions in the two areas. The aim is to find, say, the best 3 positions in a way that the channel capacity is maximized. The number of possible combinations is huge, so finding the best channel by exhaustive search would take a very long time. To solve this problem, we use a simulated annealing algorithm to estimate the best answer. The simulated annealing algorithm chooses a random element and then, as in a local search algorithm, compares the selected element with its neighbourhood; if the selected element is the maximum among its neighbours, it is a local maximum. The strength of the simulated annealing algorithm is its ability to escape from local maxima by using a random acceptance mechanism that mimics Boltzmann statistics.
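The mechanism this abstract describes is the core of nearly every entry in this list, so a minimal sketch may help. The following Python fragment assumes a placeholder objective `capacity` (any function scoring a set of k antenna positions; the thesis evaluates channel capacity) and uses the Metropolis acceptance rule with geometric cooling:

```python
import math
import random

def anneal_positions(capacity, n_positions, k, t0=1.0, alpha=0.95, steps=5000):
    """Pick k of n_positions maximizing capacity(subset) -- a sketch."""
    current = random.sample(range(n_positions), k)
    best, best_val = list(current), capacity(current)
    t = t0
    for _ in range(steps):
        # Neighbour move: replace one chosen position with an unused one.
        candidate = list(current)
        unused = [p for p in range(n_positions) if p not in current]
        candidate[random.randrange(k)] = random.choice(unused)
        delta = capacity(candidate) - capacity(current)
        # Metropolis rule: always accept improvements, sometimes accept
        # worse moves (probability exp(delta/t)) to escape local maxima.
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = candidate
            if capacity(current) > best_val:
                best, best_val = list(current), capacity(current)
        t *= alpha  # geometric cooling schedule
    return best
```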
2

Allred, Kory J. "Horizontal alignment optimization using simulated annealing /." Available to subscribers only, 2006. http://proquest.umi.com/pqdweb?did=1240703751&sid=3&Fmt=2&clientId=1509&RQT=309&VName=PQD.

3

Nunes, Luís. "Monitoring networks optimization with simulated annealing." Doctoral thesis, Instituto Superior Técnico - Universidade Técnica de Lisboa, 2003. http://hdl.handle.net/10400.1/1160.

Abstract:
Doctoral thesis, Engineering Sciences, Instituto Superior Técnico, Universidade Técnica de Lisboa, 2003
In this work some methods for optimizing environmental monitoring networks are proposed. These methods share simulated annealing as the approximation algorithm. Only monitoring network reduction is treated here. Monitoring network optimization is a very current problem, given the large number of existing networks in many countries operating large numbers of stations, some of which may be redundant, at very high operating costs. Difficulties arise because operating costs push the dimension of a network towards a minimum, while statistical reliability pushes in the opposite direction. Finding the optimal dimension may be a very difficult optimization problem due to the large number of combinations, even for small network dimensions. Further complications appear when the available data are too incomplete or come from different homogeneous areas. Some practical answers to these problems were sought in this work. Results showed that optimizing a monitoring network's dimension and station locations, without compromising the quality of the collected data, can attain large reductions in operating costs. Simulated annealing proved to be a very flexible and efficient algorithm.
4

Gu, Xiaoqing. "The behavior of simulated annealing in stochastic optimization." [Ames, Iowa : Iowa State University], 2008.

5

Osman, Ibrahim Hassan. "Metastrategy : simulated annealing and tabu search for combinatorial optimization problems." Thesis, Imperial College London, 1991. http://hdl.handle.net/10044/1/7596.

6

Leite, Joao Paulo de Barros. "Parallel adaptive search techniques for structural optimization." Thesis, Heriot-Watt University, 1996. http://hdl.handle.net/10399/1242.

7

Moins, Stephane. "Implementation of a Simulated Annealing algorithm for Matlab." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1344.

Abstract:

In this report we describe an adaptive simulated annealing method for sizing the devices in analog circuits. The motivation for using an adaptive simulated annealing method for analog circuit design is to increase the efficiency of the design process. To demonstrate the functionality and performance of the approach, an operational transconductance amplifier is simulated. The circuit is modeled with symbolic equations that are derived automatically by a simulator.
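The "adaptive" part of adaptive simulated annealing usually refers to tuning the move range on the fly. One common rule, due to Corana et al. (whether this thesis uses this exact scheme is an assumption), keeps the acceptance ratio near 50% per parameter:

```python
def adapt_step(step, accept_ratio, c=2.0):
    # Widen moves when too many candidates are accepted (search too
    # timid), shrink them when too few are accepted (search too wild).
    if accept_ratio > 0.6:
        return step * (1.0 + c * (accept_ratio - 0.6) / 0.4)
    if accept_ratio < 0.4:
        return step / (1.0 + c * (0.4 - accept_ratio) / 0.4)
    return step
```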

8

Rogers, Timothy James. "PWR fuel assembly optimization using adaptive simulated annealing coupled with translat." [College Station, Tex.: Texas A&M University], 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3024.

9

Barnes, Robert Otto II. "Concurrency Optimization for Integrative Network Analysis." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23220.

Abstract:
Virginia Tech's Computational Bioinformatics and Bio-imaging Laboratory (CBIL) is exploring integrative network analysis techniques to identify subnetworks or genetic pathways that contribute to various cancers. Chen et al. developed a bagging Markov random field (BMRF)-based approach which examines gene expression data together with prior biological information to reliably identify significant genes and proteins. Random resampling with replacement (bootstrapping, or bagging) is essential for confident results but is computationally demanding, as multiple iterations of the network identification (by simulated annealing) are required. The MATLAB implementation employs limited concurrency and is thus time-prohibitive. Using strong software development discipline, we optimize BMRF with algorithmic, compiler, and concurrency techniques (including Nvidia GPUs) to reduce the wall-clock time needed for analysis of large-scale genomic data. In particular, we decompose the BMRF algorithm into functional blocks, implement the algorithm in C/C++, and further explore the C/C++ implementation with concurrency optimization. Experiments are conducted with simulated and real data to demonstrate that a significant speedup of BMRF can be achieved by exploiting concurrency opportunities. We believe that the experience gained in this research will help pave the way for computationally efficient algorithms leveraging concurrency, enabling researchers to efficiently analyze the larger-scale data sets essential for furthering cancer research.
Master of Science
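Because each bootstrap resample is annealed independently, bagging is embarrassingly parallel, which is the concurrency opportunity the abstract exploits. A toy sketch of that structure in Python (the thesis implements it in C/C++ and CUDA; `run_network_id` and `data` here are hypothetical stand-ins, not CBIL's API):

```python
import random
from concurrent.futures import ProcessPoolExecutor

def bagged_runs(run_network_id, data, n_runs=100, workers=8):
    # Draw bootstrap resamples (sampling with replacement), then anneal
    # each one in a separate process; results are aggregated afterwards.
    resamples = [random.choices(data, k=len(data)) for _ in range(n_runs)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_network_id, resamples))
```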
10

Yip, Pui-Chiu. "The role of regional guidance in optimization: The guided evolutionary simulated annealing approach." Case Western Reserve University School of Graduate Studies / OhioLINK, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=case1056998256.

11

Sham, Edwin O. H. "Inverse treatment planning by simulated annealing optimization of a dose-volume objective function." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33837.

Abstract:
An algorithm for the optimization of numerous modulated beam weights has been developed. The algorithm employs a penalty-function theorem and a simulated annealing (SA) routine to model a large-scale constrained optimization problem incorporating dose and dose-volume constraints, reflecting the goal of inverse treatment planning: sparing sufficient healthy tissue while delivering the necessary tumoricidal dose. The convergence of the dose-volume SA algorithm is investigated for validation. Its performance is also evaluated by comparing the algorithm with a gradient technique minimizing the same dose-volume objective function, which incorporates the target dose objectives and organ dose-volume constraints through penalty functions. The comparison shows that the objective function exhibits a global valley containing multiple local minima with similar outcomes in terms of function values, dose-volume histograms, and dose distributions. The gradient algorithm is therefore preferred for this optimization approach because of its speed.
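The penalty construction described above can be written generically as follows (a sketch: the exact weighting and constraint form used in the thesis may differ), where $\mathbf{w}$ are beam weights, $d_i$ voxel doses, and $V_j$ organ dose-volume metrics with penalty weights $\mu_j$:

$$F(\mathbf{w}) = \sum_{i \in \text{target}} \left( d_i(\mathbf{w}) - d_i^{\text{presc}} \right)^2 + \sum_j \mu_j \left[ \max\left( 0,\; V_j(\mathbf{w}) - V_j^{\max} \right) \right]^2$$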
12

Sweeney, James P. "Dual Constraint Problem Optimization Using A Natural Approach: Genetic Algorithm and Simulated Annealing." UNF Digital Commons, 2007. http://digitalcommons.unf.edu/etd/283.

Abstract:
Constraint optimization problems with multiple constraints and a large solution domain are NP-hard and span almost all industries in a variety of applications. One such application is the optimization of resource scheduling in a "pay per use" grid environment. Charging for these resources based on demand is often referred to as Utility Computing, where resource providers lease computing power with varying costs based on processing speed. Consumers using this resource have time and cost constraints associated with each job they submit. Determining the optimal way to divide the job among the available resources with regard to the time and cost constraints is tasked to the Grid Resource Broker (GRB). The GRB must use an optimization algorithm that returns an accurate result in a timely manner. The Genetic Algorithm and the Simulated Annealing algorithm can both be used to achieve this goal, although Simulated Annealing outperforms the Genetic Algorithm for use by the GRB. Determining optimal values for the variables used in each algorithm is often achieved through trial and error, and success depends upon the solution domain of the problem. Although this work outlines a specific grid resource allocation application, the results can be applied to any optimization problem based on dual constraints.
13

Haeser, Gabriel. "Algoritmo duas fases em otimização global." [s.n.], 1996. http://repositorio.unicamp.br/jspui/handle/REPOSIP/305937.

Abstract:
Advisor: Marcia A. Gomes Ruggiero
Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: In this work we study the theory behind some classical heuristics for global optimization, and a generalization of the genetic algorithm of Aarts, Eiben and van Hee. We propose an algorithm for global optimization of box-constrained differentiable problems, using simulated annealing and the local solver GENCAN. Numerical experiments are presented for the OVO problem (Order-Value Optimization) and for 28 classical problems from the literature. For general nonlinear programming problems, we outline ideas on how to use local solvers and global heuristics to build good algorithms for global optimization, and we also propose an algorithm based on simulated annealing with the local solver ALGENCAN.
Master's degree in Applied Mathematics (Optimization)
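The two-phase idea — a stochastic global phase proposing points, a local solver polishing them — can be sketched as follows, with `local_solve` standing in for GENCAN/ALGENCAN (a hypothetical placeholder, not those solvers' actual interface):

```python
import math
import random

def two_phase(f, local_solve, perturb, x0, t0=1.0, alpha=0.9, steps=200):
    # Local phase: polish every proposal; global phase: accept or
    # reject the polished point with the Metropolis rule.
    x, fx = local_solve(f, x0)
    best, fbest, t = x, fx, t0
    for _ in range(steps):
        y, fy = local_solve(f, perturb(x))
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling
    return best, fbest
```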
14

Herrera, Claudia Natalia Lara. "Algoritmo de tomografia por impedância elétrica baseado em Simulated Annealing." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-28012008-172456/.

Abstract:
Electrical Impedance Tomography (EIT) is a non-invasive technique used to produce images that represent the cross-sectional resistivity (or conductivity) distribution within a domain, for instance the human thorax, from electrical measurements made through electrodes distributed on its boundary. Currents are injected and voltages measured, or vice versa. Distributions of resistivity variations or distributions of absolute resistivity can be estimated, producing difference or absolute algorithms. The present work develops and evaluates the performance of a probabilistic algorithm based on the Simulated Annealing method (SA) to obtain absolute resistivity distributions in two dimensions (2D). SA differs from traditional search methods in that no evaluation of objective-function derivatives is required, and it can escape from local minima through the use of the Metropolis criterion for acceptance of new points in the search space. The developed algorithm solves the inverse problem of EIT by iteratively solving a direct problem, using random resistivity distributions. The random search is accomplished by the Metropolis algorithm. In the absence of regularizations, it is assumed that the resistivity distribution (an image) that minimizes the difference between the measured electrical potentials on the boundary and the computed electrical potentials is the closest to the real resistivity distribution. In this sense, the algorithm maximizes the likelihood. This work contributes to the development of image estimation algorithms applied to lung monitoring, for instance during mechanical ventilation. To solve this non-linear, ill-posed inverse problem it is necessary to introduce prior information in the form of restrictions of the solution space or regularization techniques. The tests are carried out using simulated data obtained from a numerical phantom, an experimental phantom, and human thorax data. The results show that the localization, size, and resistivity of an object are within the accuracy of EIT obtained by classical methods, but the computational effort is large. The advantages and feasibility of the proposed algorithm were investigated.
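In the absence of regularization, the quantity being minimized is just the boundary-voltage misfit; a schematic version (with `forward_solver` as a placeholder for the finite-element direct problem the thesis solves at every iteration):

```python
def misfit(rho, measured, forward_solver):
    # Squared difference between measured boundary potentials and those
    # computed by the direct problem for a candidate resistivity image.
    computed = forward_solver(rho)
    return sum((vm - vc) ** 2 for vm, vc in zip(measured, computed))
```

Minimizing this misfit is equivalent to maximizing the likelihood under additive Gaussian measurement noise, which is the sense in which the abstract says the final image maximizes the likelihood.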
15

Renova, Elvia Paola. "Simulated annealing algorithms for the optimization of particulate composite structures analyzed by X-FEM." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

16

Hamilton, Douglas (Douglas Maxwell). "Managerial factors affecting aircraft maintenance : an agent based model and optimization with simulated annealing." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100377.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 49-51).
A single-objective agent-based model of managerial factors affecting aircraft maintenance was built, based on a case study of safety climate's effect on maintenance efficacy in the Korean Air Force. In particular, the model measures the effect of managerial context and command on agents' motivation and efficacy. The model is then optimized using a simulated annealing algorithm. Input parameters were varied to ensure reliability and repeatability of results. The model's sensitivity, in terms of optimal input vector and results, was also tested across a variety of input parameters. Results suggested that across all input parameters two managerial contexts dominated: contingent reward systems and laissez-faire.
by Douglas Hamilton.
S.M. in Engineering and Management
17

REIS, T. M. "Bases gaussianas geradas com os métodos Monte Carlo Simulated Annealing e Particle Swarm Optimization." Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/7378.

Abstract:
The Monte Carlo Simulated Annealing and Particle Swarm Optimization methods were used to generate adapted Gaussian basis sets for the atoms from H to Ar in the ground state. A study of the efficiency and reliability of each method was carried out. To assess the reliability of the proposed methods, a specific study was performed on a test set of 15 atoms, namely: N, Mg, Al, Cl, Ti, Ni, Br, Sr, Ru, Pd, Sb, Cs, Ir, Tl, At. Initially, the Improved Generator Coordinate Hartree-Fock method was applied to generate adapted basis sets used as a starting point for the generation of new Gaussian basis sets. Subsequently, the Monte Carlo Simulated Annealing and Particle Swarm Optimization methods were developed in parallel studies following the same procedure, so that they could be compared at the end of the study. Before their effective application, both methods were calibrated in order to define the best parameters for the algorithms: studies of cooling schemes (for Monte Carlo Simulated Annealing), of the swarm size (for Particle Swarm Optimization), and of the total number of algorithm steps were carried out. After this calibration stage, the two methods were applied, together with the variational principle, to the Hartree-Fock wave function to obtain fully optimized Gaussian basis sets. The basis sets were then contracted so as to minimize the observed energy loss, favouring the contraction of the innermost exponents. The last two steps of the basis-set generation procedure were the inclusion of polarization functions and of diffuse functions, respectively; these steps were carried out with the methods developed in this work through MP2-level calculations. The basis sets generated in this work were used in practical calculations on atomic and molecular systems, and the results were compared with those obtained from similar relevant basis sets in the literature. We found that, for the same level of computational efficiency between the Monte Carlo Simulated Annealing and Particle Swarm Optimization methods, there is a small difference in effectiveness between them, with Monte Carlo Simulated Annealing giving slightly better results for the calculations performed. Comparing the results obtained in this work with the corresponding ones found in the literature, we observe numerically comparable values for the studied properties; however, the methods proposed here are significantly more efficient, making it possible to establish a single set of algorithm steps for different atomic systems. Moreover, we found that the specific optimization step proposed in this work is effective at locating the global minimum of the atomic functions at the HF level of theory. More detailed studies are needed to establish the real relationship behind the effectiveness observed for the two methods; the Particle Swarm Optimization method has a number of parameters whose influence was not checked in this work.

The fact that the methods developed in this work were built on Double Zeta basis sets does not restrict their generality, so they are readily applicable to the development of Gaussian basis sets of varied quality in the atomic environment.
18

Ozturk, Mustafa Yavuz. "Multiobjective Design Optimization Of Rockets And Missiles." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610503/index.pdf.

Abstract:
Multidisciplinary design optimization of aerospace vehicles has attracted the interest of many researchers, and well-known aerospace companies are developing tools for multidisciplinary design optimization. However, multiobjective optimization of the design is a new and important area that has so far been investigated very little. This thesis examines approaches to the multiobjective and multidisciplinary design optimization of rockets and missiles. In the study, a multiobjective optimization method called MC-MOSA is used.
19

Agostini, Flavia Paiva. "Generalized Simulated Annealing Parameter Sweeping Applied to the Protein Folding Problem." Laboratório Nacional de Computação Científica, 2009. http://www.lncc.br/tdmc/tde_busca/arquivo.php?codArquivo=179.

Abstract:
As genome sequencing advances, the comprehension of protein structures becomes a crucial extension of this progress. In spite of numerous recent technological advances, experimental determination of protein tertiary structures is still very slow compared to the rate at which amino acid sequence data accumulate. That is what makes protein folding a central problem for the development of the post-genomic era. In this work we use an optimization method, the Generalized Simulated Annealing (GSA), which is based on Tsallis' generalized thermostatistics, to investigate the protein folding problem. Although GSA is a generic procedure, its efficiency depends not only on the appropriate choice of parameters, but also on topological characteristics of the energy hypersurface. By mapping all the GSA parameters, it is possible to reduce the number of parameter choices; this also allows an analysis of their effects on the algorithm's behavior. As an initial step, we apply GSA to known structures, such as polyalanines, against which the results can be compared. We then apply GSA to three peptides of ribosomal P proteins, which are of considerable importance for the comprehension of Chagas' heart disease. Each contains 13 amino acids, and they differ only in the third residue by a non-conservative mutation. As these peptides do not have experimentally resolved structures, we analyze the results obtained from GSA followed by Molecular Dynamics simulations. The validity of these results is studied so that, in the future, unknown structures can be determined by this technique with a higher degree of confidence.
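For reference, one commonly quoted form of the GSA acceptance probability for an uphill move $\Delta E \ge 0$ (the precise parameterization varies across the literature) is

$$P_{q_A}(\Delta E) = \left[ 1 + (q_A - 1)\, \frac{\Delta E}{T_{q_A}} \right]^{-1/(q_A - 1)},$$

which recovers the classical Metropolis rule $e^{-\Delta E / T}$ in the limit $q_A \to 1$; the visiting distribution is controlled by a second Tsallis index $q_V$, and these are among the parameters the thesis maps.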
20

Sato, André Kubagawa. "Proposta de algoritmo para a determinação da região livre de colisão e sua aplicação na solução de leiautes bidimensionais irregulares com recozimento simulado." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-28022011-151901/.

Abstract:
The irregular shape packing problem is an optimization problem that consists of arranging items on a container in order to maximize the utilization rate of the sheet stock. This work investigates two problems. In the first, single bin packing, the items can rotate freely and the container, of fixed dimensions, can be any polygon, convex or non-convex. The second, an open dimension problem, consists of arranging items that have few admissible orientations in a container with fixed width and variable length; the objective is to find a feasible layout of the set of items that minimizes the length of the container. The solution is always represented as an ordered list of items to be packed, and a placement heuristic is applied to generate a layout. To ensure feasible layouts, the concept of the collision free region is adopted: it represents all the positions at which a new item can be placed inside the container without colliding with already placed items. The collision free region is obtained through non-manifold Boolean operations applied to the no-fit polygon and the inner-fit polygon. The simulated annealing algorithm controls the position, rotation, and placement order of the items. Each item is placed exclusively on a vertex of the collision free region. To improve the computational performance of the algorithm, a parallelization method for determining the collision free region is proposed, and its speed is compared with two different serial methods. The results show that the solutions for the single bin packing problem are very competitive with previous works and can achieve optimal solutions for puzzles with irregularly shaped containers and containers with holes. The algorithm for the open dimension problem has two hierarchical levels: a core level with a simulated annealing algorithm, and an external level controlling the container length. This algorithm was tested on literature problems and obtained very competitive results, some of which are more compact. The results also showed that the parallelized version outperforms the sequential approach only for datasets with a very large number of items. The computational cost of the non-manifold Boolean operation algorithm is strongly dependent on the number of vertices and intersections of the original polygons.
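The solution representation described — an ordered list plus orientations, with items placed only on collision-free-region vertices — implies a simple move set for the annealer. A generic sketch (the thesis' exact moves and move probabilities are not specified here):

```python
import random

def neighbour(order, angles, n_angles=4):
    # Either swap two items in the placement order or change one item's
    # discrete orientation; the placement heuristic then rebuilds the
    # layout vertex-by-vertex on the collision free region.
    order, angles = list(order), list(angles)
    if random.random() < 0.5:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    else:
        k = random.randrange(len(angles))
        angles[k] = random.randrange(n_angles)
    return order, angles
```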
21

Efta, James Anderson. "A Methodology for Planning Road Best Management Practices Combining WEPP: Road Erosion Modeling and Simulated Annealing Optimization." The University of Montana, 2009. http://etd.lib.umt.edu/theses/available/etd-09012009-091937/.

Abstract:
Erosion from forest roads is a known problem in mountainous terrain. To abate these negative consequences, physical Best Management Practices (BMPs) are implemented, sometimes with no knowledge of erosion hot spots. With the need to minimize water quality impacts while at the same time accounting for multiple considerations and constraints, road BMP planning at the watershed scale is a difficult task. To assist in this planning process, a methodology is presented here that combines WEPP: Road erosion predictions with simulated annealing optimization. Under this methodology, erosion predictions associated with BMP options for a segment comprise the objective function of an optimization problem. This methodology was tested on a watershed in the Lake Tahoe Basin. WEPP: Road input data was gathered through road surveys. Modeling results predicted relatively little sediment leaving the forest buffer, as a result of numerous well-maintained BMPs and the dry climate found in the watershed. A sensitivity analysis for all WEPP: Road input parameters is presented, which provides insight into the general applicability of these erosion estimates as well as the relative importance of each input parameter. After evaluating erosion risk across the entire watershed, applicable BMPs were assigned to problem road segments and WEPP: Road was used to predict change in sediment leaving the buffer with BMP implementation at a given site. These predictions, combined with budget constraints as well as equipment scheduling considerations, were incorporated into an algorithm using simulated annealing as its optimization engine. Three modeled scenarios demonstrate the viability of this methodology in reducing total sediment leaving the road buffer over a planning horizon. Of the 173 segments surveyed, 38 segments could be treated using generic BMPs. For all three scenarios, BMP-SA reduced sediment leaving the buffer by as much as 70% over the course of a 20-year planning horizon. For the 38 segments treated with BMPs, sediment was reduced by greater than 90% over the planning horizon. This methodology is a viable approach for streamlining watershed-scale road network BMP planning, despite its heavy reliance on road erosion estimates.
22

Moritz, Susanne. "A mixed integer approach for the transient case of gas network optimization." PhD thesis, [S.l.] : [s.n.], 2007. http://elib.tu-darmstadt.de/diss/000785.

23

Silva, Gabriel Caetano da. "Estimação do espectro de relaxação de polímeros através do algoritmo Simulated Annealing." Universidade do Estado do Rio de Janeiro, 2006. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=395.

Abstract:
The determination of the relaxation spectrum using data from small-amplitude oscillatory shear was accomplished by assuming that there exists a unique continuous function H(λ) which describes linear viscoelasticity. The aim of this work is to determine this function, or a close approximation, using a stochastic algorithm called Simulated Annealing (SA). The strategy is the same as that proposed by Jensen (2002), but the cooling schedule of the SA algorithm was modified in order to enhance its robustness. In addition, a calibration procedure was conducted so that accurate relaxation spectra could be estimated for other polymers. Lagrangian interpolation and nonlinear regression techniques were applied to obtain the continuous function that represents the relaxation spectrum from discrete data. The results generated for linear low-density polyethylene (LLDPE) demonstrate the efficiency of the computational optimization tool, being extremely close to those produced by the AR 2000 rheometer (CENPES/PETROBRAS).
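For context, the fit rests on the standard linear-viscoelasticity relations linking the continuous spectrum $H(\lambda)$ to the storage and loss moduli measured in small-amplitude oscillatory shear:

$$G'(\omega) = \int_{-\infty}^{\infty} H(\lambda)\, \frac{\omega^2 \lambda^2}{1 + \omega^2 \lambda^2}\, \mathrm{d}\ln\lambda, \qquad G''(\omega) = \int_{-\infty}^{\infty} H(\lambda)\, \frac{\omega \lambda}{1 + \omega^2 \lambda^2}\, \mathrm{d}\ln\lambda.$$

The SA search adjusts a discrete set of modes $(H_k, \lambda_k)$ so that these integrals (sums, in the discrete case) reproduce the measured moduli.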
24

Zheng, Lei. "An Optimization Approach to Indoor Location Problem Based on Received Signal Strength." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1357888392.

25

Chen, Jian Hua. "Dynamics and parameter identification of a tractor semitrailer-driver closed-loop system using the simulated annealing optimization approach." Thèse, Montréal : École de technologie supérieure, 2007. http://proquest.umi.com/pqdweb?did=1407502071&sid=5&Fmt=2&clientId=46962&RQT=309&VName=PQD.

Abstract:
Thesis (M.Eng.) -- École de technologie supérieure, Montréal, 2007.
"Thesis submitted to l'École de technologie supérieure in partial fulfillment of the requirements for the master's degree in mechanical engineering M.Eng." Bibliography: leaves [92]-96. Also available in electronic form.
26

Machado, Enéas Souza. "Utilização da metaheurística do recozimento simulado na otimização do planejamento de sistemas regionais de tratamento de efluentes e sua expansão da capacidade." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/3/3147/tde-14092009-142616/.

Abstract:
This study is concerned with the use of the metaheuristic Simulated Annealing for the optimal planning of regional effluent systems and their capacity expansion. The first model deals with the spatial optimization of the system: given a network where some nodes represent effluent sources and other nodes represent possible sewage treatment plant locations, the model seeks the minimum-cost configuration. The first module verifies the hydraulic viability of proposed configurations by solving the universal head-loss equation. This is also done via annealing, since there is a multitude of solutions because any flow direction is allowed. The second part of the model tries different candidate network solutions, changing configurations and/or diameters and looking for the lowest-cost solution. The second model deals with the capacity expansion of the system. The planning horizon is divided into two parts, and each solution for the second period is also tested for the first period, thus providing a global minimum for the entire planning period. The use of annealing coupled with intensive use of simulation results in large processing times, especially for the dynamic model. The models were tested on a network available in the literature and on the Barigui river basin, in the Metropolitan Region of Curitiba, PR. Cost equations were derived for conveyance systems, lifting stations, and wastewater treatment plants. Simulated Annealing proved to be an interesting tool for the planning and optimization of regional systems such as the ones studied here. Further studies are recommended, such as combining the two hydraulic models developed to improve computational time. Additional testing of the annealing parameters is also needed, and O&M cost functions should be detailed.
27

Pendyala, Shilpa. "Synthesis Techniques for Sub-threshold Leakage and NBTI Optimization in Digital VLSI Systems." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/6012.

Abstract:
Rising power demands and costs motivate us to explore low-power solutions in electronics. In nanometer Complementary Metal Oxide Semiconductor (CMOS) processes with low threshold voltages and thin gate oxides, subthreshold leakage power dominates the total power of a circuit. As technology scales, Negative Bias Temperature Instability (NBTI) has emerged as a major limiting reliability mechanism. It causes a threshold-voltage shift which, over time, results in circuit performance degradation. Hence, leakage power and NBTI degradation are two key challenges in the deep submicron regime. In this dissertation, an interval arithmetic based interval propagation technique is introduced as an effective leakage optimization technique for high-level circuits with little overhead. The concept of self-similarity from fractal theory is adopted for the first time in VLSI research to handle the large design space. Though there are some leakage and NBTI co-optimization techniques in the literature, our vector cycling approach combined with a back-tracking algorithm achieves better results on ISCAS85 benchmarks. We did not find any previous research on NBTI optimization of finite state machines (FSMs); a technique for NBTI optimization of FSMs is introduced in this dissertation as well, and substantial NBTI optimization is reported. Input vector control has been shown to be an effective technique to minimize subthreshold leakage. Applying an appropriate minimum leakage vector (MLV) to each register transfer level (RTL) module instance results in a low-leakage state with significant area overhead. For each module, via Monte Carlo simulation, we identify a set of MLV intervals such that maximum leakage is within (say) 10% of the lowest leakage points. As the module bit width increases, exhaustive simulation to find the low-leakage vector is not feasible; further, we need to search the entire input space uniformly to obtain as many low-leakage intervals as possible. Based on empirical observations, we observed self-similarity in the leakage distribution of adder/multiplier modules when the input space is partitioned into smaller cells. This property enables uniform search for low-leakage vectors in the entire input space. Also, the time taken for characterization increases linearly with module size, so this technique is scalable to higher bit-width modules with acceptable characterization time. We can reduce area overhead (in some cases to 0) by choosing primary input (PI) MLVs such that the resulting inputs to internal nodes are also MLVs; otherwise, control points can be inserted. Based on interval arithmetic, given a DFG, we propose a heuristic with several variations for PI MLV identification with minimal control points. Experimental results for DSP filters simulated in 16nm technology demonstrated leakage savings of 93.8% with no area overhead, compared to existing work. Input vector control can also be adopted to reduce NBTI degradation as well as leakage in CMOS circuits. Prior work has shown that the minimum leakage vector of a circuit is not necessarily NBTI friendly. In order to achieve NBTI and leakage co-optimization, we propose an input vector cycling technique which applies different sub-optimal low-leakage vectors to the primary inputs at regular intervals. A co-optimal input vector for a given circuit is obtained using a simulated annealing (SA) technique. For a given input vector, a set of critical-path PMOS transistors are under stress.
A second input vector is obtained using a back-tracking algorithm such that most of the critical-path PMOS transistors are put in recovery mode. When a co-optimized input vector is assigned to the primary inputs, critical-path nodes under stress with high delay contribution are set to recovery. Logic 1 is back-propagated from these nodes to the primary inputs to obtain the second input vector. The two vectors are alternated at regular time intervals. The total stress is evenly distributed among the transistor sets of the two vectors, as the intersection of the two sets is minimized. Hence, the overall stress on critical-path transistors is alleviated, thereby reducing the NBTI delay degradation. For ISCAS85 benchmarks, an average 5.3% improvement in performance degradation is achieved at 3.3% leakage overhead with NBTI-leakage co-optimization using the back-tracking algorithm, compared to co-optimization alone. A 10.5% average NBTI improvement is obtained, compared to a circuit with the minimum-leakage input vector, for 18% average leakage overhead. Also, an average NBTI improvement of 2.13% is obtained, with a 6.77% leakage improvement, compared to a circuit with the minimum-NBTI vector. Vector cycling is shown to be more effective than input vector control in mitigating NBTI. Several works in the literature have proposed optimal state-encoding techniques for delay, leakage, and dynamic power optimization. In this work we propose, for the first time, NBTI optimization based on state-code optimization. We propose an SA-based state-code assignment algorithm, resulting in minimization of NBTI degradation in the synthesized circuit. A PMOS transistor switched ON for a long period of time suffers delay degradation due to NBTI. Therefore, in combinational circuits, an NBTI-friendly input vector that stresses the fewest PMOS transistors on the critical path can be applied. For sequential circuits, the state code can significantly influence the ON/OFF mode of PMOS transistors in the controller implementation; we therefore focus on state encoding. As the problem is computationally intractable, we focus on encoding states with high state probability. The following SA moves are employed: (a) code swap; and (b) code modification by flipping bits. Experiments with LGSYNTH93 benchmarks resulted in an 18.6% average improvement in NBTI degradation, with area and power improvements of 5.5% and 4.6% respectively.
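The two SA moves named in the abstract operate directly on the state-code table; a minimal sketch (the `codes` mapping and `width` are generic stand-ins, not the dissertation's data structures):

```python
import random

def sa_move(codes, width):
    """codes: dict mapping state -> integer code of `width` bits."""
    codes = dict(codes)
    states = list(codes)
    if random.random() < 0.5:
        # (a) code swap: exchange the codes of two states.
        s1, s2 = random.sample(states, 2)
        codes[s1], codes[s2] = codes[s2], codes[s1]
    else:
        # (b) bit flip: modify one state's code, keeping codes unique.
        s = random.choice(states)
        new = codes[s] ^ (1 << random.randrange(width))
        if new not in codes.values():
            codes[s] = new
    return codes
```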
28

Monmousseau, Philippe. "Scheduling of a Constellation of Satellites: Improving a Simulated Annealing Model by Creating a Mixed-Integer Linear Model." Thesis, KTH, Optimeringslära och systemteori, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-179300.

Abstract:
The purpose of this thesis is to provide a new scheduling model for a large constellation of imaging satellites that does not rely on a heuristic solving method. The objective is to create a mixed-integer linear model that is competitive, in speed and in closeness to reality, with a current model using simulated annealing, while trying to improve both models. Each satellite has the choice between a number of possible events, each event having a utility and a cost, and the chosen schedule must respect numerous time-related constraints. The main difficulties appeared in realistically modeling the battery level and in handling infeasible configurations due to inaccurate parameters. The resulting linear model has enabled a better understanding of the performance of the simulated annealing solver, and could also be adapted to different real-world scheduling problems.
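Stripped of the battery dynamics and time-window bookkeeping that the abstract identifies as the hard part, the core of such a model is a binary selection problem (a schematic form, not the thesis' full formulation):

$$\max \sum_{e} u_e x_e \quad \text{s.t.} \quad x_e + x_f \le 1 \ \ \text{for every conflicting pair } (e, f), \qquad \sum_{e} c_e x_e \le B, \qquad x_e \in \{0, 1\},$$

where $x_e$ selects event $e$ with utility $u_e$ and cost $c_e$, and $B$ is a resource budget.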
29

Agostini, Flavia Paiva. "Mapeamento de Parâmetros do Simulated Annealing Generalizado aplicado ao problema do Enovelamento de Proteínas." Laboratório Nacional de Computação Científica, 2009. https://tede.lncc.br/handle/tede/105.

Abstract:
As genome sequencing advances, the comprehension of protein structures becomes a crucial extension of this progress. In spite of numerous recent technological advances, experimental determination of protein tertiary structures is still very slow compared to the rate at which amino acid sequence data accumulate. That is what makes protein folding a central problem for the development of the post-genomic era. In this work we use an optimization method, the Generalized Simulated Annealing (GSA), which is based on Tsallis' generalized thermostatistics, to investigate the protein folding problem. Although GSA is a generic procedure, its efficiency depends not only on the appropriate choice of parameters, but also on topological characteristics of the energy hypersurface. By mapping all the GSA parameters, it is possible to reduce the number of parameter choices; this also allows an analysis of their effects on the algorithm's behavior. As an initial step, we apply GSA to known structures, such as polyalanines, against which the results can be compared. We then apply GSA to three peptides of ribosomal P proteins, which are of considerable importance for the comprehension of Chagas' heart disease. Each contains 13 amino acids, and they differ only in the third residue by a non-conservative mutation. As these peptides do not have experimentally resolved structures, we analyze the results obtained from GSA followed by Molecular Dynamics simulations. The validity of these results is studied so that, in the future, unknown structures can be determined by this technique with a higher degree of confidence.
30

Höghäll, Anton. "Tuning of Metaheuristics for Systems Biology Applications." Thesis, Linköping University, Department of Electrical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-58842.

Abstract:

In the field of systems biology the task of finding optimal model parameters is a common procedure. The optimization problems encountered are often multi-modal, i.e., with several local optima. In this thesis, a class of algorithms for multi-modal problems called metaheuristics are studied. A downside of metaheuristic algorithms is that they are dependent on algorithm settings in order to yield ideal performance.This thesis studies an approach to tune these algorithm settings using user constructed test functions which are faster to evaluate than an actual biological model. A statistical procedure is constructed in order to distinguish differences in performance between different configurations. Three optimization algorithms are examined closer, namely, scatter search, particle swarm optimization, and simulated annealing. However, the statistical procedure used can be applied to any algorithm that has configurable options.The results are inconclusive in the sense that performance advantages between configurations in the test functions are not necessarily transferred onto real biological models. However, of the algorithms studied a scatter search implementation was the clear top performer in general. The set of test functions used must be studied if any further work is to be made following this thesis.In the field of systems biology the task of finding optimal model parameters is a common procedure. The optimization problems encountered are often multi-modal, i.e., with several local optima. In this thesis, a class of algorithms for multi-modal problems called metaheuristics are studied. A downside of metaheuristic algorithms is that they are dependent on algorithm settings in order to yield ideal performance.

This thesis studies an approach to tune these algorithm settings using user constructed test functions which are faster to evaluate than an actual biological model. A statistical procedure is constructed in order to distinguish differences in performance between different configurations. Three optimization algorithms are examined closer, namely, scatter search, particle swarm optimization, and simulated annealing. However, the statistical procedure used can be applied to any algorithm that has configurable options.

The results are inconclusive in the sense that performance advantages between configurations in the test functions are not necessarily transferred onto real biological models. However, of the algorithms studied a scatter search implementation was the clear top performer in general. The set of test functions used must be studied if any further work is to be made following this thesis.
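The thesis's exact statistical procedure is not reproduced in the abstract; the following is a hedged sketch of the general idea — run two algorithm configurations repeatedly on a cheap test function and compare their results with a rank test. The test function, the toy simulated annealing, and all parameter values are illustrative assumptions.

    import math
    import random
    from scipy.stats import mannwhitneyu

    def rastrigin(x):
        # A common multi-modal test function (global minimum 0 at the origin)
        return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

    def run_sa(step_size, iters=2000):
        # Toy simulated annealing on the 2-D Rastrigin function; returns the best value found
        x = [random.uniform(-5, 5) for _ in range(2)]
        fx = rastrigin(x)
        best, t = fx, 1.0
        for _ in range(iters):
            y = [xi + random.gauss(0, step_size) for xi in x]
            fy = rastrigin(y)
            if fy < fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                best = min(best, fy)
            t *= 0.999
        return best

    # Thirty independent runs per configuration, compared with a rank test
    a = [run_sa(step_size=0.5) for _ in range(30)]
    b = [run_sa(step_size=2.0) for _ in range(30)]
    print(mannwhitneyu(a, b))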

APA, Harvard, Vancouver, ISO, and other styles
31

Johnson, Alan W. "Generalized hill climbing algorithms for discrete optimization problems." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-06062008-152638/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Cano, Pérez Carlos. "OPTIMIZACIÓN HEURISTICA MULTIOBJETIVO DE CIMENTACIONES DE SOPORTES EN MEDIANERIA." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/62208.

Full text
Abstract:
[EN] The main goal of this Thesis is the application of a suitable heuristic optimization algorithm to find optimal solutions to the structural problem of the foundations of two facing columns when one of them is located on a property line, so that its footing is constrained in one direction. Since various structural models can solve the problem, those most commonly used in current practice have been selected: isolated footings, a combined footing of constant width, an asymmetrical combined footing, and an eccentric footing with a strap beam. The existence of various analysis models for the problem has also been confirmed. This Thesis includes both the simplest analysis model, commonly used in professional practice, and a more complex model that we call the simplified elastic method. The first is the so-called "rigid foundation" model, which is based on the premise of a rigid behavior of the footing and therefore assumes a linear distribution of the pressure transmitted to the ground. The second does not limit the behavior of the foundation to the rigid case, modeling the ground with springs through the so-called subgrade reaction modulus. A further aim of the Thesis is to determine the structural model and the analysis model most suitable for solving the problem, and to determine the differences with the other commonly used models. Another objective is to evaluate different objective functions, so that the optimum of a solution can be assessed in economic and environmental terms and in terms of ease of construction. The literature review revealed numerous optimization studies in search of economic optima (cost of the solution, weight of the solution) and environmental optima (CO2 emissions, energy consumption, weight of the solution). From the standpoint of ease of construction, studies are fewer, covering parameters such as reinforcement uniformity, number of reinforcing bars, or number of different bar types. In this Thesis all these objective functions are evaluated, adding the water consumption of the constituent materials as an environmental criterion and, as constructability criteria, the perimeter/area ratio and the average bar diameter of the solution, which are defined later. Another objective of the Thesis has been to compare the results obtained by applying the Simulated Annealing algorithm with those that would be obtained in routine professional practice, following the usual structural calculation process. To this end, a worked example of a property-line footing included in the book "Cálculo de estructuras de cimentación", 4th edition, by Calavera, is developed, comparing the objective-function values of the solution given in the book with those achieved after applying the SA optimization algorithm and those obtained with a structural calculation program widely used in practice, Cypecad ver. 2014. Finally, the ultimate goal of the Thesis has been a parametric study that provides the optimal solutions for a wide range of configurations of the problem under study.
Cano Pérez, C. (2016). OPTIMIZACIÓN HEURISTICA MULTIOBJETIVO DE CIMENTACIONES DE SOPORTES EN MEDIANERIA [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62208
APA, Harvard, Vancouver, ISO, and other styles
33

Bakula, Casey J. "LOW-POWER PULSE-SHAPING FILTER DESIGN USING HARDWARE-SPECIFIC POWER MODELING AND OPTIMIZATION." University of Akron / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=akron1204743997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Montalva, Subirats José Miguel. "Optimización multiobjetivo de la distribución en planta de procesos industriales. Estudio de objetivos." Doctoral thesis, Universitat Politècnica de València, 2011. http://hdl.handle.net/10251/11147.

Full text
Abstract:
In the design process of industrial buildings, it is vitally important to know the optimal location of the different work areas that make up a manufacturing process, as well as of the auxiliary facilities and services. The facilities layout problem (FLP) involves all industrial activities and has become, since the 1960s, one of the classic combinatorial optimization problems, studied by a multitude of researchers internationally. Until the 1990s the problem was approached essentially as a single-objective one, the priority being to minimize the cost of transporting materials or people between the different production or service areas. To that end, various heuristic optimization techniques have been employed that seek to minimize computation time and ease the search for minima, even local ones, since the solution space is so large that it is difficult to guarantee finding a global minimum of the problem. However, cost is not the only criterion that should be considered in this kind of formulation, since there are other indicators of vital importance for guaranteeing that the proposed solution is adequate. With technological development and the appearance of more advanced hardware and software, multi-objective approaches to the facilities layout problem have flourished. The main objectives of this work include a state of the art of the indicators that have been used in the literature for plant layout, yielding a set of independent and sufficient indicators that can be used to obtain optimal layouts. It is also investigated whether any new indicator needs to be defined to cover the fundamental layout objectives established by different authors. Once the indicators are selected, a multi-objective optimization technique based on a simulated annealing algorithm is proposed. Finally, the results of the experiments carried out with the proposed multi-objective optimization technique are presented for a problem widely used in the literature, the 20-activity problem proposed by Armour and Buffa. Pareto fronts are obtained for different bi-criteria combinations, adding points that complete those existing to date, and the possibility of extending the optimization to three indicators is studied.
Montalva Subirats, JM. (2011). Optimización multiobjetivo de la distribución en planta de procesos industriales. Estudio de objetivos [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11147
APA, Harvard, Vancouver, ISO, and other styles
35

Tran, Nam. "THE EFFECT OF FIBER DEPTH ON THE ESTIMATION OF PERIPHERAL NERVE FIBER DIAMETER USING GROUP DELAY AND SIMULATED ANNEALING OPTIMIZATION." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1225.

Full text
Abstract:
Peripheral neuropathy refers to diseases of or injuries to the peripheral nerves in the human body. The damage can interfere with the vital connection between the central nervous system and other parts of the body, and can significantly reduce the quality of life of those affected. In the US, approximately between 15 and 20 million people over the age of 40 have some forms of peripheral neuropathy. The diagnosis of peripheral neuropathy often requires an invasive operation such as a biopsy because different forms of peripheral neuropathy can affect different types of nerve fibers. There are non-invasive methods available to diagnose peripheral neuropathy such as the nerve conduction velocity test (NCV). Although the NCV is useful to test the viability of an entire nerve trunk, it does not provide adequate information about the individual functioning nerve fibers in the nerve trunk to differentiate between the different forms of peripheral neuropathy. A novel technique was proposed to estimate the individual nerve fiber diameters using group delay and simulated annealing optimization. However, this technique assumed that the fiber depth is always constant at 1 mm and the fiber activation due to a stimulus is depth independent. This study aims to incorporate the effect of fiber depth into the fiber diameter estimation technique and to make the simulation more realistic, as well as to move a step closer to making this technique a viable diagnostic tool. From the simulation data, this study found that changing the assumption of the fiber depth significantly impacts the accuracy of the fiber diameter estimation. The results suggest that the accuracy of the fiber diameter estimation is dependent on whether the type of activation function is depth dependent or not, and whether the template fiber diameter distribution contains mostly large fibers or both small and large fibers, but not dependent on whether the fiber depth is constant or variable.
APA, Harvard, Vancouver, ISO, and other styles
36

Frithiof, Fredrik. "A framework for designing a modular muffler system by global optimization." Thesis, KTH, Optimeringslära och systemteori, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169650.

Full text
Abstract:
When creating a muffler to be installed on a noise-generating machine, the design parameters as well as the placements of the sound-attenuating elements have to be optimized in order to minimize the sound coming out of the assembly. This is exemplified in a small project task for students of a basic course in optimization at KTH. The task is however flawed, since the way in which the optimization problem is formed is overly simplistic and the algorithm used to solve the problem, fmincon, does not cope well with the mathematical complexity of the model, meaning it gets stuck in a local optimum that is not a global optimum. This thesis investigates how to solve both of these problems. The model is modified to combine several frequencies, weighting them according to the sensitivity of the human ear to different frequencies. By doing this, the objective is changed from maximizing the dynamic insertion loss DIL for a specific frequency to minimizing the total perceived sound level LA. The model is based on the modular design of the transfer matrix method (TMM) from four-pole theory. This divides the muffler into separate parts, with each sound-attenuating element being mathematically defined only by its T matrix. The element types to choose from are the expansion chamber, the quarter-wave resonator and the Helmholtz resonator. The global optimization methods to choose from are Global Search, MultiStart, Genetic Algorithm, Pattern Search and Simulated Annealing. By combining the different types of sound-attenuating elements in every way and solving each case with every global optimization method, the best combination to implement in the model is chosen. The choice is two quarter-wave resonators solved by MultiStart, which provides satisfactory results. Further analysis is done to ensure the robustness of the chosen implementation, and it does not reveal any significant flaws. The purpose of this thesis is fulfilled.
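As a hedged illustration of the four-pole chaining described above, the sketch below multiplies element transfer matrices and evaluates the standard transmission loss formula; the geometry and the normalized impedances are assumptions made for the example, not values from the thesis.

    import numpy as np

    def t_duct(k, length, Zc):
        # Four-pole (transfer) matrix of a uniform duct section with
        # characteristic impedance Zc (proportional to 1/area)
        kl = k * length
        return np.array([[np.cos(kl), 1j * Zc * np.sin(kl)],
                         [1j * np.sin(kl) / Zc, np.cos(kl)]])

    def transmission_loss(elements, Z0):
        # Chain the element matrices, then apply the standard TL formula
        # for equal inlet/outlet impedance Z0
        T = np.eye(2, dtype=complex)
        for Te in elements:
            T = T @ Te
        A, B, C, D = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
        return 20 * np.log10(abs(A + B / Z0 + C * Z0 + D) / 2)

    # Example: a 0.3 m expansion chamber with 4x the pipe area (normalized units)
    c, f = 343.0, 500.0                 # speed of sound [m/s], frequency [Hz]
    k = 2 * np.pi * f / c
    Z_pipe, Z_chamber = 1.0, 0.25
    tl = transmission_loss([t_duct(k, 0.3, Z_chamber)], Z_pipe)
    print(f"TL at {f} Hz: {tl:.1f} dB")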
APA, Harvard, Vancouver, ISO, and other styles
37

Sullivan, Kelly Ann. "A Convergence Analysis of Generalized Hill Climbing Algorithms." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/27027.

Full text
Abstract:
Generalized hill climbing (GHC) algorithms provide a unifying framework for describing several local search heuristics for discrete optimization problems, including simulated annealing and tabu search. A necessary and a sufficient convergence condition for GHC algorithms are presented. The convergence conditions presented in this dissertation are based upon a new iteration classification scheme for GHC algorithms. The convergence theory for particular formulations of GHC algorithms is presented and the implications discussed. Examples are provided to illustrate the relationship between the new convergence conditions and previously existing convergence conditions in the literature. The contributions of the necessary and the sufficient convergence conditions for GHC algorithms are discussed and future research endeavors are suggested.
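A minimal sketch of the GHC template may help: a neighbor is accepted whenever its cost increase stays below a (possibly random) threshold R(k), and particular choices of R(k) recover pure local search, threshold accepting, and simulated annealing. The function names and the schedules below are illustrative assumptions.

    import math
    import random

    def ghc(f, neighbor, x0, iters, R):
        # Generalized hill climbing: accept a neighbor y of x whenever
        # f(y) - f(x) <= R(k), where R(k) may be deterministic or random
        x, fx = x0, f(x0)
        best, fbest = x, fx
        for k in range(iters):
            y = neighbor(x)
            fy = f(y)
            if fy - fx <= R(k):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        return best, fbest

    # Illustrative special cases of the threshold R(k):
    pure_local = lambda k: 0.0                      # accept improvements only
    threshold_accepting = lambda k: 10.0 / (k + 1)  # deterministic, shrinking threshold
    simulated_annealing = lambda k: -(1.0 / (k + 1)) * math.log(random.random())
    # SA: accepting when dE <= -T_k * ln(U), U ~ Uniform(0,1), is equivalent
    # to the Metropolis rule with temperature T_k = 1 / (k + 1)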
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
38

Baltazar, Cervantes Juan Carlos. "Development of an automated methodology for calibration of simplified air-side HVAC system models and estimation of potential savings from retrofit/commissioning measures." Texas A&M University, 2006. http://hdl.handle.net/1969.1/5026.

Full text
Abstract:
This dissertation provides a methodology to determine the potential energy savings of buildings with limited information. The methodology is based upon the simplified energy analysis procedure of HVAC systems and the control of comfort conditions. Numerically, the algorithm is a tailored exhaustive search over all the independent variables that are commonly controlled for a specific type of HVAC system. The potential energy savings methodology has been applied to several buildings that had previously been retrofitted and/or commissioned. The savings determined for the Zachry building at Texas A&M after commissioning show close agreement with the calculated potential energy savings (about 85%); differences are mainly attributed to the use of simplified models. Because of the limited information about building characteristics and operational control, the potential energy savings method requires the determination of parameters that characterize the building's thermal performance, and thus a calibrated building model is needed. A general procedure has been developed to carry out automated calibration of building energy use simulations, and it has been tested successfully on simulations based on the simplified energy analysis procedure. The automated calibration minimizes the RMSE of the energy use over daily conditions. The minimization is carried out with a non-canonical optimization algorithm, simulated annealing, which mimics the statistical-thermodynamic behavior of the annealing process: starting at a specified temperature, the algorithm searches for steadier states in variable space while, heuristically, local minima are avoided through the Boltzmann distribution. The process is repeated at successively lower temperatures, determined by a specific schedule, until the global minimum is found. The methodology was applied to the most common air-handler units, producing excellent results for ideal cases and for samples modified with 1% white noise.
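A hedged sketch of the calibration step follows: a simplified daily energy model is fitted by minimizing the RMSE against measured data with a simulated annealing driver. Here SciPy's dual_annealing stands in for the author's tailored algorithm, and the model form, file names and parameter bounds are assumptions made for illustration.

    import numpy as np
    from scipy.optimize import dual_annealing

    # Hypothetical measured daily data (the file names are placeholders)
    measured = np.loadtxt("daily_energy_use.txt")
    t_out = np.loadtxt("daily_outdoor_temp.txt")

    def simulate(params):
        # Stand-in simplified model: base load plus heating and cooling
        # slopes around a balance-point temperature
        base, slope_h, slope_c, balance = params
        return (base
                + slope_h * np.maximum(balance - t_out, 0.0)
                + slope_c * np.maximum(t_out - balance, 0.0))

    def rmse(params):
        return np.sqrt(np.mean((simulate(params) - measured) ** 2))

    bounds = [(0, 500), (0, 50), (0, 50), (10, 25)]  # assumed parameter ranges
    result = dual_annealing(rmse, bounds, maxiter=500)
    print(result.x, result.fun)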
APA, Harvard, Vancouver, ISO, and other styles
39

Zapletal, Marek. "Implementace a testování vybraných optimalizačních metod pro úlohy odhadu parametrů simulačních modelů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2016. http://www.nusl.cz/ntk/nusl-254426.

Full text
Abstract:
This thesis deals with the design of appropriate optimization algorithms for the newly developed tool Mechlab's parameter estimation, which serves for parameter estimation of simulation models in Matlab/Simulink. The Levenberg-Marquardt algorithm was chosen from among the gradient methods, while a genetic algorithm and simulated annealing were chosen from the category of soft-computing techniques to be implemented. The chosen algorithms were tested on an artificial problem of a mechanical oscillator and also on real datasets from an electronic throttle. The proposed simulated annealing worked in both cases with sufficient results, but was nevertheless time-consuming. For the oscillator problem, all the algorithms provided accurate solutions, but in the case of the real dataset, Levenberg-Marquardt's usefulness was limited, while the genetic algorithm still provided excellent results.
APA, Harvard, Vancouver, ISO, and other styles
40

Coyle, Jesse Aaron. "Optimization of nuclear, radiological, biological, and chemical terrorism incidence models through the use of simulated annealing Monte Carlo and iterative methods." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43599.

Full text
Abstract:
A random-search optimization method based on an analogous process for the slow cooling of metals is explored and used to find the optimum solution for a number of regression models that analyze nuclear, radiological, biological, and chemical terrorism targets. A non-parametric simulation based on historical data is also explored. Simulated series of 30 years and a 30-year extrapolation of historical data are provided. The inclusion of independent variables in the regression analysis is based on existing work in the reviewed literature. CBRN terrorism data are collected from both the Monterey Institute's Weapons of Mass Destruction Terrorism Database and the START Global Terrorism Database. Building models similar to those found in the literature and running them against CBRN terrorism incidence data determines whether conventional terrorism indicator variables are also significant predictors of CBRN terrorism targets. The negative binomial model was determined to be the best regression model available for the data analysis. Two general types of models are developed: an economic development model and a political risk model. From the economic development model we find national GDP, GDP per capita, trade openness, and democracy to be significant indicators of CBRN terrorism targets. From the political risk model we find corrupt, stable, and democratic regimes to be more likely to experience a CBRN event. We do not find language/religious fractionalization to be a significant predictive variable, nor do we find ethnic tensions, involvement in external conflict, or a military government to have significant predictive value.
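The study's datasets are not reproduced here; as a hedged sketch of the model class it settles on, a negative binomial regression on synthetic country-level counts might look as follows. The covariate names and coefficients are illustrative assumptions, not the study's data.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    gdp_per_capita = rng.normal(0, 1, n)  # standardized, synthetic covariates
    democracy = rng.normal(0, 1, n)
    X = sm.add_constant(np.column_stack([gdp_per_capita, democracy]))

    # Synthetic overdispersed counts of incidents with mean mu
    mu = np.exp(0.2 + 0.5 * gdp_per_capita + 0.3 * democracy)
    y = rng.negative_binomial(n=2, p=2 / (2 + mu))

    # Negative binomial regression via a GLM
    model = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
    print(model.summary())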
APA, Harvard, Vancouver, ISO, and other styles
41

Burvall, Benjamin, and Johannes Olegård. "A Comparison of a Genetic Algorithm and Simulated Annealing Applied to a Traffic Light Control Problem : A Traffic Intersection Optimization Problem." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166453.

Full text
Abstract:
This work compares a Genetic Algorithm (GA) and Simulated Annealing (SA) applied to a variant of the Traffic Light Control Problem (TLCP). TLCP is about controlling the lights in one or more traffic intersections in order to optimize traffic flow, which is important for society to function properly. The idea is that to solve this problem quickly, as would be necessary in a real traffic situation, stochastic search algorithms like SA and GA should be used. GA and SA in particular are chosen because they are often used in previous work. A 4-way traffic intersection is simulated. GA and SA are used to find a schedule for the traffic lights such that, for a given collection of cars, the traffic flow is maximized. The goal is to study how traffic flows in the solutions produced by GA and SA as the problem size increases. The conclusion of this work is that SA seems generally to find better solutions than GA in small search spaces and that SA and GA are comparable in larger search spaces.
APA, Harvard, Vancouver, ISO, and other styles
42

Vaughan, Diane Elizabeth. "Simultaneous Generalized Hill Climbing Algorithms for Addressing Sets of Discrete Optimization Problems." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/28514.

Full text
Abstract:
Generalized hill climbing (GHC) algorithms provide a framework for using local search algorithms to address intractable discrete optimization problems. Many well-known local search algorithms can be formulated as GHC algorithms, including simulated annealing, threshold accepting, Monte Carlo search, and pure local search (among others). This dissertation develops a mathematical framework for simultaneously addressing a set of related discrete optimization problems using GHC algorithms. The resulting algorithms, termed simultaneous generalized hill climbing (SGHC) algorithms, can be applied to a wide variety of sets of related discrete optimization problems. The SGHC algorithm probabilistically moves between these discrete optimization problems according to a problem generation probability function. This dissertation establishes that the problem generation probability function is a stochastic process that satisfies the Markov property. Therefore, given a SGHC algorithm, movement between these discrete optimization problems can be modeled as a Markov chain. Sufficient conditions that guarantee that this Markov chain has a uniform stationary probability distribution are presented. Moreover, sufficient conditions are obtained that guarantee that a SGHC algorithm will visit the globally optimal solution over all the problems in a set of related discrete optimization problems. Computational results are presented with SGHC algorithms for a set of traveling salesman problems. For comparison purposes, GHC algorithms are also applied individually to each traveling salesman problem. These computational results suggest that optimal/near optimal solutions can often be reached more quickly using a SGHC algorithm.
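A minimal sketch of the SGHC idea, under the assumption that all problems share a common solution representation (e.g., tours over the same set of cities with different distance matrices): an ordinary GHC chain is interleaved with random jumps between problem instances, here drawn uniformly for simplicity, whereas the dissertation allows a general problem generation probability function.

    import random

    def sghc(problems, neighbor, cost, iters, switch_prob=0.1, R=lambda k: 0.0):
        # Simultaneous GHC: a single chain of candidate solutions, with
        # occasional probabilistic jumps between related problem instances,
        # so that movement between problems forms a Markov chain
        p = random.randrange(len(problems))  # index of the current problem
        x = problems[p]["start"]
        fx = cost(problems[p], x)
        for k in range(iters):
            if random.random() < switch_prob:        # problem generation step
                p = random.randrange(len(problems))  # uniform here for simplicity
                fx = cost(problems[p], x)            # re-evaluate x on the new problem
            y = neighbor(problems[p], x)
            fy = cost(problems[p], y)
            if fy - fx <= R(k):                      # GHC acceptance on the current problem
                x, fx = y, fy
        return p, x, fx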
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
43

Krumpe, Norman Joseph. "A COMPARISON OF SIMULATION OPTIMIZATION TECHNIQUES IN SOLVING SINGLE-OBJECTIVE, CONSTRAINED, DISCRETE VARIABLE PROBLEMS." Miami University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=miami1129749397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Malleypally, Vinaya. "Parallelizing Tabu Search Based Optimization Algorithm on GPUs." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7638.

Full text
Abstract:
There are many combinatorial optimization problems, such as the traveling salesman problem, the quadratic assignment problem, and flow shop scheduling, that are computationally intractable. Tabu search based simulated annealing is a stochastic search algorithm that is widely used to solve combinatorial optimization problems. Due to excessive run time, there is a strong demand for a parallel version that can be applied to any problem with minimal modifications. Existing advanced and/or parallel versions of tabu search algorithms are specific to the problem at hand, which leads to the drawback of optimization only for that particular problem. In this work, we propose a parallel version of tabu search based SA on the Graphics Processing Unit (GPU) platform. We propose two variants of the algorithm based on where the tabu list is stored (global vs. local). In the first version, the list is stored in the global shared memory such that all threads can access it. Multiple random walks in solution space are carried out, and each walk avoids the moves made in the rest of the walks thanks to its access to the global tabu list, at the expense of more time. In the second version, the list is stored at the block level and is shared only by that block's threads. Groups of random walks are performed in parallel, and a walk in a group avoids the moves made by the rest of the walks within that group through the shared local tabu list. This version is better than the first in terms of execution time; on the other hand, the first version finds the global optima more often. We present experimental results for six difficult optimization functions with known global optima. Compared to a CPU implementation with a similar workload, the proposed GPU versions are faster by approximately three orders of magnitude and often find the global optima.
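Setting GPU kernel details aside, a hedged CPU sketch of the sharing scheme follows: several tabu-restricted random walks consult either one global tabu set or one set per group. The bit-flip move, the toy objective and the tenure control are illustrative assumptions.

    import random

    def tabu_walk(f, x0, steps, tabu, tenure=50):
        # One random walk over bit strings that avoids moves recorded in the
        # (possibly shared) tabu set; tracks the best solution seen
        x, fx = x0, f(x0)
        best, fbest = x[:], fx
        for _ in range(steps):
            moves = [i for i in range(len(x)) if (tuple(x), i) not in tabu]
            if not moves:
                break
            i = random.choice(moves)
            tabu.add((tuple(x), i))
            if len(tabu) > tenure:
                tabu.pop()                      # crude tenure control
            x = x[:i] + [1 - x[i]] + x[i + 1:]  # flip one bit
            fx = f(x)
            if fx < fbest:
                best, fbest = x[:], fx
        return best, fbest

    f = lambda x: sum(x)   # toy objective: minimize the number of ones
    global_tabu = set()    # global variant: all walks share one set
    results = [tabu_walk(f, [1] * 16, 200, global_tabu) for _ in range(4)]
    print(min(results, key=lambda r: r[1]))  # local variant: one set per group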
APA, Harvard, Vancouver, ISO, and other styles
45

Carrillo, Oscar Javier Begambre. "Algoritmo híbrido para avaliação da integridade estrutural: uma abordagem heurística." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-09082007-142229/.

Full text
Abstract:
In this study, a new auto-configured Particle Swarm Optimization - Simplex (PSOS) algorithm for damage detection is proposed. The formulation of the objective function for the minimization problem is based on the frequency response functions (FRFs) and the modal parameters of the system. A novel strategy for the control of the Particle Swarm Optimization (PSO) parameters based on the Nelder-Mead algorithm (Simplex method) is presented; consequently, the convergence of the PSOS becomes independent of the heuristic constants, and its stability and accuracy are enhanced. The formulated hybrid method performs better on different benchmark functions than Simulated Annealing (SA), the Genetic Algorithm (GA) and the basic PSO. Several damage identification problems, taking into consideration the effects of noisy and incomplete data, were studied; in these cases, the damage location and extent were determined successfully. Finally, using the PSOS, a non-linear oscillator (Duffing oscillator) was identified with good results.
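The Nelder-Mead control of the PSO coefficients is the thesis's contribution and is not reproduced here; below is only a minimal sketch of the underlying PSO update that the hybrid builds on, with conventional default coefficients assumed for illustration.

    import random

    def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
        # Standard particle swarm: each particle is pulled toward its own
        # best position (pbest) and the swarm's best position (gbest)
        xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]
        pf = [f(x) for x in xs]
        g = min(range(n_particles), key=lambda i: pf[i])
        gbest, gf = pbest[g][:], pf[g]
        for _ in range(iters):
            for i, x in enumerate(xs):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vs[i][d] = (w * vs[i][d]
                                + c1 * r1 * (pbest[i][d] - x[d])
                                + c2 * r2 * (gbest[d] - x[d]))
                    x[d] += vs[i][d]
                fx = f(x)
                if fx < pf[i]:
                    pbest[i], pf[i] = x[:], fx
                    if fx < gf:
                        gbest, gf = x[:], fx
        return gbest, gf

    # Example: minimize a simple sphere function in 3 dimensions
    print(pso(lambda x: sum(xi * xi for xi in x), dim=3))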
APA, Harvard, Vancouver, ISO, and other styles
46

Henderson, Darrall. "Assessing the Finite-Time Performance of Local Search Algorithms." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/26926.

Full text
Abstract:
Identifying a globally optimal solution for an intractable discrete optimization problem is often cost prohibitive. Therefore, solutions that are within a predetermined threshold are often acceptable in practice. This dissertation introduces the concept of B-acceptable solutions where B is a predetermined threshold for the objective function value. It is difficult to assess a priori the effectiveness of local search algorithms, which makes the process of choosing parameters to improve their performance difficult. This dissertation introduces the B-acceptable solution probability in terms of B-acceptable solutions as a finite-time performance measure for local search algorithms. The B-acceptable solution probability reflects how effectively an algorithm has performed to date and how effectively an algorithm can be expected to perform in the future. The B-acceptable solution probability is also used to obtain necessary asymptotic convergence (with probability one) conditions. Upper and lower bounds for the B-acceptable solution probability are presented. These expressions assume particularly simple forms when applied to specific local search strategies such as Monte Carlo search and threshold accepting. Moreover, these expressions provide guidelines on how to manage the execution of local search algorithm runs. Computational experiments are reported to estimate the probability of reaching a B-acceptable solution for a fixed number of iterations. Logistic regression is applied as a tool to estimate the probability of reaching a B-acceptable solution for values of B close to the objective function value of a globally optimal solution as well as to estimate this objective function value. Computational experiments are reported with logistic regression for pure local search, simulated annealing and threshold accepting applied to instances of the TSP with known optimal solutions.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
47

Vigeh, Arya. "Investigation of a Simulated Annealing Cooling Schedule used to Optimize the Estimation of the Fiber Diameter Distribution in a Peripheral Nerve Trunk." DigitalCommons@CalPoly, 2011. https://digitalcommons.calpoly.edu/theses/497.

Full text
Abstract:
In previous studies it was determined that the fiber diameter distribution in a peripheral nerve could be estimated by a simulation technique known as group delay. These results could be further improved using a combinatorial optimization algorithm called simulated annealing. This paper explores the structure and behavior of simulated annealing for the application of optimizing the group delay estimated fiber diameter distribution. Specifically, a set of parameters known as the cooling schedule is investigated to determine its effectiveness in the optimization process. Simulated annealing is a technique for finding the global minimum (or maximum) of a cost function which may have many local minima. The set of parameters which comprise the cooling schedule dictate the rate at which simulated annealing reaches its final solution. Converging too quickly can result in sub-optimal solutions while taking too long to determine a solution can result in an unnecessarily large computational effort that would be impractical in a real-world setting. The goal of this study is to minimize the computational effort of simulated annealing without sacrificing its effectiveness at minimizing the cost function. The cost function for this application is an error value computed as the difference in the maximum compound evoked potentials between an empirically-determined template distribution of fiber diameters and an optimized set of fiber diameters. The resulting information will be useful when developing the group delay estimation and subsequent simulated annealing optimization in an experimental laboratory setting.
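For reference, here is a minimal sketch of one common cooling schedule family — geometric cooling — and its effect on the acceptance probability of a fixed uphill move; the initial temperature, decay factor and cost increment are illustrative assumptions.

    import math

    def geometric_schedule(t0=100.0, alpha=0.95, t_min=1e-3):
        # Classic geometric cooling: T_{k+1} = alpha * T_k until T_min
        t = t0
        while t > t_min:
            yield t
            t *= alpha

    # The Metropolis acceptance probability for a fixed uphill move
    # shrinks as the temperature falls
    delta_e = 5.0
    for k, t in enumerate(geometric_schedule()):
        if k % 50 == 0:
            print(f"step {k:4d}  T={t:8.3f}  P(accept)={math.exp(-delta_e / t):.3f}")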
APA, Harvard, Vancouver, ISO, and other styles
48

Weber, Tiago Oliveira. "Síntese de CIs analógicos em nível de circuito e sistema utilizando métodos modernos de otimização." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/3/3140/tde-22072016-151343/.

Full text
Abstract:
Analog integrated circuits are very important in modern electronic systems, performing tasks such as analog-to-digital conversion, digital-to-analog conversion, radio-frequency communication, filtering and others. The design of this type of circuit requires attending to several performance specifications as well as a time specification, in order to avoid compromising the overall design time of mixed-signal projects. Several tools have been proposed in the literature to aid the designer; however, the traditional design flow for analog integrated circuits is usually accomplished using only hand calculations and adjustments through the use of electrical simulators. In this work, techniques for analog design synthesis at the circuit and system levels are studied. An optimization algorithm is proposed based on Simulated Annealing/Simulated Quenching with a mechanism for using the crossover operator considering multiobjective information. A hybrid algorithm combining the proposed algorithm with Particle Swarm Optimization was created to properly explore the Pareto front. The characteristics of the algorithms are designed for the synthesis of analog integrated circuits; however, tests indicate they also have excellent performance in comparison with many other derivative-free algorithms when applied to certain mathematical problems. The generality of modern optimization methods allows variations of the same techniques to be used at the circuit level (sizing and biasing of circuit components) and at the system level (translation of system specifications into block specifications). Therefore, techniques for the creation of a circuit-level and system-level tool are developed. An interface using spice-like text files as inputs was developed to allow the designer to use the tool for a wide range of electronic circuits. In order to validate the proposed algorithms and the circuit-level tool, circuits were synthesized in 0.35 µm, 180 nm and 130 nm technologies. The synthesized circuits included Miller amplifiers, complementary folded cascode amplifiers, low-noise amplifiers operating at 2.45 GHz and voltage reference circuits. Comparisons using the non-parametric Mann-Whitney-Wilcoxon test showed that the proposed algorithm has better performance than the compared algorithms for the studied cases. At the system level, syntheses of Gm-C filters were performed to validate the tool.
APA, Harvard, Vancouver, ISO, and other styles
49

Ugail, Hassan, M. I. G. Bloor, and M. J. Wilson. "Implementing automatic design optimisation in an interactive environment." American Institute of Aeronautics and Astronautics, 2000. http://hdl.handle.net/10454/2942.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

LIN, KE-RONG, and 林克容. "Track optimization using simulated evolution and simulated annealing." Thesis, 1989. http://ndltd.ncl.edu.tw/handle/80624787734628918655.

Full text
APA, Harvard, Vancouver, ISO, and other styles