
Dissertations / Theses on the topic 'Monte Carlo simulation Optimization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Monte Carlo simulation Optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Bryskhe, Henrik. "Optimization of Monte Carlo simulations." Thesis, Uppsala University, Department of Information Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-121843.

Full text
Abstract:
This thesis considers several different techniques for optimizing Monte Carlo simulations. The Monte Carlo system used is Penelope, but most of the techniques are applicable to other systems. The two major techniques are the use of the graphics card to do geometry calculations, and ray tracing. Using the graphics card provides a very efficient way to do fast ray-triangle intersections. Ray tracing provides an approximation of Monte Carlo simulation but is much faster to perform. A program was also written in order to have a platform for Monte Carlo simulations where the different techniques were implemented and tested. The program also provides an overview of the simulation setup, where the user can easily verify that everything has been set up correctly. The thesis also covers an attempt to rewrite Penelope from FORTRAN to C. The new version is significantly faster and can be used on more systems. A distribution package was also added to the new Penelope version. Since Monte Carlo simulations are easily distributed, running this type of simulation on ten computers yields a tenfold speedup. Combining the different techniques in the platform provides an easy-to-use and at the same time efficient way of performing Monte Carlo simulations.
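As an aside (not from the thesis itself), the ray-triangle intersection test that such GPU-accelerated geometry handling typically relies on can be sketched on the CPU with the Möller-Trumbore algorithm; the function and the example triangle below are purely illustrative.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: return the ray parameter t of the hit, or None."""
    edge1 = v1 - v0
    edge2 = v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:                 # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det
    return t if t > eps else None

# Example: a ray along +z hitting a unit triangle lying in the z = 1 plane
t = ray_triangle_intersect(np.array([0.1, 0.1, 0.0]), np.array([0.0, 0.0, 1.0]),
                           np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]),
                           np.array([0.0, 1.0, 1.0]))
print(t)  # 1.0
```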
APA, Harvard, Vancouver, ISO, and other styles
2

Armour, Jessica D. "On the Gap-Tooth direct simulation Monte Carlo method." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/72863.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, February 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. [73]-74).
This thesis develops and evaluates Gap-tooth DSMC (GT-DSMC), a direct Monte Carlo simulation procedure for dilute gases combined with the Gap-tooth method of Gear, Li, and Kevrekidis. The latter was proposed as a means of reducing the computational cost of microscopic (e.g. molecular) simulation methods using simulation particles only in small regions of space (teeth) surrounded by (ideally) large gaps. This scheme requires an algorithm for transporting particles between teeth. Such an algorithm can be readily developed and implemented within direct Monte Carlo simulations of dilute gases due to the non-interacting nature of the particle-simulators. The present work develops and evaluates particle treatment at the boundaries associated with diffuse-wall boundary conditions and investigates the drawbacks associated with GT-DSMC implementations which detract from the theoretically large computational benefit associated with this algorithm (the cost reduction is linear in the gap-to-tooth ratio). Particular attention is paid to the additional numerical error introduced by the gap-tooth algorithm as well as the additional statistical uncertainty introduced by the smaller number of particles. We find the numerical error introduced by transporting particles to adjacent teeth to be considerable. Moreover, we find that due to the reduced number of particles in the simulation domain, correlations persist longer, and thus statistical uncertainties are larger than DSMC for the same number of particles per cell. This considerably reduces the computational benefit associated with the GT-DSMC algorithm. We conclude that the GT-DSMC method requires more development, particularly in the area of error and uncertainty reduction, before it can be used as an effective simulation method.
by Jessica D. Armour. S.M.
APA, Harvard, Vancouver, ISO, and other styles
3

Homem-de-Mello, Tito. "Simulation-based methods for stochastic optimization." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/24846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Yunsong. "Optimization of Monte Carlo Neutron Transport Simulations with Emerging Architectures." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX090/document.

Full text
Abstract:
Monte Carlo (MC) neutron transport simulations are widely used in the nuclear community to perform reference calculations with minimal approximations. The conventional MC method converges slowly, according to the law of large numbers, which makes simulations computationally expensive. Cross section computation has been identified as the major performance bottleneck for MC neutron codes. Typically, cross section data are precalculated and stored in memory before simulation for each nuclide; during the simulation, only table lookups are required to retrieve data from memory, and the compute cost is trivial. We implemented and optimized a large collection of lookup algorithms in order to accelerate this data retrieval process. Results show that a significant speedup can be achieved over the conventional binary search on both CPU and MIC in unit tests, as opposed to real case simulations. Using vectorization instructions has proved effective on the many-core architecture thanks to its 512-bit vector units; on CPU this improvement is limited by a smaller register size. Further optimizations such as memory reduction turn out to be very important, since they largely improve computing performance. As can be expected, all energy-lookup proposals are totally memory-bound: the computing units do little but wait for data. In other words, the computing capability of modern architectures is largely wasted. Another major issue of energy lookup is its huge memory requirement: cross section data at one temperature for the up to 400 nuclides involved in a real case simulation require nearly 1 GB of memory, which makes simulations with several thousand temperatures infeasible on current computer systems. In order to solve the problems related to energy lookup, we investigated another on-the-fly cross section proposal called reconstruction. The basic idea behind reconstruction is to perform the Doppler broadening (a convolution integral) of cross sections on the fly, each time a cross section is needed, with a formulation close to standard neutron cross section libraries and based on the same amount of data. Reconstruction converts the problem from memory-bound to compute-bound: only a few variables per resonance are required instead of the conventional pointwise table covering the entire resolved resonance region. Though the memory footprint is largely reduced, this method is very time-consuming. After a series of optimizations, results show that the reconstruction kernel benefits well from vectorization and can achieve 1806 GFLOPS (single precision) on a Knights Landing 7250, which represents 67% of its effective peak performance. Even though the optimization efforts on reconstruction significantly improve the FLOP usage, this on-the-fly calculation is still slower than the conventional lookup method. In this situation, we began porting the code to GPGPU to exploit potentially higher performance as well as higher FLOP usage. In addition, a further evaluation has been planned to compare lookup and reconstruction in terms of power consumption: with the help of hardware and software energy measurement support, we expect to find a compromise between performance and energy consumption in order to face the "power wall" challenge that accompanies hardware evolution.
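To make the baseline concrete, here is a minimal, hedged sketch of the kind of binary-search energy lookup with linear interpolation that the thesis takes as its starting point; the grid size, cross section values and NumPy vectorisation are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_neutrons = 50_000, 1_000_000

# Hypothetical pointwise data for one nuclide at one temperature.
energy_grid = np.sort(rng.uniform(1e-5, 2e7, size=n_grid))   # eV (illustrative)
xs_table = rng.uniform(0.1, 100.0, size=n_grid)               # barns (illustrative)

def lookup(energies):
    """Vectorised binary search (np.searchsorted) plus linear interpolation."""
    idx = np.searchsorted(energy_grid, energies).clip(1, n_grid - 1)
    e0, e1 = energy_grid[idx - 1], energy_grid[idx]
    x0, x1 = xs_table[idx - 1], xs_table[idx]
    return x0 + (x1 - x0) * (energies - e0) / (e1 - e0)

# Random incident energies stand in for the neutrons of a batch; the scattered
# memory accesses of this lookup are exactly the memory-bound pattern discussed above.
incident = rng.uniform(1e-3, 1e7, size=n_neutrons)
sigma = lookup(incident)
print(sigma[:5])
```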
APA, Harvard, Vancouver, ISO, and other styles
5

Bolin, Christopher E. (Christopher Eric). "Iterative uncertainty reduction via Monte Carlo simulation : a streamlined life cycle assessment case study." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82189.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. June 2013. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 97-103).
Life cycle assessment (LCA) is one methodology for assessing a product's impact on the environment. LCA has grown in popularity recently as consumers and governments request more information concerning the environmental consequences of goods and services. In many cases, however, carrying out a complete LCA is prohibitively expensive, demanding large investments of time and money to collect and analyze data. This thesis aims to address the complexity of LCA by highlighting important product parameters, thereby guiding data collection. LCA streamlining is the process of reducing the necessary effort to produce acceptable analyses. Many methods of LCA streamlining are unfortunately vague and rely on engineering intuition. While they can be effective, the reduction in effort is often accompanied by a commensurate increase in the uncertainty of the results. One nascent streamlining method aims to reduce uncertainty by generating random simulations of the target product's environmental impact. In these random Monte Carlo simulations the product's attributes are varied, producing a range of impacts. Parameters that contribute significantly to the uncertainty of the overall impact are targeted for resolution. To resolve a parameter, data must be collected to more precisely define its value. This research project performs a streamlined LCA case study in collaboration with a diesel engine manufacturer. A specific engine is selected and a complex model of its production and manufacturing energy use is created. The model, consisting of 184 parameters, is then sampled randomly to determine key parameters for resolution. Parameters are resolved progressively and the resulting decrease in uncertainty is examined. The primary metric for evaluating model uncertainty is False Error Rate (FSR), defined here as the confusion between two engines that differ in energy use by 10%. Initially the FSR is 21%, dropping to 6.1% after 20 parameters are resolved, and stabilizing at 5.8% after 39 parameters are resolved. The case study illustrates that, if properly planned, a streamlined LCA can be performed that achieves desired resolution while vastly reducing the data collection burden.
by Christopher E. Bolin. S.M.
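A minimal sketch of the screening idea described above, assuming a made-up linear impact model and hypothetical parameter ranges (the actual case study uses a 184-parameter engine model): sample the uncertain parameters, propagate them through the impact model, and rank them by how much of the output variance they explain.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params = 10_000, 20

# Uncertain parameters drawn from independent (hypothetical) uniform ranges.
low = rng.uniform(0.5, 1.0, size=n_params)
high = low + rng.uniform(0.5, 2.0, size=n_params)
x = rng.uniform(low, high, size=(n_samples, n_params))

# Hypothetical impact model: a weighted sum of parameters, standing in for the LCA model.
weights = rng.uniform(0.0, 5.0, size=n_params)
impact = x @ weights

# Rank parameters by squared correlation with the output (share of variance explained
# for a linear model); the top-ranked ones would be targeted for data collection.
corr = np.array([np.corrcoef(x[:, j], impact)[0, 1] for j in range(n_params)])
ranking = np.argsort(corr ** 2)[::-1]
print("parameters to resolve first:", ranking[:5])
```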
APA, Harvard, Vancouver, ISO, and other styles
6

Greberg, Felix. "Debt Portfolio Optimization at the Swedish National Debt Office: : A Monte Carlo Simulation Model." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275679.

Full text
Abstract:
It can be difficult for a sovereign debt manager to see the implications of a specific debt management strategy on expected costs and risk; a simulation model can therefore be a valuable tool. This study investigates how future economic data such as yield curves, foreign exchange rates and CPI can be simulated, and how a portfolio optimization model can be used for a sovereign debt office that mainly uses financial derivatives to alter its strategy. The programming language R is used to develop bespoke software for the Swedish National Debt Office; however, the method used can be useful for any debt manager. The model performs well when calculating the risk implications of different strategies, but debt managers who use this software to find optimal strategies must understand the model's limitations in calculating expected costs. The part of the code that simulates economic data is developed as a separate module and can thus be used for other studies; key parts of the code are available in the appendix of this paper. Foreign currency exposure is the factor that had the largest effect on both expected cost and risk; moreover, the model does not find any cost advantage of issuing inflation-protected debt. The opinions expressed in this thesis are the sole responsibility of the author and should not be interpreted as reflecting the views of the Swedish National Debt Office.
APA, Harvard, Vancouver, ISO, and other styles
7

Dugan, Nazim. "Structural Properties Of Homonuclear And Heteronuclear Atomic Clusters: Monte Carlo Simulation Study." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607475/index.pdf.

Full text
Abstract:
In this thesis study, a new method for finding the optimum geometries of atomic nanoparticles has been developed by modifying the well-known diffusion Monte Carlo method, which is used for electronic structure calculations of quantum mechanical systems. This method has been applied to homonuclear and heteronuclear atomic clusters with the aim of both testing the method and studying various properties of atomic clusters, such as the radial distribution of atoms and coordination numbers. The obtained results have been compared with those obtained by other methods, such as classical Monte Carlo and molecular dynamics. It has been realized that this new method usually finds local minima when it is applied alone, and some techniques to escape from local minima on the potential energy surface have been developed. It has been concluded that these techniques of escaping from local minima are key factors in the global optimization procedure.
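The abstract does not give the algorithmic details of the modified diffusion Monte Carlo method, but the generic local-minima problem it mentions can be illustrated with a much simpler random-restart strategy on a small Lennard-Jones cluster; everything below (cluster size, restart count, choice of optimizer) is an assumption for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_coords):
    """Total Lennard-Jones energy (reduced units) of a cluster of point atoms."""
    pos = flat_coords.reshape(-1, 3)
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(pos), k=1)        # each pair counted once
    r = r[iu]
    return float(np.sum(4.0 * (r ** -12 - r ** -6)))

def optimize_cluster(n_atoms=7, restarts=20, seed=0):
    """Crude escape-from-local-minima strategy: repeated randomized restarts,
    each followed by a local minimization, keeping the best result found."""
    rng = np.random.default_rng(seed)
    best_e, best_x = np.inf, None
    for _ in range(restarts):
        x0 = rng.uniform(-1.5, 1.5, size=3 * n_atoms)
        res = minimize(lj_energy, x0, method="L-BFGS-B")
        if res.fun < best_e:
            best_e, best_x = res.fun, res.x
    return best_e, best_x.reshape(-1, 3)

energy, geometry = optimize_cluster()
print(f"best LJ7 energy found: {energy:.3f} (the reported global minimum is about -16.5)")
```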
APA, Harvard, Vancouver, ISO, and other styles
8

Yao, Min. "Computed radiography system modeling, simulation and optimization." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0128/document.

Full text
Abstract:
For over a century, film-based radiography has been used as a nondestructive testing (NDT) technique for industrial inspections. With the advent of digital techniques in the medical domain, the NDT community is also considering alternative digital techniques. Computed Radiography (CR) is a cost-efficient and easy-to-implement replacement technique because it uses equipment very similar to film radiography. This technology uses flexible and reusable imaging plates (IP) as a detector to generate a latent image during x-ray exposure. With an optical scanning system, the latent image can be read out and digitized, resulting in a direct digital image. 
CR is widely used in the medical field since it provides good performance at low energies. For industrial inspection, CR application is limited by its poor response to high energy radiation and the presence of scattering phenomena. To completely replace film radiography with such a system, its performance still needs to be improved, either by finding more appropriate IPs or by optimizing operating conditions. Guidelines have been provided in international standards to ensure good image quality from CR systems, and metallic screens are recommended when high energy sources are used. However, the type and thickness of such a screen are not clearly defined and a large range of possible configurations exists. Simulation is a very useful tool to predict experimental outcomes and determine the optimal operating conditions. Monte Carlo (MC) methods are widely accepted as the most accurate way to simulate radiation transport problems. They can give insight into the physical phenomena, but due to their random nature a large amount of computational time is required, especially for simulations involving complex geometries. Deterministic methods, on the other hand, can easily handle complex geometry and are quite efficient. However, the estimation of scattering effects is more difficult with deterministic methods. In this thesis work, we started with a Monte Carlo simulation study in order to investigate the physical phenomena involved in the IP and in metallic screens at high energies. In particular, we studied separately the behavior of X-ray photons and electrons. Some experimental comparisons have been carried out at the European Synchrotron Radiation Facility. We then proposed a hybrid simulation approach, combining the use of deterministic and Monte Carlo codes, for simulating the imaging of complex-shaped objects. This approach takes into account the degradation introduced by X-ray scattering and fluorescence inside the IP, as well as optical photon scattering during the readout process. The results of different simulation configurations have been compared.
APA, Harvard, Vancouver, ISO, and other styles
9

Giani, Monardo. "A cost-based optimization of a fiberboard pressing plant using Monte-Carlo simulation (a reliability program)." Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/30417/1/Monardo_Giani_Thesis.pdf.

Full text
Abstract:
In this research the reliability and availability of a fiberboard pressing plant is assessed and a cost-based optimization of the system using the Monte Carlo simulation method is performed. The woodchip and pulp or engineered wood industry in Australia and around the world is a lucrative industry. One such industry is the hardboard industry. The pressing system is the main system, as it converts the wet pulp to fiberboard. The assessment identified that the pressing system has the highest downtime throughout the plant and that it represents the bottleneck in the process. A survey in the late nineties revealed there are over one thousand plants around the world, with the pressing system being a common system among these plants. No work has been done to assess or estimate the reliability of such a pressing system; therefore this assessment can be used for assessing any plant of this type.
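As a hedged illustration of the kind of Monte Carlo availability estimate such an assessment rests on (the failure and repair rates below are invented, not taken from the thesis): alternate exponential up-times and repair times over a mission horizon and average the fraction of time the unit spends operating.

```python
import numpy as np

rng = np.random.default_rng(0)

mtbf_h, mttr_h = 400.0, 24.0        # assumed mean time between failures / to repair (hours)
horizon_h, n_histories = 8760.0, 20_000   # one year of operation, number of Monte Carlo histories

def one_history():
    t, uptime = 0.0, 0.0
    while t < horizon_h:
        up = rng.exponential(mtbf_h)            # time until the next failure
        uptime += min(up, horizon_h - t)
        t += up + rng.exponential(mttr_h)       # add the repair duration
    return uptime / horizon_h

availability = np.mean([one_history() for _ in range(n_histories)])
print(f"estimated availability ~ {availability:.3f}")  # steady-state analytic value: MTBF/(MTBF+MTTR) ~ 0.943
```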
APA, Harvard, Vancouver, ISO, and other styles
10

Bergman, Alanah Mary. "Monte Carlo simulation of x-ray dose distributions for direct aperture optimization of intensity modulated treatment fields." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/30720.

Full text
Abstract:
This thesis investigates methods of reducing radiation dose calculation errors as applied to a specialized x-ray therapy called intensity modulated radiation therapy (IMRT). There are three major areas of investigation. First, limits of the popular 2D pencil beam kernel (PBK) dose calculation algorithm are explored. The ability to resolve high dose gradients is partly related to the shape of the PBK. Improvements to the spatial resolution can be achieved by modifying the dose kernel shapes already present in the clinical treatment planning system. Optimization of the PBK shape based on measured-to-calculated test pattern dose comparisons reduces the impact of some limitations of this algorithm. However, other limitations remain (e.g. assuming spatial invariance, no modeling of extra-focal radiation, and no modeling of lateral electron transport). These limitations directed this thesis towards the second major investigation: Monte Carlo (MC) simulation for IMRT. MC is considered to be the "gold standard" for radiation dose calculation accuracy. This investigation incorporates MC-calculated beamlets of dose deposition into a direct aperture optimization (DAO) algorithm for IMRT inverse planning (MC-DAO). The goal is to show that accurate tissue inhomogeneity information and lateral electronic transport information, combined with DAO, will improve the quality/accuracy of the patient treatment plan. MC simulation generates accurate beamlet dose distributions in traditionally difficult-to-calculate regions (e.g. air-tissue interfaces or small (≤ 5 cm²) x-ray fields). Combining DAO with MC beamlets reduces the required number of radiation units delivered by the linear accelerator by ~30-50%. The MC method is criticized for having long simulation times (hours). This can be addressed with distributed computing methods and data filtering ('denoising'). The third major investigation describes a practical implementation of the 3D Savitzky-Golay digital filter for MC dose 'denoising'. This thesis concludes that MC-based DAO for IMRT inverse planning is clinically feasible and offers accurate modeling of particle transport and dose deposition in difficult environments where lateral electronic disequilibrium exists.
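For the 'denoising' step, a rough sketch of separable Savitzky-Golay smoothing of a 3D dose grid is shown below; the toy dose model, noise level and axis-by-axis filtering are assumptions and need not match the thesis's actual 3D implementation.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

# Toy 3D dose distribution with ~5% multiplicative statistical noise (stand-in for MC output).
z, y, x = np.mgrid[0:40, 0:40, 0:40]
clean = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / 150.0) * np.exp(-z / 25.0)
noisy = clean * (1.0 + 0.05 * rng.standard_normal(clean.shape))

# Approximate a 3D Savitzky-Golay filter by applying the 1D filter along each axis in turn.
smoothed = noisy
for axis in range(3):
    smoothed = savgol_filter(smoothed, window_length=7, polyorder=2, axis=axis)

rms_before = np.sqrt(np.mean((noisy - clean) ** 2))
rms_after = np.sqrt(np.mean((smoothed - clean) ** 2))
print(f"RMS error: {rms_before:.4f} -> {rms_after:.4f}")
```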
APA, Harvard, Vancouver, ISO, and other styles
11

Kim, Yoon Hyung. "Three Essays on Application of Optimization Modeling and Monte Carlo Simulation to Consumer Demand and Carbon Sequestration." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1275275175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Hatton, Marc. "Requirements specification for the optimisation function of an electric utility's energy flow simulator." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96956.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2015.
Efficient and reliable energy generation capability is vital to any country's economic growth. Many strategic, tactical and operational decisions take place along the energy supply chain. Shortcomings in South Africa's electricity production industry have led to the development of an energy flow simulator. The energy flow simulator is claimed to incorporate all significant factors involved in the energy flow process from primary energy to end-use consumption. The energy flow simulator thus provides a decision support system for electric utility planners. The original aim of this study was to develop a global optimisation model and integrate it into the existing energy flow simulator. After gaining an understanding of the architecture of the energy flow simulator and scrutinising a large number of variables, it was concluded that global optimisation was infeasible. The energy flow simulator is made up of four modules and is operated on a module-by-module basis, with inputs and outputs flowing between modules. One of the modules, namely the primary energy module, lends itself well to optimisation. The primary energy module simulates coal stockpile levels through Monte Carlo simulation. Classic inventory management policies were adapted to fit the structure of the primary energy module, which is treated as a black box. The coal stockpile management policies that are introduced provide a prescriptive means to deal with the stochastic nature of the coal stockpiles. As the planning horizon continuously changes and the entire energy flow simulator has to be re-run, an efficient algorithm is required to optimise stockpile management policies. Optimisation is achieved through the rapidly converging cross-entropy method. By integrating the simulation and optimisation models, a prescriptive capability is added to the primary energy module. Furthermore, this study shows that coal stockpile management policies can be improved. An integrated solution is developed by nesting the primary energy module within the optimisation model. Scalability is incorporated into the optimisation model through a coding approach that automatically adjusts to an ever-changing planning horizon as well as the commissioning and decommissioning of power stations. As this study is the first of several research projects to come, it paves the way for future research on the energy flow simulator by proposing future areas of investigation.
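A minimal sketch of the cross-entropy idea applied to a single stockpile-policy parameter is given below; the cost simulator is a stand-in for the primary energy module and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_cost(reorder_level, n_days=365):
    """Hypothetical black-box cost: holding cost plus stock-out and ordering penalties."""
    demand = rng.normal(100.0, 25.0, size=n_days).clip(min=0.0)
    stock, cost = reorder_level, 0.0
    for d in demand:
        stock -= d
        if stock < 0.0:
            cost += 50.0 * (-stock)          # stock-out penalty
            stock = 0.0
        if stock < reorder_level:
            stock += 120.0                   # fixed daily replenishment when below the level
            cost += 200.0                    # ordering/transport cost
        cost += 0.5 * stock                  # holding cost
    return cost

# Cross-entropy loop: sample candidate policies, keep the elite, refit the sampling distribution.
mu, sigma = 500.0, 300.0
for _ in range(20):
    candidates = rng.normal(mu, sigma, size=60).clip(min=0.0)
    costs = np.array([simulated_cost(c) for c in candidates])
    elite = candidates[np.argsort(costs)[:10]]
    mu, sigma = elite.mean(), elite.std() + 1e-3
print(f"optimized reorder level ~ {mu:.0f}")
```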
APA, Harvard, Vancouver, ISO, and other styles
13

Poignant, Floriane. "Physical, chemical and biological modelling for gold nanoparticle-enhanced radiation therapy : towards a better understanding and optimization of the radiosensitizing effect." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1160.

Full text
Abstract:
In radiation therapy, high-Z nanoparticles such as gold nanoparticles (GNPs) have shown particularly promising radiosensitizing properties. At an early stage, an increase in dose deposition and free radical production throughout the tumour (photoelectric effect) and at the sub-cellular scale (Auger cascade) might be responsible for part of the effect for low-energy X-rays. In this Ph.D. work, we propose to study these early mechanisms with simulation tools, in order to better quantify them and better understand their impact on cell survival. We first finalised and validated Monte Carlo (MC) models developed to track electrons down to low energy both in water (meV) and in gold (eV). The comparison of theoretical predictions with available experimental data in the literature for gold provided good results, both in terms of secondary electron production and energy loss. This code allowed us to quantify the energy deposited in nanotargets located near the GNP, which is correlated with the probability of generating damage. This study required important optimisations in order to achieve reasonable computing times. We showed a significant increase in the probability of having an energy deposition in the nanotarget larger than a threshold, within 200 nm around the GNP, suggesting that GNPs may be particularly efficient at destroying biological nanotargets in their vicinity. The MC simulation was then used to quantify some chemical effects. At the macroscale, we quantified the increase in free radical production for a concentration of GNPs. We also compared the radial distribution of chemical species following the ionisation of either a gold nanoparticle or a water nanoparticle. We showed that, following an ionization, the average number of chemical species produced is higher for gold than for water. However, in the vicinity of the nanoparticle, the number of chemical species was not necessarily higher for gold compared to water. This suggests that the effect of GNPs in their vicinity mostly comes from the increase of the probability of having an ionisation. We also studied several scenarios to explain the unexpectedly high experimental increase in the production of fluorescent molecules during the irradiation of a colloidal solution of GNPs and coumarin. Our study suggests that a plausible scenario to explain the experimental measurements would be that GNPs interfere with an intermediate molecule produced following the reaction between a coumarin molecule and a hydroxyl radical. During the last step of this Ph.D. work, we injected our MC results into the biophysical model NanOx, originally developed at IPNL to calculate the biological dose in hadrontherapy, to predict cell survival in the presence of GNPs. In addition, we implemented the Local Effect Model (LEM), currently the main biophysical model used for GNP-enhanced radiation therapy, to compare the NanOx and LEM predictions with each other. 
In order to estimate cell survival with the LEM, we used various dosimetric approaches proposed in the literature. For a simple system where GNPs were homogeneously distributed in the cell, we showed that the LEM had different outcomes with regard to cell survival, depending on the dosimetric approach. In addition, we obtained an increase in cell death with the biophysical model NanOx that was purely due to the increase of the macroscopic dose. We did not obtain an increased biological effectiveness due to Auger electrons, which contradicts the LEM predictions. This study suggests that the current biophysical models available to predict the radiosensitizing effect of GNPs must be improved to become predictive. This may be done, for instance, by accounting for potential biological mechanisms evidenced by experimental work.
APA, Harvard, Vancouver, ISO, and other styles
14

Wei, Xiaofan. "Stochastic Analysis and Optimization of Structures." University of Akron / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=akron1163789451.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kidwell, Ann-Sofi. "Optimization under parameter uncertainties with application to product cost minimization." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-38858.

Full text
Abstract:
This report will look at optimization under parameter uncertainties. It will describe the subject in its wider form, then two model examples will be studied, followed by an application to an ABB product. The Monte Carlo method will be described and scrutinised, with the quasi-Monte Carlo method being favoured for large problems. An example will illustrate how the choice of Monte Carlo method affects the efficiency of the simulation when evaluating functions of different dimensions. Then an overview of mathematical optimization is given, from its simplest form to nonlinear, nonconvex optimization problems containing uncertainties. A Monte Carlo simulation is applied to the design process and cost function for a custom-made ABB transformer, where the production process is assumed to contain some uncertainties. The result from optimizing an ABB cost formula, where the input parameters contain some uncertainties, shows how the price can vary and is not fixed as often assumed, and how this could influence an accept/reject decision.
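To make the Monte Carlo versus quasi-Monte Carlo comparison concrete, here is a small, hedged example integrating a smooth test function over the unit hypercube with plain random points and with scrambled Sobol points (SciPy's qmc module); the dimension, sample size and test function are arbitrary choices, not taken from the thesis.

```python
import numpy as np
from scipy.stats import qmc

d, m = 5, 12                 # dimension and 2^m sample points (illustrative values)
n = 2 ** m
true_value = 0.5 ** d        # integral of prod(x_i) over the unit hypercube

rng = np.random.default_rng(0)
x_mc = rng.random((n, d))                                       # plain Monte Carlo points
x_qmc = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m)   # scrambled Sobol points

f = lambda x: np.prod(x, axis=1)
err_mc = abs(f(x_mc).mean() - true_value)
err_qmc = abs(f(x_qmc).mean() - true_value)
print(f"MC error  ~ {err_mc:.2e}")
print(f"QMC error ~ {err_qmc:.2e}")  # typically noticeably smaller for smooth, moderate-dimension integrands
```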
APA, Harvard, Vancouver, ISO, and other styles
16

Mickum, George S. "Development of a dedicated hybrid K-edge densitometer for pyroprocessing safeguards measurements using Monte Carlo simulation models." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54358.

Full text
Abstract:
Pyroprocessing is an electrochemical method for recovering actinides from used nuclear fuel and recycling them into fresh nuclear fuel. It is posited herein that proposed safeguards approaches on pyroprocessing for nuclear material control and accountability face several challenges due to the unproven plutonium-curium inseparability argument and the limitations of neutron counters. Thus, the Hybrid K-Edge Densitometer (HKED) is currently being investigated as an assay tool for the measurement of pyroprocessing materials in order to perform effective safeguards. This work details the development of a computational model created using the Monte Carlo N-Particle code to reproduce HKED assay of samples expected from the pyroprocesses. The model incorporates detailed geometrical dimensions of the Oak Ridge National Laboratory HKED system, realistic detector pulse height spectral responses, optimum computational efficiency, and optimization capabilities. The model has been validated on experimental data representative of samples from traditional reprocessing solutions and then extended to the sample matrices and actinide concentrations of pyroprocessing. Data analysis algorithms were created in order to account for unsimulated spectral characteristics and correct inaccuracies in the simulated results. The realistic assay results obtained with the model have provided insight into the extension of the HKED technique to pyroprocessing safeguards and reduced the calibration and validation efforts in support of that design study. Application of the model has allowed for a detailed determination of the volume of the sample being actively irradiated as well as provided a basis for determining the matrix effects from the pyroprocessing salts on the HKED assay spectra.
APA, Harvard, Vancouver, ISO, and other styles
17

LUZ, CRISTINA PIMENTA DE MELLO SPINETI. "COMMERCIAL OPTIMIZATION OF A WIND FARM IN BRAZIL USING MONTE CARLO SIMULATION WITH EXOGENOUS CLIMATIC VARIABLES AND A NEW PREFERENCE FUNCTION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=27858@1.

Full text
Abstract:
In recent years, we have seen an increased penetration of wind power in the Brazilian energy matrix and worldwide. In 2015, wind power already accounted for six percent of Brazil's total power generation capacity, and the country was tenth in the world ranking of installed wind power capacity. Due to the growing penetration of the source, its intermittency and strong seasonality, optimization models able to deal with the management of wind power, both in electrical system operation and in the trading environment, are necessary. Accordingly, the number of studies concerned with wind power forecasts for every 10 minutes, hours and days has grown, meeting the scheduling needs of electrical systems and trading in daily and hourly markets. However, few studies have given attention to the forecasting and simulation of monthly averages of wind power generation, which are essential for the management and optimization of energy trading in Brazil, since it occurs essentially on a monthly basis. In this context, we introduce this thesis, which seeks to assess the commercial optimization of a wind farm in the Brazilian free energy market, considering different simulation models for the uncertainty of wind power production and different levels of the manager's risk aversion. In order to represent the manager's different levels of risk aversion, we developed a new preference function, which is able to model the variation of the risk aversion level of the same manager for different preference groups. These groups are defined by α percentiles of VaRα. The developed preference function is a weighted average between the expected value of the results and CVaR levels. In a way, it changes the odds of the results according to the manager's preferences, similar to the effect of decision weights in Prospect Theory. We adopted autoregressive models to simulate wind power generation, with seasonality represented by monthly dummies (ARX-11) or by a periodic model (PAR). Furthermore, we considered the inclusion of exogenous climate variables in the ARX-11 model and obtained a predictive gain. We observed that, for a risk-neutral manager, different simulations of wind power production do not change the optimal decision. However, this does not apply to risk-averse managers, especially when we consider the simulation model with exogenous climate variables. Therefore, it is important that the risk-averse manager either establishes a single simulation model to consider or adopts some multi-criteria technique for weighting different models. The risk profile also changes the manager's optimal decision. We observed that, as risk aversion increases, the standard deviation and mean of the results distribution decrease, while the risk premium and CVaRs increase. Therefore, to carry out the optimization, it is important to specify a single preference function which adequately represents the manager's or the company's risk profile. The flexibility of the developed preference function, allowing the definition of different levels of the manager's risk aversion for different preference groups, contributes to this specification.
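The exact form of the thesis's preference function is not given in the abstract; the sketch below merely illustrates the stated idea of weighting the expected value together with CVaR levels at chosen α percentiles, with all weights, levels and simulated results invented.

```python
import numpy as np

def cvar(results, alpha):
    """Conditional value-at-risk of a sample of results (profits, larger is better):
    the mean of the worst alpha-fraction of outcomes."""
    cutoff = np.quantile(results, alpha)
    tail = results[results <= cutoff]
    return tail.mean()

def preference(results, weights, alphas):
    """Illustrative preference value: weighted mix of the mean and several CVaR levels.
    The names and the exact functional form are assumptions, not the thesis's specification."""
    terms = [results.mean()] + [cvar(results, a) for a in alphas]
    return float(np.dot(weights, terms))

# Example: simulated trading results for one candidate contracting strategy.
rng = np.random.default_rng(1)
simulated = rng.normal(loc=100.0, scale=30.0, size=50_000)
print(preference(simulated, weights=[0.5, 0.3, 0.2], alphas=[0.10, 0.05]))
```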
APA, Harvard, Vancouver, ISO, and other styles
18

Dumas, Antoine. "Développement de méthodes probabilistes pour l'analyse des tolérances des systèmes mécaniques sur-contraints." Thesis, Paris, ENSAM, 2014. http://www.theses.fr/2014ENAM0054/document.

Full text
Abstract:
Tolerance analysis of mechanisms aims at evaluating product quality during the design stage. The technique consists in computing the defect probability of mechanisms in large series production. An assembly condition and a functional condition are checked. The current method mixes a Monte Carlo simulation and an optimization algorithm, which is very time-consuming. The objective of this thesis is to develop new, efficient methods based on a probabilistic approach to deal with the tolerance analysis of overconstrained mechanisms. First, a linearization procedure is proposed to simplify the optimization algorithm step. The impact of such a procedure on the accuracy of the probability is studied. To overcome this issue, iterative procedures are proposed to deal with the assembly problem. They make it possible to compute accurate defect probabilities in a reduced computing time. Besides, a new resolution method based on the system reliability method FORM (First Order Reliability Method) for systems was developed for the functional problem. In order to apply this method, a new system formulation of the tolerance analysis problem is elaborated. The formulation splits the overconstrained mechanism into several isoconstrained configurations. The goal is to consider only the main configurations which lead to a failure situation. The proposed method greatly reduces the computing time, allowing results to be obtained within minutes. Low probabilities can also be reached, and their order of magnitude does not influence the computing time.
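A hedged sketch of the basic Monte Carlo tolerance-analysis step described above: a linearized functional characteristic and a sampled defect probability. The sensitivities, tolerances and requirement below are invented, and the real assembly/functional conditions of an overconstrained mechanism are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

a = np.array([1.0, -0.8, 0.5, 1.2])          # assumed sensitivities of the linearized response
sigma = np.array([0.02, 0.03, 0.015, 0.01])  # assumed standard deviations of part deviations (mm)
limit = 0.12                                 # assumed functional requirement (mm)

x = rng.normal(0.0, sigma, size=(n, sigma.size))   # centered part deviations
y = x @ a                                          # linearized functional characteristic
p_defect = np.mean(np.abs(y) > limit)
print(f"estimated defect probability ~ {p_defect:.2e} ({p_defect * 1e6:.0f} ppm)")
```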
APA, Harvard, Vancouver, ISO, and other styles
19

Nygren, Nelly. "Optimization of the Gamma Knife Treatment Room Design." Thesis, KTH, Fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300904.

Full text
Abstract:
Radiation shielding is a central part of the design of treatment rooms for radiation therapy systems. The dose levels that medical staff and members of the public can be exposed to outside the treatment rooms are regulated by authorities and influence the required wall thicknesses and possible locations for the systems. Several standard methods exist for performing shielding calculations, but they are not well adapted to the stereotactic radiosurgery system Leksell Gamma Knife because of its self-shielding properties. The built-in shielding makes the leakage radiation anisotropic and generally of lower energy than the primary radiation from the Gamma Knife's cobalt sources. Oversimplifications made in the standard shielding calculation methods regarding the field can lead to excessively thick shielding and limit the number of suitable locations for the system. In this thesis project, a simulation-based dose calculation algorithm was developed that uses Monte Carlo-generated data in two steps. The algorithm uses a phase space to accurately describe the radiation field around the Gamma Knife. Information about individual photons in the field is then combined with a generated library of data describing the resulting dose outside a wall depending on the wall thickness and the photon energy. The dose calculation algorithm is fast enough to be integrated into optimization processes, in which the algorithm is used iteratively while varying room design parameters. Demonstrated in this report is a case with a room of fixed size, in which the Gamma Knife's position and the walls' thicknesses are varied, with the aim of finding the room design resulting in the minimum wall thicknesses needed to achieve acceptable dose levels outside. The results in this thesis indicate that the dose calculation algorithm performs well and could likely be used in more complex optimizations with more design variables and more advanced design goals.
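To make the two-step idea concrete (a phase space of photons scored against a precomputed wall-transmission library), here is a minimal Python sketch. It is not the thesis code: the energy/thickness grid, the toy attenuation table and the assumed phase-space layout (energy, statistical weight) are invented placeholders.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical precomputed library: dose per photon (arbitrary units) behind a wall,
# tabulated over photon energy [MeV] and wall thickness [cm]; values are a toy model.
energies = np.array([0.1, 0.3, 0.5, 0.8, 1.0, 1.25])          # MeV
thicknesses = np.array([10.0, 20.0, 30.0, 40.0, 50.0])        # cm
dose_library = np.exp(-thicknesses / (8.0 + 25.0 * energies[:, None]))  # toy attenuation table

library = RegularGridInterpolator((energies, thicknesses), dose_library,
                                  bounds_error=False, fill_value=None)

def dose_behind_wall(phase_space, wall_thickness_cm):
    """Sum the library dose contribution of every phase-space photon.

    phase_space: array of shape (n, 2) with columns (energy_MeV, statistical_weight).
    """
    energy, weight = phase_space[:, 0], phase_space[:, 1]
    pts = np.column_stack([energy, np.full_like(energy, wall_thickness_cm)])
    return float(np.sum(weight * library(pts)))

# Toy phase space: 10^5 photons with energies softened by the internal shielding.
rng = np.random.default_rng(0)
ps = np.column_stack([rng.uniform(0.1, 1.25, 100_000), np.ones(100_000)])

for t in (20.0, 30.0, 40.0):
    print(f"wall {t:4.0f} cm -> relative dose {dose_behind_wall(ps, t):.3e}")
```

In the thesis this inner dose evaluation is wrapped in an optimization loop over the source position and the per-wall thicknesses; only the evaluation step is sketched here.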
APA, Harvard, Vancouver, ISO, and other styles
20

Callert, Gustaf, and Dahlström Filip Halén. "A performance investigation and evaluation of selected portfolio optimization methods with varying assets and market scenarios." Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190997.

Full text
Abstract:
This study investigates and evaluates how different portfolio optimization methods perform as assets and financial market scenarios vary. The methods included are mean-variance, Conditional Value-at-Risk, utility-based, risk-factor-based and Monte Carlo optimization. Market scenarios are represented by stagnating, bull and bear market data from the Bloomberg database. In order to perform robust optimizations, the Bloomberg data has been resampled one hundred times. The evaluation of the methods has been done with respect to selected ratios and two benchmark portfolios: an equally weighted portfolio and an equal risk contribution portfolio. The study found that mean-variance and Conditional Value-at-Risk optimization performed best when using linear assets in all the investigated cases. Considering non-linear assets such as options, an equally weighted portfolio performs best.
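As a rough illustration of one of the methods compared above, the following Python sketch runs a long-only mean-variance optimization on resampled return data. The return-generating process, the risk-aversion value and the bootstrap scheme are illustrative stand-ins for the Bloomberg data and the study's actual resampling procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n_assets, n_obs, n_resamples = 5, 250, 100
true_mu = rng.normal(0.0005, 0.0004, n_assets)
true_cov = np.diag(rng.uniform(0.01, 0.02, n_assets)) ** 2
returns = rng.multivariate_normal(true_mu, true_cov, n_obs)   # stand-in for market data

def mean_variance_weights(mu, cov, risk_aversion=5.0):
    """Long-only, fully invested mean-variance weights."""
    n = len(mu)
    objective = lambda w: -(w @ mu - risk_aversion * w @ cov @ w)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, np.full(n, 1.0 / n), bounds=bounds, constraints=cons)
    return res.x

# Resample the return history one hundred times and average the resulting weights,
# mirroring the robustness step described in the abstract.
weights = []
for _ in range(n_resamples):
    idx = rng.integers(0, n_obs, n_obs)          # bootstrap resample of the history
    sample = returns[idx]
    weights.append(mean_variance_weights(sample.mean(axis=0), np.cov(sample, rowvar=False)))
print("resampled mean-variance weights:", np.round(np.mean(weights, axis=0), 3))
```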
APA, Harvard, Vancouver, ISO, and other styles
21

Lin, Yanhui. "A holistic framework of degradation modeling for reliability analysis and maintenance optimization of nuclear safety systems." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC002/document.

Full text
Abstract:
Components of nuclear safety systems are in general highly reliable, which leads to a difficulty in modeling their degradation and failure behaviors due to the limited amount of data available. Besides, the complexity of this modeling task is increased by the fact that these systems are often subject to multiple competing degradation processes, that these can be dependent under certain circumstances, and that they are influenced by a number of external factors (e.g. temperature, stress, mechanical shocks, etc.).
In this complicated problem setting, this PhD work aims to develop a holistic framework of models and computational methods for the reliability-based analysis and maintenance optimization of nuclear safety systems, taking into account the available knowledge on the systems, their degradation and failure behaviors, their dependencies, the external influencing factors and the associated uncertainties. The original scientific contributions of the work are: (1) For single components, we integrate random shocks into multi-state physics models for component reliability analysis, considering general dependencies between the degradation and two types of random shocks. (2) For multi-component systems (with a limited number of components): (a) a piecewise-deterministic Markov process modeling framework is developed to treat degradation dependency in a system whose degradation processes are modeled by physics-based models and multi-state models; (b) epistemic uncertainty due to incomplete or imprecise knowledge is considered and a finite-volume scheme is extended to assess the (fuzzy) system reliability; (c) the mean absolute deviation importance measures are extended for components with multiple dependent competing degradation processes and subject to maintenance; (d) the optimal maintenance policy considering epistemic uncertainty and degradation dependency is derived by combining a finite-volume scheme, differential evolution and non-dominated sorting differential evolution; (e) the modeling framework of (a) is extended by including the impacts of random shocks on the dependent degradation processes. (3) For multi-component systems (with a large number of components), a reliability assessment method is proposed that considers degradation dependency, combining binary decision diagrams and Monte Carlo simulation to reduce computational costs.
APA, Harvard, Vancouver, ISO, and other styles
22

Figueiredo, Marcelo Vilela [UNESP]. "Modelo multiobjetivo de análise envoltória de dados combinado com desenvolvimento de funções empíricas e otimização via simulação Monte Carlo." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/150767.

Full text
Abstract:
Quality control is one of the pillars of a good yield on a production line, helping to reach better efficiency and effectiveness and to reduce production costs. Identifying the causes of defects and controlling them is a relatively complex activity, due to the large number of variables in some processes. One of the most important objectives in the production of steel casting parts is to reduce casting defects (shrinkage, cracks, dimensional problems, etc.), which can be caused by several process variables, such as chemical composition, pouring temperature and mechanical properties. For these reasons, this study was developed at a large steel company, which produces rail and industrial parts. The efficiency of the produced parts, called DMUs (Decision Making Units), was analyzed using an extensive database. This was done by applying BiO-MCDEA (Bi-Objective Data Envelopment Analysis) to seven DMUs, which are steel casting parts, as a function of 38 process variables. This application identified the process variables (inputs/outputs) that influence the determination of the DMUs' efficiency. Once those results were obtained, empirical functions were developed for the response variables as functions of the influential process variables through multiple non-linear regression. Finally, Optimization via Monte Carlo Simulation was applied in order to determine the input values that optimize the empirical functions. The results were satisfactory and consistent with the company's reality, and the applied methodology combining different tools proved effective and innovative.
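The final step described above (optimizing a fitted empirical function by Monte Carlo sampling of the process inputs) can be sketched in a few lines of Python; the coefficients, input ranges and variable names below are hypothetical and do not come from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical empirical efficiency function y = f(x1, x2) fitted by regression;
# the coefficients and input ranges are illustrative, not the dissertation's values.
coef = np.array([0.60, 0.08, -0.05, -0.01])

def empirical_efficiency(x1, x2, c=coef):
    return c[0] + c[1] * x1 + c[2] * x2 + c[3] * x1 * x2

# Monte Carlo optimization: sample candidate input settings uniformly inside the
# allowed process window and keep the setting with the best predicted response.
n_candidates = 50_000
x1 = rng.uniform(0.0, 2.0, n_candidates)     # e.g. normalised pouring temperature
x2 = rng.uniform(0.0, 1.5, n_candidates)     # e.g. normalised carbon content
y = empirical_efficiency(x1, x2)
best = np.argmax(y)
print(f"best candidate: x1={x1[best]:.3f}, x2={x2[best]:.3f}, predicted y={y[best]:.3f}")
```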
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Xing. "Novel brachytherapy techniques for cervical cancer and prostate cancer." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1682.

Full text
Abstract:
Intensity-modulated brachytherapy techniques, compensator-based intensity modulated brachytherapy (CBT) and interstitial rotating shield brachytherapy (I-RSBT), are two novel conceptual radiation therapies for treating cervical and prostate cancer, respectively. Compared to conventional brachytherapy techniques for treating cervical cancer, CBT can potentially improve the dose conformity to the high-risk clinical target volume (CTV) of the cervix in a less invasive approach. I-RSBT can reduce the dose delivered to the prostate organs at risk (OARs) while delivering the same radiation dose to the prostate CTV. In this work, concepts and prototypes for CBT and I-RSBT were introduced and developed. Preliminary dosimetric measurements were performed for CBT and I-RSBT, respectively. A CBT prototype system was constructed and experimentally validated. A prototype cylindrical compensator with eight octants, each with different thicknesses, was designed. Direct metal laser sintering (DMLS) was used to construct CoCr and Ti compensator prototypes, and a 4-D milling technique was used to construct a Ti compensator prototype. Gafchromic EBT2 films, held by an acrylic quality assurance (QA) phantom, were irradiated to approximately 125 cGy with an electronic brachytherapy (eBT) source for both shielded and unshielded cases. The dose at each point on the films was calculated using a TG-43 calculation model that was modified to account for the presence of a compensator prototype by ray-tracing. With I-RSBT, a multi-pass dose delivery mechanism with prototypes was developed. Dosimetric measurements for a Gd-153 radioisotope were performed to demonstrate that using multiple partially shielded Gd-153 sources for I-RSBT is feasible. A treatment planning model was developed for applying I-RSBT clinically. A custom-built, stainless steel encapsulated 150 mCi Gd-153 capsule with an outer length of 12.8 mm, outer diameter of 2.10 mm, active length of 9.98 mm, and active diameter of 1.53 mm was used. A partially shielded catheter was constructed with a 500 micron platinum shield and a 500 micron aluminum emission window, both with 180° azimuthal coverage. An acrylic phantom was constructed to measure the dose distributions from the shielded catheter in the transverse plane using Gafchromic EBT3 films. Film calibration curves were generated from 50, 70, and 100 kVp x-ray beams with NIST-traceable air kerma values to account for energy variation. In conclusion, CBT, which is a non-invasive alternative to supplementary interstitial brachytherapy, is expected to improve dose conformity to bulky cervical tumors relative to conventional intracavitary brachytherapy. However, at the current stage, it would be time-consuming to construct a patient-specific compensator using DMLS, and the quality assurance of the compensator would be difficult. I-RSBT is a promising approach to reducing radiation dose delivered to prostate OARs. The next step in making Gd-153 based I-RSBT feasible in the clinic is developing a Gd-153 source that is small enough that the source, shield, and catheter all fit within a 16 gauge needle, which has a 1.65 mm diameter.
APA, Harvard, Vancouver, ISO, and other styles
24

Besbes, Mariem. "Modélisation et résolution du problème d’implantation des ateliers de production : proposition d’une approche combinée Algorithme Génétique – Algorithme A*." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC094.

Full text
Abstract:
To face competition, companies seek to improve their industrial performance. One of the solutions to this challenge lies in determining the best configuration of the production workshops, a problem known as the Facility Layout Problem (FLP). In this context, our work proposes a methodology for defining the workshop configuration through a realistic approach. More precisely, our goal is to take into account the actual distances traveled by the parts in the workshop and system-related constraints that have not yet been incorporated into the models proposed in the literature. To do this, our first scientific contribution is to develop a new methodology that uses the A* algorithm to identify the shortest distances between workstations in a realistic way. The proposed methodology combines a Genetic Algorithm (GA) with the A* algorithm to explore the solution space. To get closer to real cases, our second contribution is a new, generalized formulation of the FLP initially studied, taking into account different shapes and dimensions of the equipment and of the workshop. The results obtained prove the applicability and feasibility of this approach in various situations. A comparative study of the proposed approach against particle swarm optimization integrated with A* showed the advantage of the former in terms of transport cost.
Finally, our third contribution is to treat the FLP in a 3D space where spatial constraints are integrated into the modeling phase. The resolution extends the methodology proposed for the 2D problem, integrating the A* algorithm and the GA to generate various configurations in 3D space. For each of these contributions, a sensitivity analysis of the different GA parameters used was carried out using Monte Carlo simulations.
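A compact Python sketch of the distance step described above is given below: an A* search on an occupancy grid returns realistic aisle distances between workstations, which a genetic algorithm would then aggregate into the transport-cost fitness of each candidate layout. The grid, the obstacle footprints and the unit step cost are invented for illustration.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path length on an occupancy grid (1 = blocked)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                      # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return float("inf")

# Toy layout: 0 = free aisle, 1 = machine footprint; a GA would move the footprints
# and re-evaluate the total transport cost from these A* distances.
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 0, 0, 0, 0]]
print("A* distance between two workstations:", astar(grid, (0, 0), (3, 4)))
```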
APA, Harvard, Vancouver, ISO, and other styles
25

Oliveira, José Benedito da Silva. "Combinação de técnicas de delineamento de experimentos e elementos finitos com a otimização via simulação Monte Carlo /." Guaratinguetá, 2019. http://hdl.handle.net/11449/183380.

Full text
Abstract:
Advisor: Aneirson Francisco da Silva. Cold stamping is a plastic forming process for sheet metal that makes it possible, using dedicated tooling, to obtain components with good mechanical properties, varied geometries and thicknesses, different material specifications and a good economic advantage. The multiplicity of these variables creates the need for statistical and numerical simulation techniques that support their analysis and sound decision-making in the design of the forming tools. This work was carried out in the tool design engineering department of a large Brazilian multinational company in the auto parts sector, with the aim of reducing stretching and the occurrence of cracks in a 6.8 mm cross member made of LNE 380 steel. The proposed methodology obtains the values of the input factors and their influence on the response variable using Design of Experiments (DOE) techniques and Finite Element (FE) simulation. An empirical function is developed from these data by regression, giving the response variable y (thickness in the critical region) as a function of the influential process factors xi. Optimization via Monte Carlo Simulation (OvSMC) then introduces uncertainty into the coefficients of this empirical function, which is the main contribution of this work, since this is what typically happens in practice with experimental problems. Simulating by FE the tool... (full abstract available via the electronic access link below). Master's degree.
APA, Harvard, Vancouver, ISO, and other styles
26

Larsson, Julia, and Tyra Strandberg. "Optimization of Subscription Lines of Credit in Private Equity : An extensive analysis containing several Investment-, Bridge Facility- and Installment Strategies using Monte Carlo Simulations." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184679.

Full text
Abstract:
In recent years the use of subscription lines of credit has increased exponentially, from $86.1 million (2014) to $5.3 billion (2018). The rapid growth of the phenomenon of private equity funds using subscription lines of credit when acquiring companies (instead of directly making capital drawdowns from investors) has poor academic coverage. Previous research has claimed that the phenomenon is a return manipulation technique that only benefits the general partner of the fund. The aim of this research is to increase transparency and create knowledge about the impact and effects of using subscription lines of credit for both parties involved in a private equity deal. An extensive model is built based on the formula of a Geometric Brownian Motion, and different subscription lines of credit strategies are evaluated by the performance of the fund and the returns distributed to each party, in order to optimize the usage for both parties involved in a private equity fund. The main result of this thesis is that private equity funds can achieve higher performance and increase the actual return distributed to both the limited partners and the general partner by using subscription lines of credit. These results go beyond previous research stating that using subscription lines of credit will increase the performance in terms of the Internal Rate of Return measure, without increasing the actual return distributed to the limited partners. Through this research, strategies have been found that yield higher performance in terms of several performance measures and at the same time increase the actual return made compared to the case of not using subscription lines of credit.
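A heavily simplified Python sketch of the mechanism studied above is shown below: a single deal whose exit value follows a geometric Brownian motion, with the limited partners' capital either called at once or called later once a subscription line (accruing interest) is repaid. The drift, volatility, facility rate and one-year delay are invented, and the thesis model with installment and facility strategies is much richer.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)

def irr(cashflows, times):
    """Internal rate of return of dated cashflows, solved on the NPV."""
    npv = lambda rate: sum(cf / (1.0 + rate) ** t for cf, t in zip(cashflows, times))
    return brentq(npv, -0.9, 100.0)

# One deal of size 100 held for T years, exit value driven by geometric Brownian motion.
T, mu, sigma, n_paths = 5.0, 0.12, 0.20, 5_000
cost, credit_rate, delay = 100.0, 0.04, 1.0     # bridge facility repaid after `delay` years
exit_values = cost * np.exp((mu - 0.5 * sigma**2) * T
                            + sigma * np.sqrt(T) * rng.standard_normal(n_paths))

irr_direct, irr_bridge = [], []
for v in exit_values:
    # LP capital called immediately ...
    irr_direct.append(irr([-cost, v], [0.0, T]))
    # ... versus called only when the subscription line is repaid (with interest).
    irr_bridge.append(irr([-cost * (1 + credit_rate) ** delay, v], [delay, T]))

print(f"mean IRR, direct drawdown  : {np.mean(irr_direct):6.3%}")
print(f"mean IRR, subscription line: {np.mean(irr_bridge):6.3%}")
```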
APA, Harvard, Vancouver, ISO, and other styles
27

Alexis, Sara. "Combinatorial and price efficient optimization of the underlying assets in basket options." Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204861.

Full text
Abstract:
The purpose of this thesis is to develop an optimization model that chooses the optimal and price-efficient combination of underlying assets for an equally weighted basket option. To obtain a price-efficient combination of underlying assets, a function that calculates the basket option price is needed, for further use in an optimization model. Closed-form basket option pricing is a great challenge, due to the lack of a distribution describing the augmented stochastic price process. Many approaches to pricing a basket option have been proposed. In this thesis, an analytical approximation of the basket option price has been used, where the analytical approximation aims to describe the augmented price process. The approximation is done by moment matching, i.e. matching the first two moments of the real distribution of the basket option with a lognormal distribution. The obtained price function is adjusted and used as the objective function in the optimization model. Furthermore, since the goal is to obtain an equally weighted basket option, the appropriate class of optimization models is binary optimization problems. This kind of optimization model is in general hard to solve, especially for increasing dimensions. Three different continuous relaxations of the binary problem have been applied in order to obtain continuous problems that are easier to solve. The results show that the purpose of this thesis is fulfilled when formulating and solving the optimization problem, both as a binary and as a continuous nonlinear optimization model. Moreover, the results from a Monte Carlo simulation for correlated stochastic processes show that the moment matching technique with a lognormal distribution is a good approximation for pricing a basket option.
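The moment-matching step described in the abstract corresponds to the classical lognormal (Levy-type) approximation; the Python sketch below matches the first two moments of an equally weighted basket and compares the resulting closed-form price with a Monte Carlo estimate. All market parameters are arbitrary illustrations, not values from the thesis.

```python
import numpy as np
from scipy.stats import norm

# Equally weighted basket of correlated lognormal assets; price a European call by
# matching the basket's first two moments to a lognormal, then check against MC.
rng = np.random.default_rng(3)
n, S0, sigma, r, T, K, rho = 4, 100.0, 0.25, 0.02, 1.0, 100.0, 0.5
w = np.full(n, 1.0 / n)
corr = np.full((n, n), rho) + (1 - rho) * np.eye(n)
cov = sigma * sigma * corr * T          # covariance of log-returns over [0, T]

# First two moments of the (arithmetic) basket at maturity.
fwd = S0 * np.exp(r * T)
m1 = np.sum(w * fwd)
m2 = np.sum(np.outer(w * fwd, w * fwd) * np.exp(cov))

# Lognormal with the same two moments, then a Black-type formula.
s2 = np.log(m2 / m1**2)
d1 = (np.log(m1 / K) + 0.5 * s2) / np.sqrt(s2)
d2 = d1 - np.sqrt(s2)
price_mm = np.exp(-r * T) * (m1 * norm.cdf(d1) - K * norm.cdf(d2))

# Monte Carlo reference with the same correlation structure.
L = np.linalg.cholesky(corr)
z = rng.standard_normal((200_000, n)) @ L.T
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
price_mc = np.exp(-r * T) * np.mean(np.maximum(ST @ w - K, 0.0))

print(f"moment-matched lognormal price: {price_mm:.3f}")
print(f"Monte Carlo price             : {price_mc:.3f}")
```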
APA, Harvard, Vancouver, ISO, and other styles
28

Lee, Seungman. "Optimization and Simulation Based Cost-Benefit Analysis on a Residential Demand Response : Applications to the French and South Korean Demand Response Mechanisms." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED054.

Full text
Abstract:
Worldwide concern about CO2 emissions, climate change, and the energy transition has drawn more attention to Demand-Side Management (DSM). In particular, with Demand Response (DR) we can expect several benefits, such as increased efficiency of the entire electricity market, enhanced security of electricity supply by reducing peak demand, and more efficient and desirable investment, as well as environmental advantages and support for renewable energy sources. In Europe, France launched the NEBEF mechanism at the end of 2013, and South Korea inaugurated its market-based DR program at the end of 2014. Among the many economic issues and assumptions that need to be taken into consideration for DR, Customer Baseline Load (CBL) estimation is one of the most important and fundamental elements. In this research, based on the re-scaled load profile for an average household, several CBL estimation methods are established and examined thoroughly for both the Korean and French DR mechanisms.
This investigation of CBL estimation methods could contribute to the search for a better and more accurate CBL estimation method that will increase the motivation of DR participants. With those estimated CBLs, Cost-Benefit Analyses (CBAs) are conducted which, in turn, are utilized in the decision-making analysis for DR participants. For the CBAs, a simple mathematical model using linear algebra is set up and modified in order to represent each DR mechanism's parameters. This model is expected to provide an intuitive and clear understanding of DR mechanisms. This generic DR model can be used for different countries and sectors (e.g. residential, commercial, and industrial) with a few model modifications. Monte Carlo simulation is used to reflect the stochastic nature of reality, and optimization is also used to represent and understand the rationality of DR participants and to provide micro-economic explanations of DR participants' behaviours. In order to draw meaningful implications for a better DR market design, several Sensitivity Analyses (SAs) are conducted on the key elements of the model for DR mechanisms.
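One widely used CBL rule of the kind examined above is the "high X of Y" baseline; the Python sketch below applies it to an invented hourly load history for an average household and credits the curtailment during a hypothetical event window. The rule parameters and the load shapes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical hourly load history (kW) for one average household: 60 days x 24 h.
days, hours = 60, 24
base_profile = 0.3 + 0.5 * np.exp(-0.5 * ((np.arange(hours) - 19) / 2.5) ** 2)  # evening peak
history = base_profile + 0.05 * rng.standard_normal((days, hours))

def cbl_high_x_of_y(history, x=4, y=5):
    """'High X of Y' baseline: average of the X highest-consumption days
    among the Y most recent non-event days, hour by hour."""
    recent = history[-y:]
    totals = recent.sum(axis=1)
    keep = np.argsort(totals)[-x:]
    return recent[keep].mean(axis=0)

event_day = base_profile + 0.05 * rng.standard_normal(hours)
event_day[18:21] -= 0.3          # load actually shed during the 18-21 h event
cbl = cbl_high_x_of_y(history)
reduction_kwh = np.sum(cbl[18:21] - event_day[18:21])
print(f"estimated curtailment credited to the household: {reduction_kwh:.2f} kWh")
```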
APA, Harvard, Vancouver, ISO, and other styles
29

Hilber, Patrik. "Maintenance optimization for power distribution systems." Doctoral thesis, Stockholm : Electrical Engineering, Elektrotekniska system, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ittiwattana, Waraporn. "A Method for Simulation Optimization with Applications in Robust Process Design and Locating Supply Chain Operations." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1030366020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Dabruck, Jan Philipp [Verfasser], Bruno [Akademischer Betreuer] Thomauske, Rahim [Akademischer Betreuer] Nabbi, and Achim [Akademischer Betreuer] Stahl. "Optimization studies on the target station for the high-brilliance neutron source HBS based on Monte Carlo simulations / Jan Philipp Dabruck ; Bruno Thomauske, Rahim Nabbi, Achim Stahl." Aachen : Universitätsbibliothek der RWTH Aachen, 2018. http://d-nb.info/1180734238/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Oleś, Katarzyna A. "Searching for the optimal control strategy of epidemics spreading on different types of networks." Thesis, University of Stirling, 2014. http://hdl.handle.net/1893/21199.

Full text
Abstract:
The main goal of my studies has been to search for the optimal strategy for controlling epidemics when taking into account both the economic and social costs of the disease. Three control scenarios emerge: treating the whole population (global strategy, GS), treating a small number of individuals in a well-defined neighbourhood of a detected case (local strategy, LS), and allowing the disease to spread unchecked (null strategy, NS). The choice of the optimal strategy is governed mainly by the relative cost of palliative and preventive treatments. Although the properties of the pathogen might not be known in advance for emerging diseases, the prediction of the optimal strategy can be made based on economic analysis only. The details of the local strategy, and in particular the size of the optimal treatment neighbourhood, depend weakly on disease infectivity but strongly on other epidemiological factors (the rate at which symptoms appear, spontaneous recovery). The required extent of prevention is proportional to the size of the infection neighbourhood, but this relationship depends on time till detection and time till treatment in a non-linear (power) law. Spontaneous recovery also affects the choice of the control strategy. I have extended my results to two contrasting and yet complementary models, in which individuals that have been through the disease can either be treated or not. Whether the removed individuals (i.e., those who have been through the disease but then spontaneously recover or die) are part of the treatment plan depends on the type of the disease agent. The key factor in choosing the right model is whether it is possible - and desirable - to distinguish such individuals from those who are susceptible. If the removed class is identified with dead individuals, the distinction is very clear. However, if removal means recovery and immunity, it might not be possible to identify those who are immune. The models are similar in their epidemiological part, but differ in how the removed/recovered individuals are treated. The differences in the models affect the choice of strategy only for very cheap treatment and slowly spreading disease. However, for the combinations of parameters that are important from the epidemiological perspective (high infectiousness and expensive treatment) the models give similar results. Moreover, even where the choice of strategy is different, the total cost spent on controlling the epidemic is very similar for both models. Although regular and small-world networks capture some aspects of the structure of real networks of contacts between people, animals or plants, they do not include the effect of clustering noted in many real-life applications. The use of random clustered networks in epidemiological modelling takes an important step towards the application of the modelling framework to realistic systems. Network topology, and in particular clustering, also affects the applicability of the control strategy.
APA, Harvard, Vancouver, ISO, and other styles
33

Hilber, Patrik. "Component reliability importance indices for maintenance optimization of electrical networks." Licentiate thesis, Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Razavi, Seyed Mostafa. "OPTIMIZATION OF A TRANSFERABLE SHIFTED FORCE FIELD FOR INTERFACES AND INHOMOGENEOUS FLUIDS USING THERMODYNAMIC INTEGRATION." University of Akron / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=akron1481881698375321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Brushammar, Tobias, and Erik Windelhed. "An Optimization-Based Approach to the Funding of a Loan Portfolio." Thesis, Linköping University, Department of Mathematics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2664.

Full text
Abstract:
This thesis grew out of a problem encountered by a subsidiary of a Swedish multinational industrial corporation. This subsidiary is responsible for the corporation's customer financing activities. In the thesis, we refer to these entities as the Division and the Corporation. The Division needed to find a new approach to finance its customer loan portfolio. Risk control and return maximization were important aspects of this need. The objective of this thesis is to devise and implement a method that allows the Division to make optimal funding decisions, given a certain risk limit.
We propose a funding approach based on stochastic programming. Our approach allows the Division's portfolio manager to minimize the funding costs while hedging against market risk. We employ principal component analysis and Monte Carlo simulation to develop a multicurrency scenario generation model for interest and exchange rates. Market rate scenarios are used as input to three different optimization models. Each of the optimization models presents the optimal funding decision as positions in a unique set of financial instruments. By choosing between the optimization models, the portfolio manager can decide which financial instruments he wants to use to fund the loan portfolio.
To validate our models, we perform empirical tests on historical market data. Our results show that our optimization models have the potential to deliver sound and profitable funding decisions. In particular, we conclude that the utilization of one of our optimization models would have resulted in an increase in the Division's net income over the past 3.5 years.
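A minimal Python sketch of the scenario-generation step described above is given below: principal component analysis of historical rate changes followed by Monte Carlo simulation of the leading factor scores. The synthetic "history", the two factor loadings and the number of retained components are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical history of weekly changes for a 6-point zero curve (in percent).
n_obs, n_tenors = 500, 6
true_factors = rng.standard_normal((n_obs, 2))
loadings = np.array([[0.9, 0.8, 0.7, 0.6, 0.5, 0.4],      # parallel shift
                     [-0.5, -0.3, -0.1, 0.1, 0.3, 0.5]])  # slope / twist
rate_changes = true_factors @ loadings + 0.05 * rng.standard_normal((n_obs, n_tenors))

# Principal component analysis of the (centred) rate changes.
centred = rate_changes - rate_changes.mean(axis=0)
cov = np.cov(centred, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
k = 2                                            # keep the leading components
explained = eigval[:k].sum() / eigval.sum()

# Monte Carlo scenario generation: simulate the retained factor scores and map
# them back to full-curve moves.
n_scenarios = 10_000
scores = rng.standard_normal((n_scenarios, k)) * np.sqrt(eigval[:k])
scenarios = scores @ eigvec[:, :k].T + rate_changes.mean(axis=0)

print(f"variance explained by {k} components: {explained:.1%}")
print("one simulated curve move:", np.round(scenarios[0], 3))
```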
APA, Harvard, Vancouver, ISO, and other styles
36

Ghazisaeidi, Amirhossein. "Advanced Numerical Techniques for Design and Optimization of Optical Links Employing Nonlinear Semiconductor Optical Amplifiers." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/27541/27541.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Rai, Ajit. "Estimation de la disponibilité par simulation, pour des systèmes incluant des contraintes logistiques." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S105/document.

Full text
Abstract:
RAM (Reliability, Availability and Maintainability) analysis forms an integral part of the estimation of Life Cycle Costs (LCC) of passenger rail systems. These systems are highly reliable and include complex logistics. Standard Monte Carlo simulations are rendered useless for efficient estimation of RAM metrics due to the issue of rare events. System failures of these complex passenger rail systems can be rare events and thus require efficient simulation techniques. Importance Sampling (IS) techniques are an advanced class of variance reduction techniques that can overcome the limitations of standard simulations. IS techniques can provide acceleration of simulations, meaning less variance in the estimation of RAM metrics for the same computational budget as a standard simulation. However, IS involves changing the probability laws (change of measure) that drive the mathematical models of the systems during simulations, and the optimal IS change of measure is usually unknown, even though theoretically there exists a perfect one (the zero-variance IS change of measure). In this thesis, we focus on the use of IS techniques and their application to estimate two RAM metrics: reliability (for static networks) and steady-state availability (for dynamic systems). The thesis focuses on finding and/or approximating the optimal IS change of measure to efficiently estimate RAM metrics in a rare-event context.
The contribution of the thesis is broadly divided into two main axes: first, we propose an adaptation of the approximate zero-variance IS method to estimate the reliability of static networks and show the application on real passenger rail systems; second, we propose a multi-level Cross-Entropy optimization scheme that can be used during pre-simulation to obtain CE-optimized IS rates for the transitions of Markovian Stochastic Petri Nets (SPNs) and use them in the main simulations to estimate the steady-state unavailability of highly reliable Markovian systems with complex logistics involved. Results from the methods show huge variance reduction and gain compared to MC simulations.
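The likelihood-ratio idea behind importance sampling can be shown on a toy static network; the Python sketch below estimates the disconnection probability of a five-link bridge network with a hand-picked biased failure probability and compares it with crude Monte Carlo, which typically returns zero here. This is only the basic reweighting mechanism, not the approximate zero-variance or cross-entropy schemes developed in the thesis; the network and the failure probabilities are invented.

```python
import numpy as np

rng = np.random.default_rng(13)

# Five-link bridge network: source s, sink t, relay nodes a and b.
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")]
p_fail = 1e-3                      # per-link failure probability (failures are rare)

def disconnected(up):
    """True if no s-t path exists over the working links."""
    reach, frontier = {"s"}, ["s"]
    while frontier:
        node = frontier.pop()
        for (u, v), ok in zip(edges, up):
            if ok:
                for x, y in ((u, v), (v, u)):
                    if x == node and y not in reach:
                        reach.add(y)
                        frontier.append(y)
    return "t" not in reach

def estimate(n, q):
    """Estimate P(disconnected); q > p_fail turns this into importance sampling."""
    states = rng.random((n, len(edges))) >= q            # True = link up, failure prob q
    lr_up, lr_down = (1 - p_fail) / (1 - q), p_fail / q  # per-link likelihood ratios
    weights = np.where(states, lr_up, lr_down).prod(axis=1)
    indicator = np.fromiter((disconnected(s) for s in states), dtype=float, count=n)
    est = np.mean(indicator * weights)
    err = np.std(indicator * weights) / np.sqrt(n)
    return est, err

for label, q in (("crude Monte Carlo  ", p_fail), ("importance sampling", 0.2)):
    est, err = estimate(100_000, q)
    print(f"{label}: {est:.3e} +/- {err:.1e}")
```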
APA, Harvard, Vancouver, ISO, and other styles
38

Zebbache, Ahmed. "Analyse et synthèse statistiques des circuits électroniques : mise en œuvre du simulateur ouvert SPICE-PAC et de la méthode du recuit simule." Châtenay-Malabry, Ecole centrale de Paris, 1996. http://www.theses.fr/1996ECAP0467.

Full text
Abstract:
In the design of electronic circuits, the statistical variations of circuit parameter values must be taken into account, because they affect the quality of the manufactured circuits. These statistical fluctuations, essentially due to manufacturing tolerances, induce variations in the responses of the manufactured circuits. The manufacturing yield, which is the proportion of manufactured circuits satisfying a set of acceptability constraints specified by the designer, is of considerable interest. Obtaining an acceptable yield is one of the main objectives of an electronic circuit designer. The Monte Carlo method was used to simulate the variations in component parameters as well as to estimate the manufacturing yield. This method was chosen because of its generality and its weak dependence on the stochastic variables. A program based on this method was developed and coupled to the SPICE-PAC simulator, whose modular structure is better suited to the requirements of statistical analysis. When the manufacturing yield is low, it must be improved. The centres-of-gravity method was used to optimize the yield, and the statistical sampling is improved through the common-points method. Although the centres-of-gravity method is robust and easy to use, it can be trapped in a local optimum of the yield. To solve this problem, we propose a manufacturing yield optimization approach that combines the centres-of-gravity method with simulated annealing.
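A minimal Python sketch of the workflow described above (Monte Carlo yield estimation plus a centres-of-gravity design update) is given below on a toy voltage divider; the component nominals, tolerances and acceptability window are invented, and no circuit simulator such as SPICE-PAC is involved.

```python
import numpy as np

rng = np.random.default_rng(17)

# Toy circuit: a resistive voltage divider whose output must stay inside a
# designer-specified acceptability window. Tolerances are +/-5 % (3-sigma, normal).
def output_voltage(r1, r2, vin=10.0):
    return vin * r2 / (r1 + r2)

spec_low, spec_high = 4.7, 5.3                 # acceptable output window [V]
rel_sigma = 0.05 / 3.0

def monte_carlo_yield(nominal, n=50_000):
    """Sample component values around the nominal design and estimate the yield."""
    samples = nominal * (1 + rel_sigma * rng.standard_normal((n, 2)))
    vout = output_voltage(samples[:, 0], samples[:, 1])
    passed = (vout >= spec_low) & (vout <= spec_high)
    return passed.mean(), samples, passed

# Centres-of-gravity yield optimization: move the nominal design along the direction
# from the centroid of failing samples towards the centroid of passing samples.
nominal = np.array([1100.0, 1000.0])           # ohms, deliberately mis-centred
for step in range(5):
    y, samples, passed = monte_carlo_yield(nominal)
    print(f"step {step}: nominal = {np.round(nominal, 1)}, estimated yield = {y:.3f}")
    if passed.all() or not passed.any():
        break
    nominal = nominal + (samples[passed].mean(axis=0) - samples[~passed].mean(axis=0))
```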
APA, Harvard, Vancouver, ISO, and other styles
39

Gao, Yong. "A Degradation-based Burn-in Optimization for Light Display Devices with Two-phase Degradation Patterns considering Warranty Durations and Measurement Errors." Ohio University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1509109739168013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Janzon, Krister. "Monte Carlo Path Simulation and the Multilevel Monte Carlo Method." Thesis, Umeå universitet, Institutionen för fysik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-151975.

Full text
Abstract:
A standard problem in the field of computational finance is that of pricing derivative securities. This is often accomplished by estimating an expected value of a functional of a stochastic process, defined by a stochastic differential equation (SDE). In such a setting the random sampling algorithm Monte Carlo (MC) is useful, where paths of the process are sampled. However, MC in its standard form (SMC) is inherently slow. Additionally, if the analytical solution to the underlying SDE is not available, a numerical approximation of the process is necessary, adding another layer of computational complexity to the SMC algorithm. Thus, the computational cost of achieving a certain level of accuracy of the estimation using SMC may be relatively high. In this thesis we introduce and review the theory of the SMC method, with and without the need of numerical approximation for path simulation. Two numerical methods for path approximation are introduced: the Euler–Maruyama method and Milstein's method. Moreover, we also introduce and review the theory of a relatively new (2008) MC method – the multilevel Monte Carlo (MLMC) method – which is only applicable when paths are approximated. This method boldly claims that it can – under certain conditions – eradicate the additional complexity stemming from the approximation of paths. With this in mind, we wish to see whether this claim holds when pricing a European call option, where the underlying stock process is modelled by geometric Brownian motion. We also want to compare the performance of MLMC in this scenario to that of SMC, with and without path approximation. Two numerical experiments are performed. The first to determine the optimal implementation of MLMC, a static or adaptive approach. The second to illustrate the difference in performance of adaptive MLMC and SMC – depending on the used numerical method and whether the analytical solution is available. The results show that SMC is inferior to adaptive MLMC if numerical approximation of paths is needed, and that adaptive MLMC seems to meet the complexity of SMC with an analytical solution. However, while the complexity of adaptive MLMC is impressive, it cannot quite compensate for the additional cost of approximating paths, ending up roughly ten times slower than SMC with an analytical solution.
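The setting described above (a European call under geometric Brownian motion, discretized by Euler-Maruyama) is the standard test case for MLMC; the Python sketch below implements a fixed-sample-size multilevel estimator with coupled fine and coarse paths and checks it against the Black-Scholes value. The sample sizes per level and the refinement factor are chosen by hand rather than by the adaptive rule studied in the thesis, and all parameters are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def level_estimator(level, n_samples, m=4):
    """Mean and variance of the MLMC correction Y_l = P_fine - P_coarse at one level,
    using Euler-Maruyama paths driven by the same Brownian increments."""
    nf = m ** level                      # fine-grid steps
    dt_f = T / nf
    dW = np.sqrt(dt_f) * rng.standard_normal((n_samples, nf))
    Sf = np.full(n_samples, S0)
    for i in range(nf):                  # fine path
        Sf = Sf + r * Sf * dt_f + sigma * Sf * dW[:, i]
    Pf = np.exp(-r * T) * np.maximum(Sf - K, 0.0)
    if level == 0:
        Y = Pf
    else:
        nc, dt_c = nf // m, m * dt_f
        dWc = dW.reshape(n_samples, nc, m).sum(axis=2)   # coarse increments = summed fine ones
        Sc = np.full(n_samples, S0)
        for i in range(nc):              # coupled coarse path
            Sc = Sc + r * Sc * dt_c + sigma * Sc * dWc[:, i]
        Y = Pf - np.exp(-r * T) * np.maximum(Sc - K, 0.0)
    return Y.mean(), Y.var()

# Simple fixed-sample-size multilevel estimator (the adaptive sample-size rule is omitted).
levels, samples = range(5), [200_000, 50_000, 12_000, 3_000, 800]
corrections = [level_estimator(l, n) for l, n in zip(levels, samples)]
mlmc_price = sum(mean for mean, _ in corrections)

d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))
print(f"MLMC estimate      : {mlmc_price:.4f}")
print(f"Black-Scholes value: {bs_price:.4f}")
print("per-level variances:", [f"{v:.2e}" for _, v in corrections])
```

The per-level variances shrink rapidly with the level, which is what allows MLMC to put most of its samples on the cheap coarse levels.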
APA, Harvard, Vancouver, ISO, and other styles
41

Basak, Subhasish. "Multipathogen quantitative risk assessment in raw milk soft cheese : monotone integration and Bayesian optimization." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG021.

Full text
Abstract:
Ce manuscrit se concentre sur l'optimisation Bayésienne d'un modèle d'appréciation quantitative des risques microbiologiques (AQRM) dans le cadre du projet ArtiSaneFood soutenu par l'Union européenne. L'objectif est d'établir des stratégies de bio-intervention efficaces pour les fabricants de fromage au lait cru en France, en s'appuyant sur trois types de travaux : 1) le développement d'un modèle AQRM multipathogène pour un fromage de type pâte molle au lait cru, 2) étudier des méthodes d'intégration monotone pour l'estimation des sorties du modèle AQRM et 3) la conception d'un algorithme d'optimisation Bayésienne adapté à un simulateur stochastique et coûteux.Dans la première partie, nous proposons un modèle AQRM multipathogène construit sur la base d'études existantes (voir, par exemple, Bonifait et al., 2021, Perrin et al., 2014, Sanaa et al., 2004, Strickland et al., 2023). Ce modèle est conçu pour estimer l'impact des maladies d'origine alimentaire sur la santé publique, causées par des agents pathogènes tels que Escherichia coli entérohémorragiques (EHEC), Salmonella et Listeria monocytogenes, potentiellement présents dans le fromage de type pâte molle au lait cru. Ce modèle “farm-to-fork” intègre les mesure de maitrise liées aux tests microbiologiques du lait et du fromage, permettant d'estimer les coûts associés aux interventions. Une implémentation du modèle AQRM pour EHEC est fournie en R et dans le cadre FSKX (Basak et al., under review). La deuxième partie de ce manuscrit explore l'application potentielle de méthodes d'intégration séquentielle, exploitant les propriétés de monotonie et de bornage des sorties du simulateur. Nous menons une revue de littérature approfondie sur les méthodes d'intégration existantes (voir, par exemple, Kiefer, 1957, Novak, 1992), et examinons les résultats théoriques concernant leur convergence. Notre contribution comprend la proposition d'améliorations à ces méthodes et la discussion des défis associés à leur application dans le domaine de l'AQRM.Dans la dernière partie de ce manuscrit, nous proposons un algorithme Bayésien d'optimisation multiobjectif pour estimer les entrées optimales de Pareto d'un simulateur stochastique et coûteux en calcul. L'approche proposée est motivée par le principe de “Stepwise Uncertainty Reduction” (SUR) (voir, par exemple, Vazquez and Bect, 2009, Vazquez and Martinez, 2006, Villemonteix et al., 2007), avec un critère d'échantillonnage basé sur weighted integrated mean squared error (w-IMSE). Nous présentons une évaluation numérique comparant l'algorithme proposé avec PALS (Pareto Active Learning for Stochastic simulators) (Barracosa et al., 2021), sur un ensemble de problèmes de test bi-objectifs. Nous proposons également une extension (Basak et al., 2022a) de l'algorithme PALS, adaptée au cas d'application de l'AQRM<br>This manuscript focuses on Bayesian optimization of a quantitative microbiological risk assessment (QMRA) model, in the context of the European project ArtiSaneFood, supported by the PRIMA program. 
The primary goal is to establish efficient bio-intervention strategies for cheese producers in France.This work is divided into three broad directions: 1) development and implementation of a multipathogen QMRA model for raw milk soft cheese, 2) studying monotone integration methods for estimating outputs of the QMRA model, and 3) designing a Bayesian optimization algorithm tailored for a stochastic and computationally expensive simulator.In the first part we propose a multipathogen QMRA model, built upon existing studies in the literature (see, e.g., Bonifait et al., 2021, Perrin et al., 2014, Sanaa et al., 2004, Strickland et al., 2023). This model estimates the impact of foodborne illnesses on public health, caused by pathogenic STEC, Salmonella and Listeria monocytogenes, which can potentially be present in raw milk soft cheese. This farm-to-fork model also implements the intervention strategies related to mlik and cheese testing, which allows to estimate the cost of intervention. An implementation of the QMRA model for STEC is provided in R and in the FSKX framework (Basak et al., under review). The second part of this manuscript investigates the potential application of sequential integration methods, leveraging the monotonicity and boundedness properties of the simulator outputs. We conduct a comprehensive literature review on existing integration methods (see, e.g., Kiefer, 1957, Novak, 1992), and delve into the theoretical findings regarding their convergence. Our contribution includes proposing enhancements to these methods and discussion on the challenges associated with their application in the QMRA domain.In the final part of this manuscript, we propose a Bayesian multiobjective optimization algorithm for estimating the Pareto optimal inputs of a stochastic and computationally expensive simulator. The proposed approach is motivated by the principle of Stepwise Uncertainty Reduction (SUR) (see, e.g., Vazquezand Bect, 2009, Vazquez and Martinez, 2006, Villemonteix et al., 2007), with a weighted integrated mean squared error (w-IMSE) based sampling criterion, focused on the estimation of the Pareto front. A numerical benchmark is presented, comparing the proposed algorithm with PALS (Pareto Active Learning for Stochastic simulators) (Barracosa et al., 2021), over a set of bi-objective test problems. We also propose an extension (Basak et al., 2022a) of the PALS algorithm, tailored to the QMRA application case
APA, Harvard, Vancouver, ISO, and other styles
42

Lin, Xichen. "Monte Carlo Simulation and Integration." Scholarship @ Claremont, 2018. https://scholarship.claremont.edu/cmc_theses/2009.

Full text
Abstract:
In this paper, we introduce the Tootsie Pop Algorithm and explore its use in different contexts. It can be used to estimate more general problems where a measure is defined, or, in the context of statistical applications, integration involving high dimensions. The Tootsie Pop Algorithm was introduced by Huber and Schott [2]. The general process of the Tootsie Pop Algorithm, just as its name suggests, is a process of peeling down the outer shell, which is the larger enclosing set, to the center, which is the smaller enclosed set. We obtain the average number of peels, which gives us an understanding of the ratio between the size of the shell and the size of the center. Each peel is generated by a random draw within the outer shell: if the drawn point is located in the center, we are done; otherwise we update the outer shell such that the drawn point is right on its edge.
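The peeling process described above is easy to reproduce for nested balls, where the exact measure ratio is known; the Python sketch below runs the Tootsie Pop Algorithm on a unit ball with a smaller concentric centre and uses the fact (due to Huber and Schott) that the peel count of one run has mean equal to the log of the measure ratio. The dimension and radii are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def uniform_in_ball(radius, dim):
    """Uniform random point in a dim-dimensional ball of given radius."""
    direction = rng.standard_normal(dim)
    direction /= np.linalg.norm(direction)
    return radius * rng.random() ** (1.0 / dim) * direction

def tootsie_pop_run(r_center, dim):
    """One TPA run: peel the outer ball (radius 1) down to the centre ball (radius r_center).
    Returns the number of peels."""
    radius, peels = 1.0, 0
    while True:
        x = uniform_in_ball(radius, dim)
        if np.linalg.norm(x) <= r_center:
            return peels
        radius = np.linalg.norm(x)     # shrink the shell so the draw sits on its edge
        peels += 1

# The peel count of a single run has mean ln(vol(outer)/vol(centre)),
# so averaging over many runs estimates that log-ratio.
dim, r_center, n_runs = 5, 0.6, 20_000
peels = [tootsie_pop_run(r_center, dim) for _ in range(n_runs)]
print(f"estimated ln(vol ratio): {np.mean(peels):.3f}")
print(f"exact     ln(vol ratio): {dim * np.log(1.0 / r_center):.3f}")
```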
APA, Harvard, Vancouver, ISO, and other styles
43

Marceau, Caron Gaetan. "Optimization and uncertainty handling in air traffic management." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112183/document.

Full text
Abstract:
In this thesis, we investigate the problem of reconciling aircraft operators' demand with airspace capacity while taking uncertainty into account in air traffic management. In the first part of the work, we identify the main causes of uncertainty in trajectory prediction (TP), the core component underlying automation in ATM systems. We study the problem of online parameter tuning of the TP during the climbing phase with the optimization algorithm CMA-ES. The main conclusion, corroborated by other works in the literature, is that ground TP is not sufficiently accurate today to support fully automated safety-critical applications. Hence, with the current data-sharing limitations, any centralized optimization system in Air Traffic Control should consider the human-in-the-loop factor, as well as other uncertainties.
Consequently, in the second part of the thesis, we develop models and algorithms from a global network perspective and describe a generic uncertainty model that captures flight trajectory uncertainties and infers their impact on the occupancy count of Air Traffic Control sectors. This common indicator coarsely quantifies the complexity managed by air traffic controllers in terms of the number of flights. In the third part of the thesis, we formulate a variant of the Air Traffic Flow and Capacity Management problem in the tactical phase to bridge the gap between the network manager and air traffic controllers. The optimization problem consists in jointly minimizing the cost of delays and the cost of congestion while meeting sequencing constraints. To cope with the high dimensionality of the problem, evolutionary multi-objective optimization algorithms are used with an indirect representation and greedy schedulers to optimize flight plans. An additional uncertainty model is added on top of the network model, allowing us to study the performance and robustness of the proposed optimization algorithm in a noisy context. We validate our approach on real-world and artificially densified instances obtained from the Central Flow Management Unit in Europe.
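The first part of this abstract describes tuning trajectory-prediction parameters online with CMA-ES. As a hedged illustration of what such a loop can look like, the sketch below uses the pycma package (`cma`, assumed installed) in its ask/tell form to tune two hypothetical parameters of a toy climb model against noisy synthetic altitude observations; the toy model, the parameter names and the data are stand-ins invented for this example, not the thesis's trajectory predictor.

```python
import numpy as np
import cma  # pycma package, assumed available

rng = np.random.default_rng(42)
t = np.linspace(0.0, 300.0, 60)                      # seconds since start of climb

def climb_model(params, t):
    """Hypothetical climb profile: altitude gained with a saturating climb rate.
    params = (initial_climb_rate_mps, decay_time_s); a stand-in for the real TP."""
    rate, tau = params
    return rate * tau * (1.0 - np.exp(-t / tau))

# Synthetic "radar" observations generated from hidden true parameters, plus noise.
observed = climb_model((12.0, 180.0), t) + rng.normal(scale=30.0, size=t.size)

def cost(params):
    """RMSE between predicted and observed altitudes (the tuning objective)."""
    if min(params) <= 0:
        return 1e9  # crude way to keep the search in the feasible region
    return float(np.sqrt(np.mean((climb_model(params, t) - observed) ** 2)))

es = cma.CMAEvolutionStrategy([8.0, 100.0], 20.0)    # initial guess and step size
while not es.stop():
    candidates = es.ask()                            # sample a population of parameter vectors
    es.tell(candidates, [cost(c) for c in candidates])
print("tuned parameters:", es.result.xbest)
```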
APA, Harvard, Vancouver, ISO, and other styles
44

Dali, Sarabjyot Singh. "Analysis of dense colloidal dispersions with multiwavelength frequency domain photon migration measurements." [College Station, Tex.] : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Lee, Ming Ripman, and 李明. "Monte Carlo simulation for confined electrolytes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31240513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Swetnam, Adam D. "Monte Carlo simulation of lattice polymers." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/49196/.

Full text
Abstract:
The phase behaviour of lattice polymers and peptides, under various conditions, is investigated using Monte Carlo simulation. Wang-Landau sampling is used so that, in principle, phase diagrams can be determined from a single simulation. It is demonstrated that the pseudophase diagram for polymer molecules, in several environments, can be plotted when sampling only from the internal degrees of freedom, by determining an appropriate density of states. Several improvements to the simulation methods used are detailed. A new prescription for setting the modification factor in the Wang-Landau algorithm is described, tested and found, for homopolymers, to result in near optimum convergence throughout the simulation. Different methods of selecting moves from the pull move set are detailed, and their relative efficiencies determined. Finally, it is shown that results for a polymer in a slit with one attractive surface can be determined by sampling only from the internal degrees of freedom of a lattice polymer. Adsorption of lattice polymers and peptides is investigated by determining pseudophase diagrams for individual molecules. The phase diagram for a homopolymer molecule, near a surface with a pattern of interaction, is determined, with a pseudophase identified where the polymer is commensurate with the pattern. For an example lattice peptide, the existence of the new pseudophase is found to depend on whether both hydrophobic and polar beads are attracted to the surface. The phase diagram for a ring polymer under applied force, with variable solvent quality, is determined for the first time. The effect, on the phase diagram, of topological knots in the ring polymer is investigated. In addition to eliminating pseudophases where the polymer is flattened into a single layer, it is found that non-trivial knots result in additional pseudophases for tensile force.
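The abstract revolves around Wang-Landau sampling, whose core loop is compact: propose a move, accept it with probability min(1, g(E_old)/g(E_new)), update the running density-of-states estimate and an energy histogram, and shrink the modification factor once the histogram is flat. The Python sketch below applies this loop to a small 2D Ising lattice rather than to lattice polymers; the lattice size, the flatness threshold and the classic f → √f schedule are illustrative defaults, not the new prescription for the modification factor proposed in the thesis.

```python
import numpy as np

def total_energy(spins):
    """Energy of a 2D Ising configuration with periodic boundaries (J = 1)."""
    return -int(np.sum(spins * (np.roll(spins, 1, axis=0) + np.roll(spins, 1, axis=1))))

def wang_landau(L=4, flatness=0.8, ln_f_final=1e-4, seed=0):
    """Minimal Wang-Landau estimate of ln g(E) for the L x L Ising model."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    E = total_energy(spins)
    ln_g, hist = {E: 0.0}, {E: 0}
    ln_f = 1.0                                   # initial modification factor ln f
    while ln_f > ln_f_final:
        for _ in range(10_000):                  # steps between flatness checks
            i, j = rng.integers(L, size=2)
            # Energy change from flipping spin (i, j); neighbours with wrap-around.
            nb = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
               + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
            E_new = E + 2 * int(spins[i, j]) * int(nb)
            ln_g.setdefault(E_new, 0.0)
            hist.setdefault(E_new, 0)
            # Accept with probability min(1, g(E)/g(E_new)).
            if rng.random() < np.exp(min(0.0, ln_g[E] - ln_g[E_new])):
                spins[i, j] *= -1
                E = E_new
            ln_g[E] += ln_f                      # update density of states and histogram
            hist[E] += 1
        counts = np.array(list(hist.values()))
        if counts.min() > flatness * counts.mean():   # histogram is flat: refine f
            ln_f /= 2.0                               # classic f -> sqrt(f) schedule
            hist = {k: 0 for k in hist}
    return ln_g

ln_g = wang_landau()
print({E: round(v, 2) for E, v in sorted(ln_g.items())})
```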
APA, Harvard, Vancouver, ISO, and other styles
47

Lee, Ming Ripman. "Monte Carlo simulation for confined electrolytes /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22055009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Rakotomanga, Prisca. "Inversion de modèle et séparation de signaux de spectroscopie optique pour la caractérisation in vivo de tissus cutanés." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0329.

Full text
Abstract:
The detection of pre-cancerous biological tissues during clinical diagnosis is made possible by the development of optical spectroscopic methods, also known as optical biopsy. These methods make it possible to identify changes in the metabolism and structure of biological tissues. In this thesis, optical measurements are performed on skin tissues using an instrument that integrates two modalities: spatially resolved diffuse reflectance and autofluorescence. Exploiting the spectra involves solving a non-linear inverse problem to estimate the optical properties of the probed medium using a light-tissue interaction model. From these optical properties, it is possible to characterize the condition of the tissue, healthy or abnormal. The first contribution of the thesis is an inversion method that integrates a fast Monte Carlo simulation and a cost function handling both spectral modalities, in order to improve the accuracy of the estimated optical properties regardless of the probe geometry. The second contribution is to study the kinetics of fluorophores present in the tissues through a source separation approach, which does not require a Monte Carlo simulation. In this approach, a collection of spectra acquired over time is analyzed jointly, with few a priori assumptions about the shape of the source signals. The sensitivity of this approach is validated by the use of optical clearing agents, which increase tissue transparency by decreasing photon scattering and absorption in the tissues and thus give access to more accurate in-depth information.
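The first contribution described here follows a model-based inversion pattern: adjust optical-property parameters until a forward light-tissue model reproduces the measured spectra under a cost function covering both modalities. The snippet below is a minimal, hedged sketch of that pattern using scipy's least-squares solver; the analytic "forward model", the parameter names (absorption mu_a, reduced scattering mu_s) and the synthetic measurements are placeholders for the fast Monte Carlo simulator and the real reflectance and autofluorescence data.

```python
import numpy as np
from scipy.optimize import least_squares

wavelengths = np.linspace(450.0, 750.0, 40)           # nm, illustrative grid
rng = np.random.default_rng(3)

def forward_model(params, wl):
    """Toy stand-in for the light-tissue model: returns stacked
    diffuse-reflectance and autofluorescence spectra for (mu_a, mu_s)."""
    mu_a, mu_s = params
    reflectance = np.exp(-mu_a * (wl / 500.0)) * mu_s / (mu_s + 1.0)
    fluorescence = mu_a * np.exp(-((wl - 520.0) / 60.0) ** 2) / (1.0 + mu_s)
    return np.concatenate([reflectance, fluorescence])

# Synthetic "measurements" generated from hidden true properties, plus noise.
measured = forward_model((0.8, 2.5), wavelengths) + rng.normal(scale=0.005, size=80)

def residuals(params):
    """Two-modality cost: residuals of both spectra, weighted equally here."""
    return forward_model(params, wavelengths) - measured

fit = least_squares(residuals, x0=[0.3, 1.0], bounds=([0.0, 0.0], [5.0, 10.0]))
print("estimated (mu_a, mu_s):", np.round(fit.x, 3))
```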
APA, Harvard, Vancouver, ISO, and other styles
49

Green, Robert C. II. "Novel Computational Methods for the Reliability Evaluation of Composite Power Systems using Computational Intelligence and High Performance Computing Techniques." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1338894641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Voegele, Simon. "Shortfall-Minimierung Theorie und Monte Carlo Simulation /." St. Gallen, 2007. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/02922300001/$FILE/02922300001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles