
Dissertations / Theses on the topic 'Overtime. Monte Carlo method. Simulation methods'


Consult the top 39 dissertations / theses for your research on the topic 'Overtime. Monte Carlo method. Simulation methods.'


1

Woo, Sungkwon. "Monte Carlo simulation of labor performance during overtime and its impact on project duration." 1999. Digital version accessible at http://wwwlib.umi.com/cr/utexas/main.

2

Homem-de-Mello, Tito. "Simulation-based methods for stochastic optimization." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/24846.

3

Obradovic, Borna Josip. "Multi-dimensional Monte Carlo simulation of ion implantation into complex structures." 1999. Digital version accessible at http://wwwlib.umi.com/cr/utexas/main.

4

Mansour, Nabil S. "Inclusion of electron-plasmon interactions in ensemble Monte Carlo simulations of degenerate GaAs." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/13862.

5

Melson, Joshua Hiatt. "Fatigue Crack Growth Analysis with Finite Element Methods and a Monte Carlo Simulation." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/48432.

Abstract:
Fatigue crack growth in engineered structures reduces the structure's load-carrying capacity and eventually leads to failure. The number of cycles required to grow a crack from an initial length to the critical length is called the fatigue fracture life. In this thesis, five different methods for analyzing the fatigue fracture life of a center-cracked plate were compared to experimental data previously collected by C. M. Hudson in a 1969 NASA report studying R-ratio effects on crack growth in 7075-T6 aluminum alloy. The Paris, Walker, and Forman fatigue crack growth models were fit to the experimental data. The Walker equation best fit the data since it incorporated R-ratio effects and had a Root Mean Square Error (RMSE) similar to the other models; there was insufficient data in the unstable region of crack growth to adequately fit the Forman equation. Analytical models were used as a baseline for all fatigue fracture life comparisons. Life estimates from AFGROW and from finite elements with mid-side nodes moved to their quarter-point locations compared very well with the analytical model, with errors less than 3%. The Virtual Crack Closure Technique (VCCT) was selected as a method for crack propagation along a predefined path. Stress intensity factors (SIFs) for shorter crack lengths were found to be low, resulting in a life overestimated by about 8%. The eXtended Finite Element Method with Phantom Nodes (XFEM-PN) was used, allowing crack propagation along a solution-dependent path, independent of the mesh; low SIFs throughout growth resulted in life estimates 20% too large. All finite element analyses were performed in Abaqus 6.13-3. An integrated polynomial method was developed for calculating life based on Abaqus' results, allowing coarser meshes to give answers closer to the analytical estimate. None of the five methods for estimating life compared well with the experimental data, with analytical errors on life ranging from 10-20%. These errors were attributed to the limited number of crack growth experiments run at each R-ratio and the large variability typically seen in growth rates. Monte Carlo simulations were run to estimate the distribution of life. It was shown that material constants in the Walker model must be sampled jointly, based on their interrelation, with a multivariate normal probability density function. Both analytical and XFEM-PN simulations had similar, approximately normal distributions of life, with coefficients of variation of approximately 3%. It was concluded that Abaqus' XFEM-PN is a reasonable means of estimating fatigue fracture life and its variation, and this method could be extended to other geometries and three-dimensional analyses.
Master of Science
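
As an illustration of the closing step of this abstract, here is a minimal sketch of such a Monte Carlo life estimate: correlated Walker-type constants (log10 C, m) are drawn from a multivariate normal and a crack-growth law is integrated for each draw. All numbers (means, covariance, stress, crack lengths, geometry factor Y = 1) are invented placeholders, not Hudson's data or the thesis' fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint distribution of Walker constants [log10(C), m];
# the strong negative correlation mimics how fitted constants covary.
mean = np.array([-10.0, 3.0])
cov = np.array([[0.010, -0.009],
                [-0.009, 0.010]])

def life_cycles(log10C, m, a0=0.002, ac=0.020, S=100.0, R=0.1, gamma=0.5):
    """Cycles to grow a center crack from a0 to ac (meters) under
    da/dN = C * (dK / (1 - R)**(1 - gamma))**m, with Y = 1."""
    C = 10.0 ** log10C
    a = np.linspace(a0, ac, 2000)
    dK = S * np.sqrt(np.pi * a)                 # stress intensity range
    dK_eff = dK / (1.0 - R) ** (1.0 - gamma)    # Walker R-ratio correction
    dN_da = 1.0 / (C * dK_eff ** m)             # cycles per meter of growth
    return np.sum(dN_da) * (a[1] - a[0])        # crude rectangle-rule integral

draws = rng.multivariate_normal(mean, cov, size=5000)
lives = np.array([life_cycles(lc, m) for lc, m in draws])
print(f"mean life = {lives.mean():.3e} cycles, CoV = {lives.std() / lives.mean():.1%}")
```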
6

Shrestha, Surendra Prakash. "An effective medium approximation and Monte Carlo simulation in subsurface flow modeling." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/38642.

7

Samant, Asawari. "Multiscale Monte Carlo methods to cope with separation of scales in stochastic simulation of biological networks." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 146 p, 2007. http://proquest.umi.com/pqdweb?did=1407500711&sid=13&Fmt=2&clientId=8331&RQT=309&VName=PQD.

8

De Ponte, Candice Natasha. "Pricing barrier options with numerical methods / Candice Natasha de Ponte." Thesis, North-West University, 2013. http://hdl.handle.net/10394/8672.

Abstract:
Barrier options are becoming more popular, mainly due to the reduced cost of holding a barrier option compared to holding a standard call/put option. Exotic options, however, are difficult to price, since the payoff functions depend on the whole path of the underlying process rather than on its value at a specific time instant. A barrier option is path dependent, meaning that the payoff depends on the path followed by the price of the underlying asset, so barrier option prices are especially sensitive to volatility. For basic exchange-traded options, analytical prices based on the Black-Scholes formula can be computed; these prices are influenced by supply and demand. There is not always an analytical solution for an exotic option, hence it is advantageous to have methods that efficiently provide accurate numerical solutions. This study gives a literature overview and compares implementations of some available numerical methods applied to barrier options. The three numerical methods that are adapted and compared for the pricing of barrier options are binomial tree methods, Monte Carlo methods, and finite difference methods.
Thesis (MSc (Applied Mathematics))--North-West University, Potchefstroom Campus, 2013
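
As a sketch of the Monte Carlo route among the three methods listed above, the following prices a down-and-out barrier call under Black-Scholes dynamics. All parameters are illustrative, and the barrier is only monitored at the discrete time steps, which merely approximates a continuously monitored contract.

```python
import numpy as np

def mc_down_and_out_call(S0=100.0, K=100.0, B=90.0, r=0.05, sigma=0.2,
                         T=1.0, steps=252, n_paths=20_000, seed=1):
    """Monte Carlo price of a down-and-out call: simulate GBM paths and
    void any path that touches the barrier B at a monitoring date."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((n_paths, steps))
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(log_increments, axis=1))   # risk-neutral GBM
    alive = S.min(axis=1) > B                            # barrier never hit
    payoff = np.where(alive, np.maximum(S[:, -1] - K, 0.0), 0.0)
    disc = np.exp(-r * T)
    return disc * payoff.mean(), disc * payoff.std(ddof=1) / np.sqrt(n_paths)

price, stderr = mc_down_and_out_call()
print(f"down-and-out call price ~ {price:.4f} +/- {stderr:.4f}")
```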
9

Saleh, Ali, and Ahmad Al-Kadri. "Option pricing under Black-Scholes model using stochastic Runge-Kutta method." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-53783.

Abstract:
The purpose of this paper is to solve the European option pricing problem under the Black–Scholes model. Our approach is to use the so-called stochastic Runge–Kutta (SRK) numerical scheme to find the corresponding expectation of the functional of the stochastic differential equation under the Black–Scholes model. Several numerical solutions were computed to study how quickly the result converges to the theoretical value. Then, we study the order of convergence of the SRK method with the help of MATLAB.
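
The SRK scheme itself is not reproduced here; as a stand-in illustrating the convergence experiment the abstract describes, this sketch uses the simpler Euler-Maruyama discretization of the Black-Scholes SDE and compares the simulated call price against the closed-form value as the number of time steps grows. All parameters are illustrative, and Monte Carlo noise masks the bias once it is small.

```python
import numpy as np
from math import erf, exp, log, sqrt

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def bs_call():
    """Closed-form Black-Scholes call price."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S0 * Phi(d1) - K * exp(-r * T) * Phi(d2)

def euler_call(steps, n_paths=1_000_000, seed=2):
    """Euler-Maruyama for dS = r S dt + sigma S dW, then discounted payoff."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    S = np.full(n_paths, S0)
    for _ in range(steps):
        S = S + r * S * dt + sigma * S * sqrt(dt) * rng.standard_normal(n_paths)
    return exp(-r * T) * np.maximum(S - K, 0.0).mean()

exact = bs_call()
for steps in (2, 4, 8, 16):
    print(f"steps={steps:2d}  |simulated - exact| = {abs(euler_call(steps) - exact):.4f}")
```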
10

PIUVEZAM FILHO, HELIO. "Estudo de um sistema de coincidência 4-pi-beta-gama para a medida absoluta de atividade de radionuclídeos empregando cintiladores plásticos." Repositório Institucional do IPEN, 2007. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11507.

Master's dissertation (IPEN/D), Instituto de Pesquisas Energéticas e Nucleares, IPEN-CNEN/SP.
11

Gorelov, Vitaly. "Quantum Monte Carlo methods for electronic structure calculations : application to hydrogen at extreme conditions." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASF002.

Abstract:
The hydrogen metallization problem, posed almost 80 years ago, has been named the third open question in physics of the 21st century. Indeed, due to its lightness and reactivity, experimental information on high-pressure hydrogen is limited and extremely difficult to obtain, so the development of accurate methods to guide experiments is essential. In this thesis, we focus on studying the electronic structure, including excited-state phenomena, using quantum Monte Carlo (QMC) techniques. In particular, we develop a new method of computing energy gaps accompanied by an accurate treatment of the finite-simulation-cell error, and we formally relate the finite-size error to the dielectric constant of the material. Before studying hydrogen, the new method is tested on crystalline silicon and carbon diamond, systems for which experimental information on the gap is available. Although the finite-size-corrected gap values for carbon and silicon are larger than the experimental ones, our results demonstrate that the bias due to the finite-size supercell can be corrected for, so precise values in the thermodynamic limit can be obtained for small supercells without need for numerical extrapolation. As hydrogen is a very light material, nuclear quantum effects are important. An accurate treatment of nuclear effects can be achieved within the Coupled Electron Ion Monte Carlo (CEIMC) method, a QMC-based first-principles simulation method, and we use CEIMC results to discuss the thermal renormalization of electronic properties. We introduce a formal way of treating the electronic gap and band structure at finite temperature within the adiabatic approximation and discuss the approximations that have to be made. We also propose a novel way of renormalizing the optical properties at low temperature, an improvement upon the commonly used semiclassical approximation. Finally, we apply all the methodological developments of this thesis to study the metallization of solid and liquid hydrogen. We find that for ideal crystalline molecular hydrogen the QMC gap is in agreement with previous GW calculations, while treating nuclear zero-point effects causes a large reduction in the gap (2 eV). Determining the crystalline structure of solid hydrogen is still an open problem. Depending on the structure, the fundamental indirect gap closes between 380 and 530 GPa for ideal crystals and 330-380 GPa for quantum crystals, with less dependence on the crystalline symmetry. Beyond this pressure, the system enters a bad-metal phase in which the density of states at the Fermi level increases with pressure, up to 450-500 GPa, when the direct gap closes. Our work partially supports the interpretation of recent experiments on high-pressure hydrogen; however, the scenario where solid hydrogen metallization is accompanied by a structural change, for example a molecular dissociation, cannot be ruled out. We also explore the possibility of using a multideterminant representation of excited states to model neutral excitations and compute the conductivity via the Kubo formula; we applied this methodology to ideal crystalline hydrogen, limited to the variational Monte Carlo level of theory. For liquid hydrogen, the main finding is that the gap closure is continuous and coincides with the molecular dissociation transition. We were able to benchmark density functional theory (DFT) functionals against the QMC density of states. When using the QMC-renormalized Kohn-Sham eigenvalues to compute optical properties within the Kubo-Greenwood theory, we found that the previously calculated theoretical optical absorption is shifted towards lower energies.
12

Omaghomi, Toritseju O. "Analysis of Methods for Estimating Water Demand in Buildings." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1406881340.

13

Leme, Rafael Reis [UNESP]. "Teoria quântica do campo escalar real com autoacoplamento quártico - simulações de Monte Carlo na rede com um algoritmo worm." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/92038.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
In this work we present results of Monte Carlo simulations of the φ⁴ quantum field theory on a (1+1) lattice employing the recently proposed worm algorithm. In Monte Carlo simulations, the efficiency of an algorithm is measured in terms of a dynamical critical exponent ζ, which is related to the autocorrelation time τ of the measurements as τ ∝ L^ζ, where L is the lattice length. The autocorrelation time provides a measure of the "memory" of the Monte Carlo updating process. The worm algorithm has a ζ comparable with the ones obtained with the efficient cluster algorithms, but uses local updates only. We present results for observables as functions of the unrenormalized parameters of the theory, λ and μ². Particular attention is devoted to the vacuum expectation value ⟨φ(x)⟩ and the two-point correlation function ⟨φ(x)φ(x′)⟩. We determine the critical line (λ_c, μ_c²) that separates the symmetric and spontaneously broken phases and compare with results from the literature.
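
To make the scaling relation τ ∝ L^ζ concrete, the sketch below estimates integrated autocorrelation times from measurement series and fits ζ on a log-log scale. The series here are synthetic AR(1) stand-ins whose true autocorrelation time grows as L² (a placeholder for the output of an actual lattice simulation), not data from the worm algorithm.

```python
import numpy as np

def integrated_autocorr_time(x, window=5.0):
    """Integrated autocorrelation time with the usual self-consistent
    window: sum normalized autocorrelations until t exceeds window*tau."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.var(x) * n)
    tau = 0.5
    for t in range(1, n // 2):
        tau += acf[t]
        if t >= window * tau:
            break
    return max(tau, 0.5)

rng = np.random.default_rng(3)
sizes, taus = [8, 16, 32, 64], []
for L in sizes:
    tau_true = 0.05 * L**2                  # placeholder: tau ~ L^2
    rho = np.exp(-1.0 / tau_true)
    noise = rng.standard_normal(200_000)
    series = np.empty_like(noise)
    series[0] = noise[0]
    for i in range(1, len(noise)):          # AR(1): x_i = rho*x_{i-1} + eps_i
        series[i] = rho * series[i - 1] + noise[i]
    taus.append(integrated_autocorr_time(series))

zeta, _ = np.polyfit(np.log(sizes), np.log(taus), 1)
print(f"fitted dynamical exponent zeta ~ {zeta:.2f} (target 2)")
```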
14

BRITO, ANDREIA B. de. "Determinação da taxa de desintegração de Tc-99m e In-111 em sistemas de coincidências." Repositório Institucional do IPEN, 2011. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10047.

Master's dissertation (IPEN/D), Instituto de Pesquisas Energéticas e Nucleares, IPEN-CNEN/SP.
15

Centeno, Hugo Alexandre do Carmo. "MODELAGEM DE CRONOGRAMA DE PROJETO PELA FERRAMENTA DSM COM APOIO AO GERENCIAMENTO E TOMADA DE DECISÕES PELA SIMULAÇÃO DE MONTE CARLO." Pontifícia Universidade Católica de Goiás, 2018. http://tede2.pucgoias.edu.br:8080/handle/tede/3971.

Abstract:
Delays in construction projects are a recurrent fact in several countries and regions, for the most varied types of works. These delays, in addition to negatively impacting the image of the companies contracted to provide construction services, cause several financial losses to the interested parties: client and contractor. Several studies addressing the factors that lead to schedule delays rank the variation in activity durations, and/or inadequate estimates of activity durations, among the ten major delay factors in projects. Contributing to inadequate production estimates for the activities are a lack of knowledge of their complexity and the lack or insufficiency of historical data that would allow a reliable estimate of activity durations. Thus, this study aims to describe the behavior of the variability of activity durations in an environment with little activity productivity data. The productivity data analyzed were taken from three similar construction projects carried out by the same company, which served as the case studies of this work. For the analysis of the variability and the description of the behavior of the activities, after collecting productivity data for the project activities, nonparametric bootstrap resampling of the data associated with Monte Carlo Simulation (MCS) was used. Later, in order to verify the reliability of the results for the variation in activity durations, the schedule of the case studies was modeled using the Critical Path Method (CPM) and the Dependency Structure Matrix (DSM) and submitted to MCS. The simulated total durations of the projects were adequate to the actual completion periods of the projects studied, leading to the conclusion that the results for the variation in activity durations obtained by the cited technique are reliable.
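
A toy version of the resampling scheme described above: activity durations are bootstrapped (nonparametrically, with replacement) from small productivity samples, and each Monte Carlo trial recomputes a critical-path duration. The four-activity network and all numbers are invented placeholders, not the thesis' case-study data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical observed durations (days) per activity, from 3 past projects.
observed = {
    "A": [4.0, 5.5, 5.0],
    "B": [3.0, 2.5, 4.0],
    "C": [6.0, 7.5, 6.5],
    "D": [2.0, 2.0, 3.0],
}

def project_duration(d):
    """Toy precedence network: A, then B and C in parallel, then D."""
    return d["A"] + max(d["B"], d["C"]) + d["D"]

totals = []
for _ in range(20_000):
    # Nonparametric bootstrap: resample each activity's history with
    # replacement, then take the mean as that trial's duration.
    trial = {a: rng.choice(v, size=len(v), replace=True).mean()
             for a, v in observed.items()}
    totals.append(project_duration(trial))

totals = np.array(totals)
print(f"mean = {totals.mean():.1f} d, P90 = {np.percentile(totals, 90):.1f} d")
```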
16

MATOS, IZABELA T. de. "Padronização dos radionuclídeos F-18 e In-111 e determinação dos coeficientes de conversão interna total para o In-111 em sistema de coincidência por software." Repositório Institucional do IPEN, 2014. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10615.

Master's dissertation (IPEN/D), Instituto de Pesquisas Energéticas e Nucleares, IPEN-CNEN/SP.
17

Nhongo, Tawuya D. R. "Pricing exotic options using C++." Thesis, Rhodes University, 2007. http://hdl.handle.net/10962/d1008373.

Abstract:
This document demonstrates the use of the C++ programming language as a simulation tool in the efficient pricing of exotic European options. Extensions to the basic problem of simulation pricing are undertaken, including variance reduction by conditional expectation, control variates and antithetic variates. Ultimately we were able to produce a modularized, easily extendable program which effectively makes use of Monte Carlo simulation techniques to price lookback, Asian and barrier exotic options. Theories of variance reduction were validated, except in cases where we used control variates in combination with the other variance reduction techniques, in which case we observed increased variance. Again, the main aim of this half-thesis was to produce a C++ program which would produce stable pricings of exotic options.
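
The thesis' C++ program is not available here; as a Python stand-in, this sketch shows one of the variance-reduction devices mentioned, antithetic variates, applied to an arithmetic-average Asian call. Parameters are illustrative; note the antithetic variance is per pair of paths, so a fair comparison is against a plain estimator of twice the sample size.

```python
import numpy as np

S0, K, r, sigma, T, steps = 100.0, 100.0, 0.05, 0.2, 1.0, 50

def asian_payoffs(z):
    """Discounted arithmetic-average Asian call payoffs for normal draws z."""
    dt = T / steps
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    avg = (S0 * np.exp(log_paths)).mean(axis=1)
    return np.exp(-r * T) * np.maximum(avg - K, 0.0)

rng = np.random.default_rng(5)
z = rng.standard_normal((100_000, steps))

plain = asian_payoffs(z)
anti = 0.5 * (asian_payoffs(z) + asian_payoffs(-z))   # antithetic pairing

print(f"plain:      price = {plain.mean():.4f}, variance = {plain.var():.4f}")
print(f"antithetic: price = {anti.mean():.4f}, variance = {anti.var():.4f}")
```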
18

Ittiwattana, Waraporn. "A Method for Simulation Optimization with Applications in Robust Process Design and Locating Supply Chain Operations." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1030366020.

19

Yevseyeva, Olga. "Modelagem computacional de tomografia com feixe de prótons." Universidade do Estado do Rio de Janeiro, 2009. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=956.

Abstract:
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro
In the present work, preliminary research via computer simulations was carried out in order to elaborate a prior program for the first experimental pCT setup in Brazil. Proton therapy is a highly precise form of cancer treatment. Treatment planning is nowadays performed based on X-ray Computed Tomography (CT) data; alternatively, the same procedure could be performed using proton Computed Tomography (pCT). Some important questions, such as a scale effect and the so-called Calibration Curve (a source of primary data for pCT treatment planning), were studied in this work. The passage of 19.68 MeV, 23 MeV, 25 MeV, 49.10 MeV and 230 MeV protons through varied absorbers (water, aluminum, polyethylene, gold) was simulated with the popular Monte Carlo packages SRIM and GEANT4. The simulation results were compared with a theoretical prediction based on an approximate solution of the Boltzmann transport equation and with simulation results from another popular Monte Carlo code, MCNPX. A comparative analysis of the simulation results against experimental data published in the scientific literature for thick absorbers, within the energy range used in pCT measurements, was made. It was noted that, although all codes show similar results, some nonsystematic displacements can be observed. Some important observations about the codes' precision were made, and the need for systematic measurements of proton stopping power in thick absorbers was stated.
20

MARCHANT FUENTES, Carolina Ivonne. "Essays on multivariate generalized Birnbaum-Saunders methods." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18647.

Abstract:
CAPES; BOLSA DO CHILE.
In the last decades, univariate Birnbaum-Saunders models have received considerable attention in the literature. These models have been widely studied and applied to fatigue, but they have also been applied to other areas of knowledge, in which it is often necessary to model several variables simultaneously. If these variables are correlated, individual analyses for each variable can lead to erroneous results. Multivariate regression models are a useful tool of multivariate analysis that takes the correlation between variables into account. In addition, diagnostic analysis is an important aspect to be considered in statistical modeling. Furthermore, multivariate quality control charts are powerful and simple visual tools for determining whether a multivariate process is in control or out of control; a multivariate control chart shows how several variables jointly affect a process. First, we propose, derive and characterize multivariate generalized logarithmic Birnbaum-Saunders distributions, and we propose new multivariate generalized Birnbaum-Saunders regression models. We use the method of maximum likelihood estimation to estimate their parameters through the expectation-maximization algorithm, carry out a simulation study based on the Monte Carlo method to evaluate the performance of the corresponding estimators, and validate the proposed models with a regression analysis of real-world multivariate fatigue data. Second, we conduct a diagnostic analysis for multivariate generalized Birnbaum-Saunders regression models. We consider the Mahalanobis distance as a global influence measure to detect multivariate outliers and use it to evaluate the adequacy of the distributional assumption; moreover, we consider the local influence method and study how a perturbation may impact the estimation of model parameters. We implement the obtained results in the R software and illustrate them with real-world multivariate biomaterials data. Third and finally, we develop a robust methodology based on multivariate quality control charts for generalized Birnbaum-Saunders distributions with the Hotelling statistic, using the parametric bootstrap method to obtain the distribution of this statistic. A Monte Carlo simulation study evaluates the proposed methodology, reporting its performance in providing early alerts of out-of-control conditions. An illustration with real-world air quality data from Santiago, Chile, shows that the proposed methodology can be useful for alerting on episodes of extreme air pollution.
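
A schematic version of the chart construction in the third part of this abstract, with a multivariate normal model standing in for the generalized Birnbaum-Saunders distributions the thesis actually uses: the upper control limit of the Hotelling T² statistic is taken as a percentile of its parametric-bootstrap distribution. All numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
p, n, n_boot = 3, 30, 4000            # variables, subgroup size, bootstrap reps

# Phase-I "in control" data (stand-in distribution: multivariate normal).
mu0 = np.zeros(p)
cov0 = 0.5 * np.eye(p) + 0.5          # exchangeable placeholder covariance
phase1 = rng.multivariate_normal(mu0, cov0, size=200)
m, S = phase1.mean(axis=0), np.cov(phase1.T)

def t2(sample):
    """Hotelling T^2 of a subgroup mean against the phase-I estimates."""
    d = sample.mean(axis=0) - m
    return n * d @ np.linalg.solve(S, d)

# Parametric bootstrap: simulate subgroups from the fitted model and take
# the 99th percentile of T^2 as the upper control limit (UCL).
boot = [t2(rng.multivariate_normal(m, S, size=n)) for _ in range(n_boot)]
ucl = np.percentile(boot, 99)

shifted = rng.multivariate_normal(m + 0.8, S, size=n)   # shifted process
verdict = "out of control" if t2(shifted) > ucl else "in control"
print(f"UCL = {ucl:.2f}, new subgroup T2 = {t2(shifted):.2f} -> {verdict}")
```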
21

Hidalgo, Francisco Luiz Campos. "Quantificação da incerteza do problema de flexão estocástica de uma viga de Euler-Bernoulli, apoiada em fundação de Pasternak, utilizando o método estocástico de Galerkin e o método dos elementos finitos estocásticos." Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/1069.

Abstract:
This study presents a methodology, based on the Galerkin method, to quantify the uncertainty in the stochastic bending problem of an Euler-Bernoulli beam resting on a Pasternak foundation. The uncertainty in the stiffness coefficients of the beam and foundation is represented by parametrized stochastic processes. The probability limitation on the random parameters and the choice of an appropriate approximate solution space, necessary for the subsequent demonstration of uniqueness and existence of solutions of the problem, are considered by means of theoretical hypotheses. The finite-dimensional space of approximate solutions is built by a tensor product between spaces (deterministic and random), yielding a dense subspace of the theoretical solution space. The Wiener-Askey scheme of generalized polynomial chaos is used to represent the stochastic process of the beam deflection. The stochastic finite element method is presented and employed in the numerical solution of selected examples. The results, in terms of statistical moments, are compared to results obtained through Monte Carlo simulations.
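
A drastically reduced scalar analogue of the approach described above, under invented parameters: a random stiffness k = exp(μ + σξ) with ξ ~ N(0, 1), a response u = F/k expanded in probabilists' Hermite polynomials (the Wiener-Askey family matching Gaussian inputs) by quadrature projection, and the resulting moments checked against Monte Carlo. The thesis applies this machinery to the beam boundary-value problem; none of that PDE structure is reproduced here.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

F, mu, sig = 1.0, 0.0, 0.3        # load and lognormal stiffness parameters
P = 6                             # truncation order of the chaos expansion

# Gauss-Hermite rule for E[g(xi)] with xi ~ N(0,1) (probabilists' weight).
nodes, weights = hermegauss(30)
weights = weights / np.sqrt(2.0 * np.pi)

u = F / np.exp(mu + sig * nodes)  # response u(xi) = F / k(xi) at the nodes

# Spectral projection: u_k = E[u He_k] / E[He_k^2], with E[He_k^2] = k!.
coeffs = np.empty(P + 1)
for k in range(P + 1):
    basis = np.zeros(P + 1)
    basis[k] = 1.0
    coeffs[k] = np.sum(weights * u * hermeval(nodes, basis)) / math.factorial(k)

mean_pc = coeffs[0]
var_pc = sum(coeffs[k] ** 2 * math.factorial(k) for k in range(1, P + 1))

rng = np.random.default_rng(7)
u_mc = F / np.exp(mu + sig * rng.standard_normal(1_000_000))
print(f"chaos: mean = {mean_pc:.5f}, std = {math.sqrt(var_pc):.5f}")
print(f"MC:    mean = {u_mc.mean():.5f}, std = {u_mc.std():.5f}")
```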
22

Santos, Marcelo Borges dos. "Estimativas dos momentos estatísticos para o problema de flexão estocástica de viga em uma fundação Pasternak." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1301.

Abstract:
This work proposes the resolution of the stochastic bending problem of an Euler-Bernoulli beam on a Pasternak-type foundation through a computational method based on Monte Carlo simulation. Uncertainty is present in the elastic coefficients of the beam and foundation. First, the mathematical formulation of the problem is established, derived from a physical model of the beam displacement that takes into account the influence of the foundation on the response; this requires a review of the most common foundation models, namely the Winkler-type model and the Pasternak model. In sequence, we study the existence and uniqueness of solutions of the variational problem. To obtain the solution of the problem, a mathematical foundation is laid for the following topics: representation of uncertainty, the Galerkin method, the Neumann series, and finally lower and upper bounds. Finally, the performance of the lower and upper bounds relative to direct Monte Carlo simulation was evaluated through various cases in which the uncertainty lies in the different coefficients composing the bending equation in variational form. The method proved to be efficient, both in terms of convergence of the response and of computational cost.
23

Russew, Thomas. "Etude et simulation d'un détecteur pour l'expérience GRAAL à l'ESRF : application à la photoproduction d'étrangeté." Université Joseph Fourier (Grenoble), 1995. http://www.theses.fr/1995GRE10182.

Abstract:
Thanks to its high-energy photon beam, the GRAAL experiment will be able to measure, for the first time, single- and double-polarization observables tied to the beam polarization for strangeness photoproduction reactions. After a presentation of the phenomenological models describing the mechanism of such reactions and a definition of the polarization observables, this thesis presents in detail the gamma-ray beam produced by Compton backscattering and the particle detector of the GRAAL experiment. Monte Carlo simulation is used to study the detailed response of the detector to photonuclear events. Procedures for track reconstruction, background reduction and final analysis were developed, tested and optimized. Of two procedures that suppress almost all of the background, the one guaranteeing the best efficiency was chosen. The precise reproduction of the values of one single-polarization observable and two double-polarization observables, for a data set corresponding to 420,000 γ(p, Λ)K reactions, proves that such quantities can be measured accurately by the GRAAL experiment.
24

Madeira, Marcelo Gomes. "Comparação de técnicas de análise de risco aplicadas ao desenvolvimento de campos de petróleo." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263732.

Abstract:
Advisors: Denis Jose Schiozer, Eliana L. Ligero
Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica, Instituto de Geociências
Petroleum-field decision-making processes are associated with high risks, due to geological, economic and technological uncertainties, and with high investments, mainly in the appraisal and development phases, where it is necessary to model the recovery process with higher precision, increasing the computational time. One way to speed up the process is to simplify it; some simplifications are discussed in this work: the technique used to quantify risk (Monte Carlo and derivative tree), reduction of the number of attributes, simplification of the treatment of attributes, and simplification of the reservoir modeling process. Special emphasis is given to (1) a comparison between the Monte Carlo and derivative-tree techniques and (2) the development of fast models through experimental design and the response surface method. Some works have been presented about these techniques, but normally they show applications, and no comparison among alternatives is presented. The objective of this work is to compare these techniques taking into account the reliability and precision of the results and the speedup of the process. The techniques are applied to an offshore field, and the results show that it is possible to significantly reduce the number of flow simulations while maintaining the precision of the results. It is also shown that some simplifications can yield different results, affecting the decision process.
Master's degree in Petroleum Sciences and Engineering (concentration: Reservoirs and Management)
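
A toy contrast of the two risk-quantification techniques compared in this work: a derivative tree that enumerates three discrete levels per uncertain attribute versus Monte Carlo sampling from continuous distributions, both applied to an invented NPV-like response standing in for a reservoir flow simulation. Every number is a placeholder.

```python
import numpy as np
from itertools import product

def npv(porosity, oil_price):
    """Invented stand-in for a reservoir-simulation NPV response."""
    return 1000.0 * porosity * oil_price - 150.0

# Derivative tree: each attribute discretized into 3 (value, probability) levels.
levels = {
    "porosity": [(0.15, 0.3), (0.20, 0.4), (0.25, 0.3)],
    "oil_price": [(40.0, 0.25), (60.0, 0.5), (80.0, 0.25)],
}
tree_ev = sum(p1 * p2 * npv(v1, v2)
              for (v1, p1), (v2, p2) in product(*levels.values()))
n_nodes = np.prod([len(v) for v in levels.values()])

# Monte Carlo: sample the same attributes from continuous distributions.
rng = np.random.default_rng(8)
por = rng.triangular(0.15, 0.20, 0.25, size=100_000)
price = rng.triangular(40.0, 60.0, 80.0, size=100_000)
mc = npv(por, price)

print(f"tree: E[NPV] = {tree_ev:.1f} using {n_nodes} simulation runs")
print(f"MC:   E[NPV] = {mc.mean():.1f} +/- {mc.std() / np.sqrt(mc.size):.1f}")
```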
25

LACERDA, FLAVIO W. "Padronização de ⁶⁸Ga em sistema de coincidências 4πβ-γ." Repositório Institucional do IPEN, 2013. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10211.

Master's dissertation (IPEN/D), Instituto de Pesquisas Energéticas e Nucleares, IPEN-CNEN/SP.
26

SILVA, Elson Natanael Moreira. "ESTIMAÇÃO PROBABILÍSTICA DO NÍVEL DE DISTORÇÃO HARMÔNICA TOTAL DE TENSÃO EM REDES DE DISTRIBUIÇÃO SECUNDÁRIAS COM GERAÇÃO DISTRIBUÍDA FOTOVOLTAICA." Universidade Federal do Maranhão, 2017. http://tedebc.ufma.br:8080/jspui/handle/tede/1296.

Abstract:
CNPQ
A power quality problem that always affects consumers on the distribution network is harmonic distortion. Harmonic distortions arise from the presence of so-called harmonic sources, which are nonlinear equipment, i.e., equipment in which the voltage waveform differs from the current waveform. Such equipment injects harmonic currents into the network, generating distortions in the voltage waveform. Nowadays, the amount of such equipment on the electrical network has increased considerably, making systems more vulnerable and prone to quality problems in the supply of electricity to consumers. In addition, it is important to note that, in the current scenario, the generation of electricity from renewable sources connected to the secondary distribution network is increasing rapidly, mainly due to the shortage and high costs of fossil fuels. In this context, Photovoltaic Distributed Generation (PVDG), which uses the sun as the primary source for electric energy generation, is the main renewable generation technology installed in distribution networks. However, PVDG is a potential source of harmonics, because its interface with the AC network is a DC/AC inverter, a highly nonlinear piece of equipment. Thus, power quality problems associated with harmonic distortion in distribution networks tend to increase and to become very frequent. One of the main indicators of harmonic distortion is the total harmonic distortion of voltage (THDv), used by distribution utilities to limit the levels of harmonic distortion present in the electrical network. In the literature there are several deterministic techniques to estimate THDv. These techniques have the disadvantage of not considering the uncertainties present in the electric network, such as changes in the network configuration, load variation, and intermittency of the power injected by renewable distributed generation. Therefore, in order to provide a more accurate assessment of harmonic distortions, this dissertation has as its main objective to develop a probabilistic methodology to estimate the level of THDv in secondary distribution networks, considering the uncertainties present in the network and in the PVDG connected along it. The proposed methodology is based on the combination of the following techniques: three-phase harmonic power flow in phase coordinates via the admittance summation method, the point estimate method, and the Gram-Charlier series expansion. The methodology was validated using Monte Carlo simulation and tested on the European secondary distribution test network with 906 nodes at 416 V. Results were obtained for two case studies: without PVDG and with PVDG connected. For both, the following statistics of the nodal THDv were estimated: mean value, standard deviation and the 95th percentile. The results showed that the probabilistic estimation of THDv is more complete, since it shows the variation of THDv due to the uncertainties associated with the harmonic sources and the electric network. They also show that connecting PVDG significantly affects the THDv levels of the electric network.
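
A minimal illustration of the point estimate method named above, in Rosenblueth's two-point form for independent inputs, checked against Monte Carlo on an invented nonlinear function standing in for the harmonic power flow. With n inputs it needs 2^n model evaluations, which is the economy such methods exploit relative to thousands of Monte Carlo power-flow runs.

```python
import numpy as np
from itertools import product

def model(x):
    """Invented nonlinear stand-in for a harmonic power-flow response."""
    return x[0] ** 2 + 3.0 * x[0] * x[1] + np.sin(x[1])

mu = np.array([1.0, 0.5])   # input means (placeholders)
sd = np.array([0.1, 0.2])   # input standard deviations (placeholders)

# Rosenblueth's two-point estimate: evaluate at every mu +/- sd corner,
# each corner weighted 1 / 2^n for symmetric, independent inputs.
vals = np.array([model(mu + np.array(s) * sd)
                 for s in product((-1.0, 1.0), repeat=len(mu))])
m1, m2 = vals.mean(), (vals ** 2).mean()
print(f"PEM (4 runs): mean = {m1:.4f}, std = {np.sqrt(m2 - m1**2):.4f}")

rng = np.random.default_rng(9)
xs = mu + rng.standard_normal((1_000_000, 2)) * sd
ys = model(xs.T)            # vectorized evaluation over all samples
print(f"Monte Carlo:  mean = {ys.mean():.4f}, std = {ys.std():.4f}")
```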
27

Yagui, Akemi. "Avaliação da interação de feixes monoenergéticos e polienergéticos por meio de simulações em GEANT4 em fantomas diversos." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2716.

Abstract:
Proton therapy is present in 16 countries and by 2015 has treated more than 130,000 Patients. However, in Brazil this therapy is not yet present for several reasons, Being the main the high cost. Before performing treatments, it is necessary to do some tests to verify the energy delivery of the proton bundles. As the Microdosimetry are very expensive, the main alternative is to carry out simulations in Programs such as GEANT4 and SRIM. GEANT4 is a program that Allows you to simulate complex geometries, while SRIM performs more complex simulations. Simple and both work with the Monte Carlo method. On this work were used these twain tools to perform a proton beam simulation in phantom with three different compositions (water, bones and water, brain and bones). To perform the energy delivery analysis of the proton beams along these phantoms, has become necessary create a program denominated “Data Processing Program Proton Therapy Simulated”, which allowed to create matrices, beyond the calculations of the Bragg peaks for interaction evaluation. Besides that, it was analyzing the homogeneity of the integration of a proton beam into a detector, in which it was verified that the simulations on GEANT4 are homogeneous, not having a tendency of the beam in locating in a certain region, just as the energies deposited are equal. The value of the depth range of the Bragg peaks were also evaluated in cylindrical phantoms with three different densities: 1,03 g/cm³, 1,53g/cm³ and 2,03 g/cm³, the first being the density provided by GEANT4 for brain tissue. It has been found that the depth range distances of the Bragg peaks are the same at these three different densities.
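Once a depth-dose profile has been scored along the beam axis, locating the Bragg peak reduces to finding the maximum of the binned deposited energy. A minimal sketch, using a synthetic profile in place of actual GEANT4 or SRIM output:

```python
import numpy as np

# Synthetic depth-dose profile standing in for GEANT4/SRIM output:
# a slowly rising entrance plateau followed by a sharp Bragg peak.
depth = np.linspace(0.0, 30.0, 600)                  # depth in the phantom (cm)
dose = 1.0 + 0.05 * depth                            # entrance plateau
dose += 8.0 * np.exp(-((depth - 25.0) ** 2) / 0.5)   # Bragg peak near 25 cm

peak_index = np.argmax(dose)
print(f"Bragg peak depth: {depth[peak_index]:.2f} cm")
print(f"Peak-to-plateau ratio: {dose[peak_index] / dose[0]:.1f}")
```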
APA, Harvard, Vancouver, ISO, and other styles
28

Maucourt, Jérôme. "Étude par simulations numériques des transitions de phase dans les modèles de spins XY désordonnés." Université Joseph Fourier (Grenoble ; 1971-2015), 1997. http://www.theses.fr/1997GRE10216.

Full text
Abstract:
The subject of this thesis is the study of phase transitions in disordered XY spin models. The numerical tools used are the Monte Carlo method and the domain-wall renormalization-group method. The results were obtained from simulations based on original codes developed for parallel machines. The first type of model studied is a bond-disordered model: the bimodal model, in which quenched random-sign interactions compete between nearest neighbours. A Monte Carlo study followed by a finite-size scaling analysis allowed us to determine the phase diagram of the model in two dimensions together with its full set of critical exponents. In the strong-disorder region, the spin-glass and chiral-glass transitions, which occur at zero temperature, are decoupled. For the three-dimensional model at maximum disorder, using a very efficient ground-state search algorithm, we obtained domain-wall defect-energy results showing that the chiral-glass transition occurs at finite temperature and that the spin-glass transition probably occurs at very low temperature. Our results show that the lower critical dimension of the model is close to 3. The second type of model studied introduces disorder through quenched random phase shifts between the spins. Through extensive use of the universality hypothesis, we confirmed a recent theory that excludes the existence of a reentrant phase in the two-dimensional case. Thus, the Kosterlitz-Thouless phase is stable down to zero temperature below the critical disorder. In three dimensions, a domain-wall renormalization-group study showed that the spin-glass transition occurs at finite temperature.
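For orientation, a bare-bones Metropolis sweep for the two-dimensional bimodal (±J) XY model described above might look as follows. Spins are angles θ, bonds carry quenched random signs, and the energy is E = -Σ J_ij cos(θ_i - θ_j). This is an illustrative sketch, not the parallel production code developed for the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
L, T = 16, 0.5                                  # lattice size, temperature
theta = rng.uniform(0, 2 * np.pi, (L, L))       # XY spins (angles)
Jx = rng.choice([-1.0, 1.0], (L, L))            # quenched +/-J bonds, x-direction
Jy = rng.choice([-1.0, 1.0], (L, L))            # quenched +/-J bonds, y-direction

def local_energy(th, i, j, angle):
    """Energy of site (i, j) if its spin has the given angle; periodic boundaries."""
    e = -Jx[i, j] * np.cos(angle - th[i, (j + 1) % L])
    e -= Jx[i, (j - 1) % L] * np.cos(angle - th[i, (j - 1) % L])
    e -= Jy[i, j] * np.cos(angle - th[(i + 1) % L, j])
    e -= Jy[(i - 1) % L, j] * np.cos(angle - th[(i - 1) % L, j])
    return e

def metropolis_sweep(th):
    for i in range(L):
        for j in range(L):
            new = (th[i, j] + rng.uniform(-0.5, 0.5)) % (2 * np.pi)
            dE = local_energy(th, i, j, new) - local_energy(th, i, j, th[i, j])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                th[i, j] = new  # accept the move

for sweep in range(100):
    metropolis_sweep(theta)
```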
APA, Harvard, Vancouver, ISO, and other styles
29

Cao, Liang. "Numerical analysis and multi-precision computational methods applied to the extant problems of Asian option pricing and simulating stable distributions and unit root densities." Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/6539.

Full text
Abstract:
This thesis considers new methods that exploit recent developments in computer technology to address three extant problems in the area of Finance and Econometrics. The problem of Asian option pricing has endured for the last two decades in spite of many attempts to find a robust solution across all parameter values. All recently proposed methods are shown to fail when computations are conducted using standard machine precision because as more and more accuracy is forced upon the problem, round-off error begins to propagate. Using recent methods from numerical analysis based on multi-precision arithmetic, we show using the Mathematica platform that all extant methods have efficacy when computations use sufficient arithmetic precision. This creates the proper framework to compare and contrast the methods based on criteria such as computational speed for a given accuracy. Numerical methods based on a deformation of the Bromwich contour in the Geman-Yor Laplace transform are found to perform best provided the normalized strike price is above a given threshold; otherwise methods based on Euler approximation are preferred. The same methods are applied in two other contexts: the simulation of stable distributions and the computation of unit root densities in Econometrics. The stable densities are all nested in a general function called a Fox H function. The same computational difficulties as above apply when using only double-precision arithmetic but are again solved using higher arithmetic precision. We also consider simulating the densities of infinitely divisible distributions associated with hyperbolic functions. Finally, our methods are applied to unit root densities. Focusing on the two fundamental densities, we show our methods perform favorably against the extant methods of Monte Carlo simulation, the Imhof algorithm and some analytical expressions derived principally by Abadir. Using Mathematica, the main two-dimensional Laplace transform in this context is reduced to a one-dimensional problem.
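The role of arithmetic precision in numerical Laplace inversion is easy to demonstrate with the Python library mpmath (the thesis itself works in Mathematica). The sketch below inverts a transform with a known closed-form inverse at 50 significant digits via a Talbot-type deformation of the Bromwich contour; the transform is a simple stand-in, not the Geman-Yor transform:

```python
from mpmath import mp, invertlaplace, sqrt, pi

mp.dps = 50  # 50 significant digits; raise this if round-off error propagates

# Transform with known inverse: F(s) = 1/sqrt(s) has f(t) = 1/sqrt(pi*t).
F = lambda s: 1 / sqrt(s)

t = mp.mpf(2)
numeric = invertlaplace(F, t, method='talbot')  # deformed Bromwich contour
exact = 1 / sqrt(pi * t)
print("numeric:", numeric)
print("exact:  ", exact)
print("abs err:", abs(numeric - exact))
```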
APA, Harvard, Vancouver, ISO, and other styles
30

"Monte Carlo simulation in risk estimation." 2013. http://library.cuhk.edu.hk/record=b5549771.

Full text
Abstract:
This dissertation consists of two parts: a generalized infinitesimal perturbation analysis (IPA) approach for estimating American option sensitivities, and a multilevel Monte Carlo simulation approach for portfolio risk estimation.
In the first part, we develop efficient Monte Carlo methods for estimating American option sensitivities. The problem can be reformulated as performing sensitivity analysis for a stochastic optimization problem with model uncertainty. We introduce a generalized IPA approach to resolve the difficulty caused by the discontinuity of the optimal decision with respect to the underlying parameter. The unbiased price-sensitivity estimators yielded by this approach demonstrate significant numerical advantages in both high-dimensional environments and various process settings. They can be embedded into many of the most popular pricing algorithms without extra simulation effort, so that sensitivities are obtained as a by-product of the option price. The generalized approach also casts new light on how to perform sensitivity analysis using IPA: pathwise differentiability is not needed to apply it. Another contribution of this part is to investigate how the estimation quality of the sensitivities is affected by the quality of the approximated exercise times.
In the second part, we propose a multilevel nested simulation approach to estimate the expectation of a nonlinear function of a conditional expectation, which has a direct application to portfolio risk estimation under various risk measures. Our estimator is a linear combination of several standard nested estimators. It is very simple to implement and universally applicable across various problem settings. Theoretical analysis shows that the algorithmic complexity of our estimators is independent of the problem dimensionality and better than other alternatives in the literature. Numerical experiments, in both low- and high-dimensional settings, verify the theoretical analysis.
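The flavour of IPA can be conveyed by the classical pathwise delta estimator for a European call under geometric Brownian motion, a setting where pathwise differentiability holds; the thesis's contribution is precisely to generalize beyond such settings to American options, where the exercise decision is discontinuous in the parameter. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 1_000_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Pathwise (IPA) delta: dS_T/dS_0 = S_T/S_0, and the payoff derivative is
# the indicator {S_T > K}, giving an unbiased estimator of the option delta.
ipa_delta = np.exp(-r * T) * (ST > K) * ST / S0
print("IPA delta estimate:", ipa_delta.mean(), "+/-", ipa_delta.std() / np.sqrt(n))
```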
Liu, Yanchu.
"December 2012."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2013.
Includes bibliographical references (leaves 89-96).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Abstract --- p.i
Abstract in Chinese --- p.iii
Acknowledgements --- p.v
Contents --- p.vii
List of Tables --- p.ix
List of Figures --- p.xii
Chapter 1. --- Overview --- p.1
Chapter 2. --- American Option Sensitivities Estimation via a Generalized IPA Approach --- p.4
Chapter 2.1. --- Introduction --- p.4
Chapter 2.2. --- Formulation of the American Option Pricing Problem --- p.10
Chapter 2.3. --- Main Results --- p.14
Chapter 2.3.1. --- A Generalized IPA Approach in the Presence of a Decision Variable --- p.16
Chapter 2.3.2. --- Unbiased First-Order Sensitivity Estimators --- p.21
Chapter 2.4. --- Implementation Issues and Error Analysis --- p.23
Chapter 2.5. --- Numerical Results --- p.26
Chapter 2.5.1. --- Effects of Dimensionality --- p.27
Chapter 2.5.2. --- Performance under Various Underlying Processes --- p.29
Chapter 2.5.3. --- Effects of Exercising Policies --- p.31
Chapter 2.6. --- Conclusion Remarks and Future Work --- p.33
Chapter 2.7. --- Appendix --- p.35
Chapter 2.7.1. --- Proofs of the Main Results --- p.35
Chapter 2.7.2. --- Likelihood Ratio Estimators --- p.43
Chapter 2.7.3. --- Derivation of Example 2.3 --- p.49
Chapter 3. --- Multilevel Monte Carlo Nested Simulation for Risk Estimation --- p.52
Chapter 3.1. --- Introduction --- p.52
Chapter 3.1.1. --- Examples --- p.53
Risk Measurement of Financial Portfolios --- p.53
Derivatives Pricing --- p.55
Partial Expected Value of Perfect Information --- p.56
Chapter 3.1.2. --- A Standard Nested Estimator --- p.57
Chapter 3.1.3. --- Literature Review --- p.59
Chapter 3.1.4. --- Summary of Our Contributions --- p.61
Chapter 3.2. --- The Multilevel Approach --- p.63
Chapter 3.2.1. --- Motivation --- p.63
Chapter 3.2.2. --- Multilevel Construction --- p.65
Chapter 3.2.3. --- Theoretical Analysis --- p.67
Chapter 3.2.4. --- Further Improvement by Extrapolation --- p.69
Chapter 3.3. --- Numerical Experiments --- p.72
Chapter 3.3.1. --- Single Asset Setting --- p.73
Chapter 3.3.2. --- Multiple Asset Setting --- p.74
Chapter 3.4. --- Concluding Remarks --- p.77
Chapter 3.5. --- Appendix: Technical Assumptions and Proofs of the Main Results --- p.79
Bibliography --- p.89
APA, Harvard, Vancouver, ISO, and other styles
31

Bidgood, Peter Mark. "Internal balance calibration and uncertainty estimation using Monte Carlo simulation." Thesis, 2014. http://hdl.handle.net/10210/9728.

Full text
Abstract:
D.Ing. (Mechanical Engineering)
The most common data sought during a wind tunnel test program are the forces and moments acting on an airframe (or any other test article). The most common source of this data is the internal strain gauge balance. Balances are six-degree-of-freedom force transducers that are required to be of small size and of high strength and stiffness. They are required to deliver the highest possible levels of accuracy and reliability. There is a focus in both the USA and Europe on improving the performance of balances through collaborative research. This effort is aimed at materials, design, sensors, electronics, calibration systems and calibration analysis methods. Recent developments in the use of statistical methods, including modern design of experiments, have resulted in improved balance calibration models. Research focus on the calibration of six-component balances has moved to the determination of the uncertainty of measurements obtained in the wind tunnel. The application of conventional statistically-based approaches to the determination of the uncertainty of a balance measurement is proving problematical, and to some extent an impasse has been reached. The impasse is caused by the rapid expansion of the problem size when standard uncertainty determination approaches are used in a six-degree-of-freedom system that includes multiple least squares regression and iterative matrix solutions. This thesis describes how the uncertainty of loads reported by a six-component balance can be obtained by applying a direct simulation of the end-to-end data flow of a balance, from calibration through to installation, using a Monte Carlo simulation. It is postulated that knowledge of the error propagated into the test environment through the balance will influence the choice of calibration model, and that an improved model, compared to that determined by statistical methods without this knowledge, will be obtained. Statistical approaches to the determination of a balance calibration model are driven by obtaining the best curve-fit statistics possible. This is done by adding as many coefficients to the modelling polynomial as can be statistically defended. This thesis shows that the propagated error will significantly influence the choice of polynomial coefficients. To do this, a Performance Weighted Efficiency (PWE) parameter is defined. The PWE is a combination of the curve-fit statistic (the back-calculated error for the chosen polynomial), a value representing the overall prediction interval for the model (CI_rand), and a value representing the overall total propagated uncertainty of loads reported by the installed balance...
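The end-to-end simulation idea can be illustrated in miniature with a single-component "balance": calibrate by least squares from noisy readings, invert the fitted curve for a test reading, and repeat the whole chain over many Monte Carlo trials to obtain the distribution of the reported load. The sketch below is a one-dimensional toy with assumed Gaussian sensor noise, not the six-component iterative-matrix procedure of the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
true_coeffs = [0.0, 2.0, 0.01]           # true response: out = 2*F + 0.01*F^2
cal_loads = np.linspace(0.0, 100.0, 21)  # applied calibration loads
noise_sd = 0.05                          # assumed Gaussian sensor noise
applied = 60.0                           # load seen by the installed balance
mid = cal_loads.mean()

reported = []
for _ in range(10_000):  # Monte Carlo over the whole calibrate-then-measure chain
    # 1) Calibration: noisy readings, second-order least-squares fit.
    clean = np.polynomial.polynomial.polyval(cal_loads, true_coeffs)
    outputs = clean + rng.normal(0.0, noise_sd, cal_loads.size)
    fit = np.polynomial.polynomial.polyfit(cal_loads, outputs, deg=2)
    # 2) Installed use: invert the fitted polynomial for one noisy test reading.
    test_out = np.polynomial.polynomial.polyval(applied, true_coeffs) + rng.normal(0.0, noise_sd)
    roots = np.roots([fit[2], fit[1], fit[0] - test_out])
    real_roots = roots[np.isreal(roots)].real
    # Keep the root lying in the calibrated range (closest to its midpoint).
    reported.append(real_roots[np.argmin(np.abs(real_roots - mid))])

reported = np.asarray(reported)
print(f"reported load for {applied} applied: {reported.mean():.3f} +/- {reported.std():.3f}")
```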
APA, Harvard, Vancouver, ISO, and other styles
32

"A simulation approach to evaluate combining forecasts methods." Chinese University of Hong Kong, 1994. http://library.cuhk.edu.hk/record=b5888016.

Full text
Abstract:
by Ho Kwong-shing Lawrence.
Thesis (M.B.A.)--Chinese University of Hong Kong, 1994.
Includes bibliographical references (leaves 43-44).
ABSTRACT --- p.ii
TABLE OF CONTENTS --- p.iii
ACKNOWLEDGEMENT --- p.iv
CHAPTER
Chapter I. --- INTRODUCTION AND LITERATURE REVIEW --- p.1
Chapter II. --- COMBINING SALES FORECASTS --- p.7
Chapter III. --- EXPERIMENTAL DESIGN --- p.14
Chapter IV. --- SIMULATION RESULTS --- p.19
Chapter V. --- SUMMARY AND CONCLUSION --- p.27
APPENDIX --- p.31
BIBLIOGRAPHY --- p.43
APA, Harvard, Vancouver, ISO, and other styles
33

James, Steven Doron. "The effect of simulation bias on action selection in Monte Carlo Tree Search." Thesis, 2016. http://hdl.handle.net/10539/21673.

Full text
Abstract:
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science. August 2016.
Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. It combines a traditional tree-search approach with Monte Carlo simulations, using the outcome of these simulations (also known as playouts or rollouts) to evaluate states in a look-ahead tree. That MCTS does not require an evaluation function makes it particularly well-suited to the game of Go — seen by many as chess's successor as a grand challenge of artificial intelligence — with MCTS-based agents recently able to achieve expert-level play on 19×19 boards. Furthermore, its domain-independent nature also makes it a focus in a variety of other fields, such as Bayesian reinforcement learning and general game-playing. Despite the vast amount of research into MCTS, the dynamics of the algorithm are still not fully understood. In particular, the effect of using knowledge-heavy or biased simulations in MCTS remains unknown, with interesting results indicating that better-informed rollouts do not necessarily result in stronger agents. This research provides support for the notion that MCTS is well-suited to a class of domains possessing a smoothness property. In these domains, biased rollouts are more likely to produce strong agents. Conversely, any error due to incorrect bias is compounded in non-smooth domains, in particular for low-variance simulations. This is demonstrated empirically in a number of single-agent domains.
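A minimal single-agent UCT implementation on a random binary tree illustrates the mechanism under study: the same search run with uniform rollouts and with rollouts biased toward a (possibly wrong) heuristic. The "prefer branch 1" heuristic below is an arbitrary stand-in, and the sketch is not the thesis's experimental code:

```python
import math
import random

random.seed(3)
DEPTH = 8
# Random single-agent game: each leaf of a binary tree holds a reward in [0, 1].
leaf_reward = {i: random.random() for i in range(2 ** DEPTH)}

def rollout(path, bias):
    """Finish the game from a partial path. With probability `bias` follow the
    heuristic 'prefer branch 1' rule, otherwise move uniformly at random."""
    while len(path) < DEPTH:
        move = 1 if random.random() < bias else random.randint(0, 1)
        path = path + (move,)
    return leaf_reward[int("".join(map(str, path)), 2)]

def uct_search(n_iters, bias, c=1.4):
    visits, totals = {(): 0}, {(): 0.0}
    for _ in range(n_iters):
        node = ()
        # Selection: descend via UCB1 while both children are in the tree.
        while len(node) < DEPTH and all((node + (m,)) in visits for m in (0, 1)):
            node = max((node + (m,) for m in (0, 1)),
                       key=lambda ch: totals[ch] / visits[ch]
                       + c * math.sqrt(math.log(visits[node]) / visits[ch]))
        # Expansion: add one unexplored child, then evaluate it by rollout.
        if len(node) < DEPTH:
            unexplored = [node + (m,) for m in (0, 1) if (node + (m,)) not in visits]
            node = random.choice(unexplored)
            visits[node], totals[node] = 0, 0.0
        reward = rollout(node, bias)
        # Backpropagation along the path to the root.
        for d in range(len(node) + 1):
            visits[node[:d]] += 1
            totals[node[:d]] += reward
    return max((0, 1), key=lambda m: visits[(m,)])  # most-visited root move

print("uniform rollouts choose:", uct_search(2000, bias=0.0))
print("biased  rollouts choose:", uct_search(2000, bias=0.8))
```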
LG2017
APA, Harvard, Vancouver, ISO, and other styles
34

Du, Yiping. "Efficient Simulation Methods for Estimating Risk Measures." Thesis, 2011. https://doi.org/10.7916/D8J10FQ4.

Full text
Abstract:
In this thesis, we analyze the computational problem of estimating financial risk in nested Monte Carlo simulation. An outer simulation is used to generate financial scenarios, and an inner simulation is used to estimate future portfolio values in each scenario. The mean squared error (MSE) of standard nested simulation converges at the rate $k^{-2/3}$, where $k$ is the computational budget. In the first part of this thesis, we focus on one risk measure, the probability of a large loss, and we propose a new algorithm to estimate this risk. Our algorithm sequentially allocates computational effort in the inner simulation based on marginal changes in the risk estimator in each scenario. Theoretical results show that the risk estimator has an asymptotic MSE of order $k^{-4/5+\epsilon}$, for all positive $\epsilon$, which is faster than the conventional uniform inner sampling approach. Numerical results consistent with the theory are presented. In the second part of this thesis, we introduce a regression-based nested Monte Carlo simulation method for risk estimation. The proposed regression method combines information from different risk factor realizations to provide a better estimate of the portfolio loss function. The MSE of the regression method converges at the rate $k^{-1}$ until reaching an asymptotic bias level that depends on the magnitude of the regression error. Numerical results consistent with our theoretical analysis are provided, and numerical comparisons with other methods are also given. In the third part of this thesis, we propose a method based on weighted regression. As with the unweighted regression method, the MSE of the weighted regression method converges at the rate $k^{-1}$ until reaching an asymptotic bias level that depends on the size of the regression error. However, the weighted approach further reduces the MSE by emphasizing scenarios that are more important to the calculation of the risk measure. We find a globally optimal weighting strategy for general risk measures in an idealized setting. For applications, we propose and test a practically implementable two-pass method, where the first pass uses an unweighted regression and the second pass uses weights based on the first pass.
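The regression idea in the second part admits a compact illustration: rather than spending many inner samples per scenario, pool the noisy inner estimates across scenarios by regressing them on the scenario risk factors, then evaluate the risk measure on the fitted loss surface. A stylized one-dimensional sketch, assuming a quadratic true loss and a polynomial regression basis:

```python
import numpy as np

rng = np.random.default_rng(11)
n_outer, n_inner, threshold = 2000, 2, 1.5

# Outer stage: scenario risk factors X; the true portfolio loss is E[Y | X].
X = rng.standard_normal(n_outer)
true_loss = lambda x: x**2  # unknown in practice, used here for reference
# Inner stage: only a few noisy samples of Y per scenario, then averaged.
Y_bar = true_loss(X) + rng.standard_normal((n_inner, n_outer)).mean(axis=0)

# Regression: fit the loss surface with a polynomial basis in X.
coeffs = np.polynomial.polynomial.polyfit(X, Y_bar, deg=4)
fitted = np.polynomial.polynomial.polyval(X, coeffs)

# Risk measure: probability of a large loss.
print("P(loss > c), regression:", (fitted > threshold).mean())
print("P(loss > c), standard  :", (Y_bar > threshold).mean())  # biased for small n_inner
print("P(loss > c), exact     :", (true_loss(X) > threshold).mean())
```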
APA, Harvard, Vancouver, ISO, and other styles
35

Han, Baoguang. "Statistical analysis of clinical trial data using Monte Carlo methods." Thesis, 2014. http://hdl.handle.net/1805/4650.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
In medical research, data analysis often requires complex statistical methods where no closed-form solutions are available. Under such circumstances, Monte Carlo (MC) methods have found many applications. In this dissertation, we proposed several novel statistical models where MC methods are utilized. For the first part, we focused on semicompeting risks data in which a non-terminal event was subject to dependent censoring by a terminal event. Based on an illness-death multistate survival model, we proposed flexible random effects models. Further, we extended our model to the setting of joint modeling where both semicompeting risks data and repeated marker data are simultaneously analyzed. Since the proposed methods involve high-dimensional integrations, Bayesian Markov chain Monte Carlo (MCMC) methods were utilized for estimation. The use of Bayesian methods also facilitates the prediction of individual patient outcomes. The proposed methods were demonstrated in both simulation and case studies. For the second part, we focused on the re-randomization test, which is a nonparametric method that makes inferences solely based on the randomization procedure used in clinical trials. With this type of inference, the Monte Carlo method is often used to generate the null distribution of the treatment difference. However, an issue was recently discovered when subjects in a clinical trial were randomized with unbalanced treatment allocation to two treatments according to the minimization algorithm, a randomization procedure frequently used in practice. The null distribution of the re-randomization test statistic was found not to be centered at zero, which compromised the power of the test. In this dissertation, we investigated the properties of the re-randomization test and proposed a weighted re-randomization method to overcome this issue. The proposed method was demonstrated through extensive simulation studies.
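A re-randomization test in its simplest form (complete 1:1 randomization) is easy to sketch: compare the observed treatment difference against the Monte Carlo null distribution obtained by repeatedly re-assigning treatment labels. The issue studied in the dissertation arises under minimization with unbalanced allocation, which this illustrative snippet does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated trial: outcomes with a small true treatment effect, 1:1 assignment.
outcome = rng.normal(0.3, 1.0, 100)
assign = rng.permutation(np.repeat([0, 1], 50))
outcome[assign == 0] -= 0.3  # controls lack the treatment effect

observed = outcome[assign == 1].mean() - outcome[assign == 0].mean()

# Monte Carlo null distribution: re-randomize labels, recompute the statistic.
n_rerand = 10_000
null = np.empty(n_rerand)
for b in range(n_rerand):
    a = rng.permutation(assign)
    null[b] = outcome[a == 1].mean() - outcome[a == 0].mean()

p_value = (np.abs(null) >= abs(observed)).mean()
print(f"observed difference = {observed:.3f}, re-randomization p-value = {p_value:.4f}")
```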
APA, Harvard, Vancouver, ISO, and other styles
36

"Exact simulation of SDE: a closed form approximation approach." 2010. http://library.cuhk.edu.hk/record=b5894499.

Full text
Abstract:
Chan, Tsz Him.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (p. 94-96).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.iii
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Monte Carlo method in Finance --- p.6
Chapter 2.1 --- Principle of MC and pricing theory --- p.6
Chapter 2.2 --- An illustrative example --- p.9
Chapter 3 --- Discretization method --- p.15
Chapter 3.1 --- The Euler scheme and Milstein scheme --- p.16
Chapter 3.2 --- Convergence of Mean Square Error --- p.19
Chapter 4 --- Quasi Monte Carlo method --- p.22
Chapter 4.1 --- Basic idea of QMC --- p.23
Chapter 4.2 --- Application of QMC in Finance --- p.29
Chapter 4.3 --- Another illustrative example --- p.34
Chapter 5 --- Our Methodology --- p.42
Chapter 5.1 --- Measure decomposition --- p.43
Chapter 5.2 --- QMC in SDE simulation --- p.51
Chapter 5.3 --- Towards a workable algorithm --- p.58
Chapter 6 --- Numerical Result --- p.69
Chapter 6.1 --- Case I Generalized Wiener Process --- p.69
Chapter 6.2 --- Case II Geometric Brownian Motion --- p.76
Chapter 6.3 --- Case III Ornstein-Uhlenbeck Process --- p.83
Chapter 7 --- Conclusion --- p.91
Bibliography --- p.96
APA, Harvard, Vancouver, ISO, and other styles
37

Deng, Jian. "Stochastic collocation methods for aeroelastic system with uncertainty." Master's thesis, 2009. http://hdl.handle.net/10048/557.

Full text
Abstract:
Thesis (M. Sc.)--University of Alberta, 2009.
Title from pdf file main screen (viewed on Sept. 3, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science in Applied Mathematics, Department of Mathematical and Statistical Sciences, University of Alberta." Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
38

Saha, Nilanjan. "Methods For Forward And Inverse Problems In Nonlinear And Stochastic Structural Dynamics." Thesis, 2007. http://hdl.handle.net/2005/608.

Full text
Abstract:
A main thrust of this thesis is to develop and explore linearization-based numeric-analytic integration techniques in the context of stochastically driven nonlinear oscillators of relevance in structural dynamics. Unfortunately, unlike the case of deterministic oscillators, available numerical or numeric-analytic integration schemes for stochastically driven oscillators, often modelled through stochastic differential equations (SDEs), have significantly poorer numerical accuracy. These schemes are generally derived through stochastic Taylor expansions, and the limited accuracy results from difficulties in evaluating the multiple stochastic integrals. We propose a few higher-order methods based on the stochastic version of transversal linearization and another method of linearizing the nonlinear drift field based on a Girsanov change of measures. When these schemes are implemented within a Monte Carlo framework for computing the response statistics, one typically needs repeated simulations over a large ensemble. The statistical error due to the finiteness of the ensemble (of size N, say) is of order 1/√N, which implies a rather slow convergence as N→∞. Given the prohibitively large computational cost as N increases, a variance reduction strategy that enables computing accurate response statistics for small N is considered useful. This leads us to propose a weak variance reduction strategy. Finally, we use the explicit derivative-free linearization techniques for state and parameter estimation for structural systems using the extended Kalman filter (EKF). A two-stage version of the EKF (2-EKF) is also proposed so as to account for errors due to linearization and unmodelled dynamics. In Chapter 2, we develop higher-order locally transversal linearization (LTL) techniques for strong and weak solutions of stochastically driven nonlinear oscillators. For developing the higher-order methods, we expand the non-linear drift and multiplicative diffusion fields based on backward Euler and Newmark expansions while simultaneously satisfying the original vector field at the forward time instant where we intend to find the discretized solution. Since the non-linear vector fields are conditioned on the solution we wish to determine, the methods are implicit. We also report explicit versions of such linearization schemes via simple modifications. Local error estimates are provided for weak solutions. Weak linearized solutions enable faster computation vis-à-vis their strong counterparts. In Chapter 3, we propose another weak linearization method for non-linear oscillators under stochastic excitations based on the Girsanov transformation of measures. Here, the non-linear drift vector is appropriately linearized such that the resulting SDE is analytically solvable. In order to account for the error in replacing the non-linear drift terms, the linearized solutions are multiplied by a scalar weighting function. The weighting function is the solution of a scalar SDE (i.e., the Radon-Nikodym derivative). Apart from numerically illustrating the method through applications to non-linear oscillators, we also use the Girsanov transformation of measures to correct the truncation errors in lower-order discretizations.
In order to achieve efficiency in the computation of response statistics via Monte Carlo simulation, we propose in Chapter 4 a weak variance reduction strategy such that the ensemble size is significantly reduced without seriously affecting the accuracy of the predicted expectations of any smooth function of the response vector. The basis of the variance reduction strategy is to appropriately augment the governing system equations and then weakly replace the associated stochastic forcing functions through variance-reduced functions. In the process, the additional computational cost due to system augmentation is generally far outweighed by the advantages of a drastically reduced ensemble size. The variance reduction scheme is illustrated through applications to several non-linear oscillators, including a 3-DOF system. Finally, in Chapter 5, we exploit the explicit forms of the LTL techniques for state and parameter estimation of non-linear oscillators of engineering interest using a novel derivative-free EKF and a 2-EKF. In the derivative-free EKF, we use one-term, Euler and Newmark replacements for linearizations of the non-linear drift terms. In the 2-EKF, we use bias terms to account for errors due to lower-order linearization and unmodelled dynamics in the mathematical model. Numerical studies establish the relative advantages of EKF-DLL as well as 2-EKF over the conventional forms of EKF. The thesis is concluded in Chapter 6 with an overall summary of the contributions made and suggestions for future research.
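For context, the baseline that such linearization schemes improve upon is the Euler-Maruyama discretization of the governing SDE. The sketch below integrates a white-noise-driven Duffing oscillator, ẍ + c ẋ + k x + α x³ = σ Ẇ, over a Monte Carlo ensemble and reports response statistics; it is the standard low-order scheme, not the LTL or Girsanov-corrected methods developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
c, k, alpha, sigma = 0.5, 1.0, 1.0, 0.5  # damping, stiffness, cubic term, noise
dt, n_steps, n_paths = 1e-3, 5_000, 2_000

x = np.zeros(n_paths)  # displacement, all paths at once
v = np.zeros(n_paths)  # velocity

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increments
    # Euler-Maruyama update of the two-dimensional state (x, v).
    x_new = x + v * dt
    v_new = v + (-c * v - k * x - alpha * x**3) * dt + sigma * dW
    x, v = x_new, v_new

print("E[x(T)]   =", x.mean())
print("Var[x(T)] =", x.var())
```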
APA, Harvard, Vancouver, ISO, and other styles
39

Donde, Pratik Prakash. "LES/PDF approach for turbulent reacting flows." 2012. http://hdl.handle.net/2152/19481.

Full text
Abstract:
The probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of turbulent reacting flows. In this approach, the joint-PDF of all reacting scalars is estimated by solving a PDF transport equation, thus providing detailed information about small-scale correlations between these quantities. The objective of this work is to further develop the LES/PDF approach for studying flame stabilization in supersonic combustors, and for soot modeling in turbulent flames. Supersonic combustors are characterized by strong shock-turbulence interactions which preclude the application of conventional Lagrangian stochastic methods for solving the PDF transport equation. A viable alternative is provided by quadrature based methods which are deterministic and Eulerian. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a new approach called semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach. For soot modeling in turbulent flows, an LES/PDF approach is integrated with detailed models for soot formation and growth. The PDF approach directly evolves the joint statistics of the gas-phase scalars and a set of moments of the soot number density function. This LES/PDF approach is then used to simulate a turbulent natural gas flame. A Lagrangian method formulated in cylindrical coordinates solves the high dimensional PDF transport equation and is coupled to an Eulerian LES solver. The LES/PDF simulations show that soot formation is highly intermittent and is always restricted to the fuel-rich region of the flow. The PDF of soot moments has a wide spread leading to a large subfilter variance. Further, the conditional statistics of soot moments conditioned on mixture fraction and reaction progress variable show strong correlation between the gas phase composition and soot moments.
APA, Harvard, Vancouver, ISO, and other styles