Dissertations / Theses on the topic 'Overtime. Monte Carlo method. Simulation methods'
Consult the top 39 dissertations / theses for your research on the topic 'Overtime. Monte Carlo method. Simulation methods.'
Woo, Sungkwon. "Monte Carlo simulation of labor performance during overtime and its impact on project duration." Digital version accessible at: http://wwwlib.umi.com/cr/utexas/main, 1999.
Homem-de-Mello, Tito. "Simulation-based methods for stochastic optimization." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/24846.
Obradovic, Borna Josip. "Multi-dimensional Monte Carlo simulation of ion implantation into complex structures." Digital version accessible at: http://wwwlib.umi.com/cr/utexas/main, 1999.
Mansour, Nabil S. "Inclusion of electron-plasmon interactions in ensemble Monte Carlo simulations of degenerate GaAs." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/13862.
Melson, Joshua Hiatt. "Fatigue Crack Growth Analysis with Finite Element Methods and a Monte Carlo Simulation." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/48432.
Master of Science
Shrestha, Surendra Prakash. "An effective medium approximation and Monte Carlo simulation in subsurface flow modeling." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/38642.
Samant, Asawari. "Multiscale Monte Carlo methods to cope with separation of scales in stochastic simulation of biological networks." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 146 p, 2007. http://proquest.umi.com/pqdweb?did=1407500711&sid=13&Fmt=2&clientId=8331&RQT=309&VName=PQD.
De Ponte, Candice Natasha. "Pricing barrier options with numerical methods / Candice Natasha de Ponte." Thesis, North-West University, 2013. http://hdl.handle.net/10394/8672.
Full textThesis (MSc (Applied Mathematics))--North-West University, Potchefstroom Campus, 2013
Saleh, Ali, and Ahmad Al-Kadri. "Option pricing under Black-Scholes model using stochastic Runge-Kutta method." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-53783.
Full textPIUVEZAM, FILHO HELIO. "Estudo de um sistema de coincidência 4-pi-beta-gama para a medida absoluta de atividade de radionuclídeos empregando cintiladores plásticos." reponame:Repositório Institucional do IPEN, 2007. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11507.
Master's dissertation (Dissertação de Mestrado) — Instituto de Pesquisas Energéticas e Nucleares, IPEN-CNEN/SP.
Gorelov, Vitaly. "Quantum Monte Carlo methods for electronic structure calculations : application to hydrogen at extreme conditions." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASF002.
The hydrogen metallization problem, posed almost 80 years ago, has been called the third open question in physics of the twenty-first century. Indeed, owing to its lightness and reactivity, experimental information on high-pressure hydrogen is limited and extremely difficult to obtain, so the development of accurate methods to guide experiments is essential. In this thesis we focus on studying the electronic structure, including excited-state phenomena, using quantum Monte Carlo (QMC) techniques. In particular, we develop a new method of computing energy gaps accompanied by an accurate treatment of the finite-simulation-cell error, formally relating the finite-size error to the dielectric constant of the material. Before studying hydrogen, the new method is tested on crystalline silicon and carbon diamond, systems for which experimental information on the gap is available. Although the finite-size-corrected gap values for carbon and silicon are larger than the experimental ones, our results demonstrate that the bias due to the finite supercell can be corrected for, so precise values in the thermodynamic limit can be obtained for small supercells without numerical extrapolation. Because hydrogen is a very light element, nuclear quantum effects are important. They can be captured accurately within the Coupled Electron Ion Monte Carlo (CEIMC) method, a QMC-based first-principles simulation method. We use CEIMC results to discuss the thermal renormalization of electronic properties. We introduce a formal way of treating the electronic gap and band structure at finite temperature within the adiabatic approximation and discuss the approximations that must be made. We also propose a novel way of renormalizing the optical properties at low temperature, an improvement on the commonly used semiclassical approximation.
Finally, we apply the methodological developments of this thesis to the metallization of solid and liquid hydrogen. We find that for ideal crystalline molecular hydrogen the QMC gap agrees with previous GW calculations, while treating nuclear zero-point effects causes a large reduction of the gap (2 eV). Determining the crystalline structure of solid hydrogen is still an open problem. Depending on the structure, the fundamental indirect gap closes between 380 and 530 GPa for ideal crystals and between 330 and 380 GPa for quantum crystals, which depend less on the crystalline symmetry. Beyond this pressure the system enters a bad-metal phase in which the density of states at the Fermi level increases with pressure up to 450–500 GPa, when the direct gap closes. Our work partially supports the interpretation of recent experiments on high-pressure hydrogen; however, the scenario in which solid-hydrogen metallization is accompanied by a structural change, for example a molecular dissociation, cannot be ruled out. We also explore the possibility of using a multideterminant representation of excited states to model neutral excitations and compute the conductivity via the Kubo formula, applying this methodology to ideal crystalline hydrogen at the variational Monte Carlo level of theory. For liquid hydrogen, the main finding is that the gap closure is continuous and coincides with the molecular dissociation transition. We were able to benchmark density functional theory (DFT) functionals against the QMC density of states. When using the QMC-renormalized Kohn-Sham eigenvalues to compute optical properties within the Kubo-Greenwood theory, we found that previously calculated theoretical optical absorption is shifted towards lower energies.
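The variational Monte Carlo (VMC) level of theory mentioned in this abstract can be illustrated on the simplest possible system. The sketch below is generic — it is not the CEIMC machinery of the thesis — and uses a hydrogen atom with the textbook trial wavefunction ψ = e^(−αr), whose local energy is E_L = −α²/2 + (α−1)/r in atomic units; step size and sample counts are arbitrary choices:

```python
import numpy as np

def vmc_hydrogen(alpha, n_steps=200_000, step=0.5, seed=1):
    """Variational MC for the hydrogen atom with trial psi = exp(-alpha*r).

    Local energy: E_L = -alpha**2/2 + (alpha - 1)/r (atomic units).
    The exact variational energy is E(alpha) = alpha**2/2 - alpha.
    """
    rng = np.random.default_rng(seed)
    pos = np.array([1.0, 0.0, 0.0])
    r = np.linalg.norm(pos)
    energies = []
    for i in range(n_steps):
        trial = pos + rng.normal(scale=step, size=3)
        r_new = np.linalg.norm(trial)
        # Metropolis acceptance with ratio |psi_new/psi_old|^2
        if rng.random() < np.exp(-2.0 * alpha * (r_new - r)):
            pos, r = trial, r_new
        if i > n_steps // 10:                 # discard equilibration steps
            energies.append(-alpha**2 / 2 + (alpha - 1.0) / r)
    return float(np.mean(energies))

E = vmc_hydrogen(alpha=1.0)
print(f"VMC energy at alpha=1: {E:.4f} (exact ground state: -0.5)")
```

At α = 1 the trial function is the exact ground state, so the local energy is constant and the statistical variance vanishes — the zero-variance property that makes VMC attractive.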
Omaghomi, Toritseju O. "Analysis of Methods for Estimating Water Demand in Buildings." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1406881340.
Full textLeme, Rafael Reis [UNESP]. "Teoria quântica do campo escalar real com autoacoplamento quártico - simulações de Monte Carlo na rede com um algoritmo worm." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/92038.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
In this work we present results of Monte Carlo simulations of the φ⁴ quantum field theory on a (1+1)-dimensional lattice employing the recently proposed worm algorithm. In Monte Carlo simulations, the efficiency of an algorithm is measured in terms of a dynamical critical exponent ζ, which relates the autocorrelation time τ of the measurements to the lattice length L as τ ∝ L^ζ. The autocorrelation time provides a measure of the "memory" of the Monte Carlo updating process. The worm algorithm has a ζ comparable to those obtained with the efficient cluster algorithms, but uses local updates only. We present results for observables as functions of the unrenormalized parameters of the theory, λ and μ². Particular attention is devoted to the vacuum expectation value ⟨φ(x)⟩ and the two-point correlation function ⟨φ(x)φ(x′)⟩. We determine the critical line (λ_c, μ²_c) that separates the symmetric and spontaneously broken phases and compare with results from the literature.
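The autocorrelation time τ that this abstract uses to compare algorithms can be estimated from any Monte Carlo time series with the standard integrated-autocorrelation estimator and a self-consistent window. A minimal sketch, tested here on a synthetic AR(1) series whose exact τ is known (the window constant c = 6 is a conventional choice, not from the thesis):

```python
import numpy as np

def integrated_autocorr_time(x, c=6.0):
    """Integrated autocorrelation time with a self-consistent window cutoff."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    # autocovariance via FFT (zero-padded to avoid circular wraparound)
    f = np.fft.rfft(x, n=2 * n)
    acov = np.fft.irfft(f * np.conjugate(f))[:n] / n
    rho = acov / acov[0]                      # normalized autocorrelation
    tau = 1.0
    for w in range(1, n):
        tau = 1.0 + 2.0 * rho[1:w + 1].sum()
        if w >= c * tau:                      # standard windowing condition
            break
    return max(tau, 1.0)

rng = np.random.default_rng(0)
# AR(1) test process: exact tau_int = (1 + a) / (1 - a)
a = 0.9
eps = rng.normal(size=200_000)
x = np.empty_like(eps)
x[0] = eps[0]
for t in range(1, len(eps)):
    x[t] = a * x[t - 1] + eps[t]

tau_hat = integrated_autocorr_time(x)
print(f"estimated tau = {tau_hat:.1f} (exact for this AR(1): "
      f"{(1 + a) / (1 - a):.1f})")
```

Running the same estimator on magnetization-like observables at several lattice sizes L and fitting τ ∝ L^ζ gives the dynamical exponent the abstract refers to.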
BRITO, ANDREIA B. de. "Determinação da taxa de desintegração de Tc-99m e In-111 em sistemas de coincidências." Repositório Institucional do IPEN, 2011. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10047.
Master's dissertation (Dissertação de Mestrado) — Instituto de Pesquisas Energéticas e Nucleares, IPEN-CNEN/SP.
Centeno, Hugo Alexandre do Carmo. "MODELAGEM DE CRONOGRAMA DE PROJETO PELA FERRAMENTA DSM COM APOIO AO GERENCIAMENTO E TOMADA DE DECISÕES PELA SIMULAÇÃO DE MONTE CARLO." Pontifícia Universidade Católica de Goiás, 2018. http://tede2.pucgoias.edu.br:8080/handle/tede/3971.
Delays in construction projects are a recurrent fact in several countries and regions and for the most varied types of works. Besides negatively affecting the image of the companies contracted to provide construction services, these delays cause financial losses to both interested parties: the contracting party and the contractor. Several studies on the factors that lead to schedule delays rank the variation in the duration of activities, and/or inadequate estimates of those durations, among the ten major delay factors in construction projects. Contributing to inadequate production estimates are the lack of knowledge of the complexity of the activities and the lack or insufficiency of historical data that would allow durations to be estimated safely. This study therefore aims to describe the variability in the duration of activities in an environment with little productivity data. The productivity data analyzed were taken from three similar construction projects carried out by the same company, which served as the case studies of this work. After collection of the productivity data, nonparametric Bootstrap resampling associated with Monte Carlo Simulation (MCS) was used to analyze the variability and describe the behavior of the activities. Subsequently, to verify the reliability of the resulting duration variations, the schedule of the case studies was modeled using the Critical Path Method (CPM) and the Dependency Structure Matrix (DSM) and submitted to MCS. The simulated total project durations matched the actual completion periods of the projects studied, leading to the conclusion that the duration variations obtained by the cited technique are reliable.
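The bootstrap-plus-Monte-Carlo procedure this abstract describes can be sketched in a few lines. Everything numeric below is invented for illustration — the productivity samples, quantities, and the assumption of three serial critical-path activities are not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily productivity samples (m2/day) for three serial
# critical-path activities -- deliberately small samples, as in
# data-poor projects.
productivity = {
    "formwork": np.array([12.0, 10.5, 13.2, 11.8, 9.9]),
    "rebar":    np.array([8.1, 7.4, 9.0, 8.6]),
    "concrete": np.array([15.0, 14.2, 16.3, 13.8, 15.5, 14.9]),
}
quantity = {"formwork": 480.0, "rebar": 320.0, "concrete": 600.0}  # m2

n_sim = 20_000
totals = np.zeros(n_sim)
for name, samples in productivity.items():
    # nonparametric bootstrap: resample productivities with replacement,
    # then convert each draw to an activity duration (quantity / rate)
    draws = rng.choice(samples, size=n_sim, replace=True)
    totals += quantity[name] / draws          # serial activities add up

p10, p50, p90 = np.percentile(totals, [10, 50, 90])
print(f"total duration (days): P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")
```

A real CPM/DSM schedule replaces the simple sum with a forward pass over the activity network in each Monte Carlo iteration, but the resampling idea is the same.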
MATOS, IZABELA T. de. "Padronização dos radionuclídeos F-18 e In-111 e determinação dos coeficientes de conversão interna total para o In-111 em sistema de coincidência por software." Repositório Institucional do IPEN, 2014. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10615.
Master's dissertation (Dissertação de Mestrado) — Instituto de Pesquisas Energéticas e Nucleares, IPEN-CNEN/SP.
Nhongo, Tawuya D. R. "Pricing exotic options using C++." Thesis, Rhodes University, 2007. http://hdl.handle.net/10962/d1008373.
Full textIttiwattana, Waraporn. "A Method for Simulation Optimization with Applications in Robust Process Design and Locating Supply Chain Operations." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1030366020.
Full textYevseyeva, Olga. "Modelagem computacional de tomografia com feixe de prótons." Universidade do Estado do Rio de Janeiro, 2009. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=956.
Full textNessa tese foi feito um estudo preliminar, destinado à elaboração do programa experimental inicial para a primeira instalação da tomografia com prótons (pCT) brasileira por meio de modelagem computacional. A terapia com feixe de prótons é uma forma bastante precisa de tratamento de câncer. Atualmente, o planejamento de tratamento é baseado na tomografia computadorizada com raios X, alternativamente, a tomografia com prótons pode ser usada. Algumas questões importantes, como efeito de escala e a Curva de Calibração (fonte de dados iniciais para planejamento de terapia com prótons), foram estudados neste trabalho. A passagem de prótons com energias iniciais de 19,68MeV; 23MeV; 25MeV; 49,10MeV e 230MeV pelas camadas de materiais variados (água, alumínio, polietileno, ouro) foi simulada usando códigos Monte Carlo populares como SRIM e GEANT4. Os resultados das simulações foram comparados com a previsão teórica (baseada na solução aproximada da equação de transporte de Boltzmann) e com resultados das simulações feitas com outro popular código Monte Carlo MCNPX. Análise comparativa dos resultados das simulações com dados experimentais publicados na literatura científica para alvos grossos e na faixa de energias de prótons usada em medidas em pCT foi feita. Foi observado que apesar de que todos os códigos mostram os resultados parecidos alguns deslocamentos não sistemáticos podem ser observados. Foram feitas observações importantes sobre a precisão dos códigos e uma necessidade em medidas sistemáticas de frenagem de prótons em alvos grossos foi declarada.
In the present work, preliminary research via computer simulation was carried out in order to elaborate a program for the first experimental pCT setup in Brazil. Proton therapy is a highly precise form of cancer treatment. Treatment planning is nowadays based on X-ray computed tomography (CT) data; alternatively, the same procedure can be performed using proton computed tomography (pCT). Some important questions, such as the scale effect and the so-called calibration curve (the source of primary data for pCT treatment planning), were studied in this work. The passage of 19.68 MeV, 23 MeV, 25 MeV, 49.10 MeV and 230 MeV protons through various absorbers (water, aluminum, polyethylene, gold) was simulated with the popular Monte Carlo packages SRIM and GEANT4. The simulation results were compared with a theoretical prediction, based on an approximate solution of the Boltzmann transport equation, and with simulations performed with another popular Monte Carlo code, MCNPX. A comparative analysis of the simulation results against experimental data published in the scientific literature for thick absorbers, within the energy range used in pCT measurements, was also made. It was noted that, although all codes show similar results, some nonsystematic displacements can be observed. Important observations about the precision of the codes were made, and the need for systematic measurements of proton stopping power in thick absorbers was identified.
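The calibration-curve idea in this abstract — residual proton energy as a function of traversed thickness — can be caricatured with a toy continuous-slowing-down model. The sketch below uses the Bragg-Kleeman power-law range-energy rule R = aE^p with rough textbook constants for water and a crude 1.2% Gaussian range-straggling rule of thumb; none of these numbers come from the thesis, and a real study would use SRIM/GEANT4 as the abstract describes:

```python
import numpy as np

# Bragg-Kleeman rule R = a * E**p (textbook approximation for protons in
# water; a and p are rough literature values, not thesis data)
A, P = 0.0022, 1.77        # R in cm, E in MeV

def range_cm(E):
    return A * E**P

def energy_after(E_in, thickness_cm, n=50_000, sigma_frac=0.012, seed=3):
    """Toy MC: residual proton energy after a water slab.

    Range straggling is modeled as a Gaussian spread of the full range
    (sigma ~1.2% of the range, a common rule of thumb for water).
    """
    rng = np.random.default_rng(seed)
    R0 = range_cm(E_in)
    # each proton history gets a slightly different total range
    R = rng.normal(R0, sigma_frac * R0, size=n)
    residual = np.clip(R - thickness_cm, 0.0, None)
    return (residual / A) ** (1.0 / P)        # invert the Bragg-Kleeman rule

E_out = energy_after(230.0, thickness_cm=10.0)
print(f"mean exit energy: {E_out.mean():.1f} MeV, "
      f"spread: {E_out.std():.1f} MeV")
```

Sweeping `thickness_cm` and recording the mean exit energy traces out a toy version of the pCT calibration curve.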
MARCHANT FUENTES, Carolina Ivonne. "Essays on multivariate generalized Birnbaum-Saunders methods." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18647.
CAPES; BOLSA DO CHILE.
In the last decades, univariate Birnbaum-Saunders models have received considerable attention in the literature. These models have been widely studied and applied to fatigue, but they have also been applied to other areas of knowledge, in which it is often necessary to model several variables simultaneously. If these variables are correlated, individual analyses for each variable can lead to erroneous results. Multivariate regression models are a useful tool of multivariate analysis that takes the correlation between variables into account. In addition, diagnostic analysis is an important aspect to be considered in statistical modeling. Furthermore, multivariate quality control charts are powerful and simple visual tools for determining whether a multivariate process is in control or out of control; a multivariate control chart shows how several variables jointly affect a process. First, we propose, derive and characterize multivariate generalized logarithmic Birnbaum-Saunders distributions, and we propose new multivariate generalized Birnbaum-Saunders regression models. We use maximum likelihood estimation through the expectation-maximization (EM) algorithm to estimate their parameters, and we carry out a simulation study based on the Monte Carlo method to evaluate the performance of the corresponding estimators. We validate the proposed models with a regression analysis of real-world multivariate fatigue data. Second, we conduct a diagnostic analysis for multivariate generalized Birnbaum-Saunders regression models. We consider the Mahalanobis distance as a global influence measure to detect multivariate outliers and use it to evaluate the adequacy of the distributional assumption. Moreover, we consider the local influence method and study how a perturbation may impact the estimation of model parameters. We implement the obtained results in the R software and illustrate them with real-world multivariate biomaterials data.
Third and finally, we develop a robust methodology based on multivariate quality control charts for generalized Birnbaum-Saunders distributions with the Hotelling statistic, using the parametric bootstrap method to obtain the distribution of this statistic. A Monte Carlo simulation study is conducted to evaluate the proposed methodology and shows that it provides early alerts of out-of-control conditions. An illustration with real-world air quality data from Santiago, Chile, shows that the proposed methodology can be useful for warning of episodes of extreme air pollution.
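The chart construction in this abstract — a Hotelling statistic with a control limit taken from a parametric bootstrap — can be sketched as follows. For brevity the fitted model here is plain multivariate Gaussian rather than the generalized Birnbaum-Saunders distribution of the thesis, and all process values are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Phase I: in-control data (hypothetical 3-variable Gaussian process; the
# thesis fits generalized Birnbaum-Saunders distributions instead)
n, p = 100, 3
mu_true = np.array([10.0, 5.0, 2.0])
cov_true = np.array([[1.0, 0.3, 0.1],
                     [0.3, 1.0, 0.2],
                     [0.1, 0.2, 0.5]])
phase1 = rng.multivariate_normal(mu_true, cov_true, size=n)
mu_hat = phase1.mean(axis=0)
S = np.cov(phase1.T)
S_inv = np.linalg.inv(S)

def t2(x):
    """Hotelling statistic for a single observation."""
    d = x - mu_hat
    return float(d @ S_inv @ d)

# Parametric bootstrap: simulate from the fitted model to approximate the
# distribution of T2 and take its 99th percentile as the control limit.
boot = rng.multivariate_normal(mu_hat, S, size=20_000)
ucl = np.percentile([t2(x) for x in boot], 99)

shifted = mu_true + np.array([4.0, 0.0, 0.0])   # clearly out-of-control point
print(f"UCL = {ucl:.2f}, T2(in-control mean) = {t2(mu_true):.2f}, "
      f"T2(shifted) = {t2(shifted):.2f}")
```

Phase II monitoring then plots `t2(x)` for each new observation against `ucl` and signals whenever the statistic exceeds the limit.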
Hidalgo, Francisco Luiz Campos. "Quantificação da incerteza do problema de flexão estocástica de uma viga de Euler-Bernoulli, apoiada em fundação de Pasternak, utilizando o método estocástico de Galerkin e o método dos elementos finitos estocásticos." Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/1069.
Full textThis study presents a methodology, based on the Galerkin method, to quantify the uncertainty in the stochastic bending problem of an Euler-Bernoulli beam resting on a Pasternak foundation. The uncertainty in the stiffness coefficients of the beam and foundation is represented by parametrized stochastic processes. The probability limitation on the random parameters and the choice of an appropriated approximate solution space, necessary for the subsequent demonstration of uniqueness and existence of the problem, are considered by means of theoretical hypothesis. The finite dimensional space of approximate solutions is built by tensor product between spaces (deterministic and randomic), obtaining a dense space in the theoretical solution space. The Wiener-Askey scheme of generalizes chaos polynomials is used to represent the stochastic process of the beam deflection. The stochastic finite element method is presented and employed in the numerical solution of selected examples. The results, in terms of statistical moments, are compared to results obtained through Monte Carlo simulations.
Santos, Marcelo Borges dos. "Estimativas dos momentos estatísticos para o problema de flexão estocástica de viga em uma fundação Pasternak." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1301.
Full textThis work proposes the resolution of stochastic bending problem in a Euler- Bernoulli beam, on a foundation type Pasternak, through a computational method based on Monte Carlo simulation. Uncertainty is present in the elastic coefficients of the beam and foundation. First, it is established the mathematical formulation of the problem which is derived from a physical model displacement of the beam, that takes into account the influence of the foundation on the problem of response. This requires an approach that is made up on the most common models of foundation, which are: the model Winkler type and model of Pasternak.In sequence we study the existence and uniqueness of the variational problem. To obtain the solution of the problem, a mathematical reasoning is carried out, to the following matters: representation of uncertainty, Galerkin method, serial Neumann, and finally the lower and upper bounds. Finally, the performance of lower and upper bounds, derived from direct simulation of Monte Carlo were evaluated through various cases where the uncertainty lies in the different coefficients composing the equation bending as a variational problem. The method proved to be efficient, both in the response of the convergence point as regards the computational cost.
Russew, Thomas. "Etude et simulation d'un détecteur pour l'expérience GRAAL à l'ESRF : application à la photoproduction d'étrangeté." Université Joseph Fourier (Grenoble), 1995. http://www.theses.fr/1995GRE10182.
Full textMadeira, Marcelo Gomes. "Comparação de tecnicas de analise de risco aplicadas ao desenvolvimento de campos de petroleo." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263732.
Full textDissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica, Instituto de Geociencias
Abstract: Petroleum-field decision-making is associated with high risks due to geological, economic and technological uncertainties, and with high investments, mainly in the appraisal and development phases, where it is necessary to model the recovery process with higher precision, increasing computational time. One way to speed up the process is to simplify it; some simplifications are discussed in this work: the technique used to quantify risk (Monte Carlo or derivative tree), reduction of the number of attributes, simplified treatment of attributes, and simplification of the reservoir modeling process. Special emphasis is given to (1) the comparison between the Monte Carlo and derivative-tree techniques and (2) the development of fast models through experimental design and the response surface method. Recent works present these techniques, but they normally show applications without comparing alternatives. The objective of this work is to compare these techniques taking into account reliability, precision of the results and speedup of the process. The techniques are applied to an offshore field, and the results show that it is possible to reduce the number of flow simulations significantly while maintaining precision, and that some simplifications can yield different results, affecting the decision process.
Master's program in Reservoirs and Management (Reservatórios e Gestão); degree of Mestre em Ciências e Engenharia de Petróleo.
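The two risk-quantification techniques this abstract compares — full Monte Carlo sampling versus a discrete derivative (decision) tree — can be contrasted on a toy economic model. The NPV function, the three uncertain attributes, and the classic three-branch P10/P50/P90 discretization with 0.25/0.50/0.25 weights below are all illustrative, not from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical NPV model with three uncertain attributes (million USD)
def npv(oil_price, reserves, capex):
    return oil_price * reserves * 0.35 - capex

# Monte Carlo: sample the attributes from their full distributions
n = 100_000
mc = npv(rng.normal(70, 10, n),      # oil price, USD/bbl
         rng.normal(50, 8, n),       # reserves, MMbbl
         rng.normal(900, 100, n))    # capex, MUSD

# Derivative tree: each attribute collapsed to three levels (P10/P50/P90)
# with the classic 0.25/0.50/0.25 weights -- 27 branches in total
levels = lambda mu, sd: ([mu - 1.2816 * sd, mu, mu + 1.2816 * sd],
                         [0.25, 0.50, 0.25])
tree_mean = 0.0
for pr, wp in zip(*levels(70, 10)):
    for re, wr in zip(*levels(50, 8)):
        for ca, wc in zip(*levels(900, 100)):
            tree_mean += wp * wr * wc * npv(pr, re, ca)

print(f"MC mean NPV = {mc.mean():.1f} MUSD, "
      f"tree mean NPV = {tree_mean:.1f} MUSD")
```

For a smooth model like this the 27-branch tree reproduces the Monte Carlo mean at a tiny fraction of the sampling cost — exactly the kind of trade-off between precision and speedup the dissertation investigates.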
LACERDA, FLAVIO W. "Padronização de ⁶⁸Ga em sistema de coincidências 4πβ-γ." Repositório Institucional do IPEN, 2013. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10211.
Master's dissertation (Dissertação de Mestrado) — Instituto de Pesquisas Energéticas e Nucleares, IPEN-CNEN/SP.
SILVA, Elson Natanael Moreira. "ESTIMAÇÃO PROBABILÍSTICA DO NÍVEL DE DISTORÇÃO HARMÔNICA TOTAL DE TENSÃO EM REDES DE DISTRIBUIÇÃO SECUNDÁRIAS COM GERAÇÃO DISTRIBUÍDA FOTOVOLTAICA." Universidade Federal do Maranhão, 2017. http://tedebc.ufma.br:8080/jspui/handle/tede/1296.
Full text
Made available in DSpace on 2017-04-17T13:14:17Z (GMT). No. of bitstreams: 1 Elson Moreira.pdf: 7883984 bytes, checksum: cf59b3b0b24a249a7fd9e2390b7f16de (MD5) Previous issue date: 2017-02-10
CNPQ
A power quality problem that persistently affects consumers on distribution networks is harmonic distortion. Harmonic distortions arise from the presence of so-called harmonic sources: nonlinear equipment, i.e., equipment in which the voltage waveform differs from the current waveform. Such equipment injects harmonic currents into the network, generating distortions in the voltage waveform. Nowadays the number of these devices in the electrical network has increased considerably, making systems more vulnerable and prone to quality problems in the supply of electricity to consumers. In addition, in the current scenario, the generation of electricity from renewable sources connected to the secondary distribution network is increasing rapidly, mainly because of the shortage and high cost of fossil fuels. In this context, Photovoltaic Distributed Generation (PVDG), which uses the sun as the primary source for electricity generation, is the main renewable generation technology installed in distribution networks. However, PVDG is a potential source of harmonics, because its interface with the AC network is a DC/AC inverter, a highly nonlinear device. Thus, power quality problems associated with harmonic distortion in distribution networks tend to increase and become very frequent. One of the main indicators of harmonic distortion is the total harmonic distortion of voltage (THDv), used by distribution utilities to limit the levels of harmonic distortion present in the electrical network. In the literature there are several deterministic techniques for estimating THDv. These techniques have the disadvantage of not considering the uncertainties present in the electric network, such as changes in network configuration, load variation, and the intermittency of the power injected by renewable distributed generation.
Therefore, in order to provide a more accurate assessment of harmonic distortions, this dissertation develops a probabilistic methodology to estimate the level of THDv in secondary distribution networks, considering the uncertainties present in the network and in the PVDG connected along it. The proposed methodology combines the following techniques: three-phase harmonic power flow in phase coordinates via the admittance summation method, the point estimate method, and Gram-Charlier series expansion. The methodology was validated against Monte Carlo simulation and tested on a European secondary distribution network with 906 nodes at 416 V. Results were obtained for two case studies: without PVDG and with PVDG connected. For each case, the following statistics of the nodal THDv were estimated: mean value, standard deviation and the 95th percentile. The results showed that the probabilistic estimation of THDv is more complete, since it captures the variation of THDv due to the uncertainties associated with the harmonic sources and the electric network. They also show that connecting PVDG significantly affects the THDv levels of the network.
[Translated from Portuguese:] A power quality problem that affects consumers on the secondary distribution network is harmonic distortion. Harmonic distortions come from the presence of so-called harmonic sources, equipment with nonlinear characteristics, i.e., equipment in which the voltage waveform differs from the current waveform. Such equipment injects harmonic currents into the network and therefore produces distortions in the voltage waveform. Nowadays, the number of these devices in the electrical network has increased considerably. The growing use of this type of equipment throughout the network makes systems more vulnerable and prone to quality problems in the supply of electricity to consumers. Furthermore, in the current scenario the generation of electricity from renewable sources connected to the secondary distribution network is growing rapidly, mainly because of the scarcity and high cost of fossil fuels. In this context, Photovoltaic Distributed Generation (PVDG), which uses the sun as the primary source for electricity generation, is the main renewable generation technology installed in the distribution network in Brazil. However, PVDG is a potential harmonic source, since its interface with the AC network is a DC/AC inverter, a highly nonlinear device. Consequently, power quality problems associated with harmonic distortion in distribution networks tend to increase and to become very frequent for consumers of secondary distribution networks. One of the main indicators of harmonic distortion is the total harmonic distortion of voltage (THDv), used by electric utilities to quantify the harmonic distortion levels present in the network.
In the technical literature there are several deterministic techniques for estimating THDv. These techniques have the disadvantage of not considering the uncertainties present in the electric network, such as changes in network configuration, load variation and the intermittency of the power injected by renewable distributed generation. Therefore, to provide a more accurate assessment of harmonic distortions, the main objective of this work is to develop a probabilistic methodology for estimating the THDv level in secondary distribution networks, considering the uncertainties present in the network and in the PVDG connected along it. The methodology proposed in this dissertation combines the following techniques: three-phase harmonic power flow in phase coordinates via the admittance summation method, the point estimate method and Gram-Charlier series expansion. The methodology was validated using Monte Carlo simulation and tested on the European secondary distribution network with 906 nodes at 416 V. Results were obtained for two case studies: without PVDG and with PVDG connected. For both case studies the following statistics of the nodal THDv were estimated: mean value, standard deviation and the 95th percentile. The results showed that the probabilistic estimation of THDv is more complete, since it shows the variation of THDv due to the uncertainties associated with the harmonic sources and the electric network. The results also show that connecting PVDG significantly affects the THDv levels of the network.
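A Monte Carlo validation of the kind used in this dissertation can be sketched in a few lines: propagate uncertain harmonic magnitudes through the THDv definition and collect the mean, standard deviation and 95th percentile. The fundamental voltage and the uniform ranges for the 5th and 7th harmonics below are illustrative assumptions, not data from the 906-node network studied:

```python
import random
import statistics

def thd_v(v1, harmonics):
    """Total harmonic distortion of voltage: THDv = sqrt(sum Vh^2) / V1."""
    return (sum(vh * vh for vh in harmonics) ** 0.5) / v1

def mc_thd_statistics(n=20_000, seed=7):
    """Monte Carlo propagation of uncertain harmonic magnitudes to THDv
    statistics (mean, standard deviation, 95th percentile)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        v1 = 240.0                                   # fundamental, V (assumed)
        h5 = rng.uniform(2.0, 8.0)                   # 5th harmonic, V (assumed)
        h7 = rng.uniform(1.0, 5.0)                   # 7th harmonic, V (assumed)
        samples.append(100.0 * thd_v(v1, [h5, h7]))  # THDv in percent
    samples.sort()
    return (statistics.mean(samples),
            statistics.stdev(samples),
            samples[int(0.95 * n)])                  # 95th percentile
```

The point estimate method the thesis actually employs replaces the 20,000 random draws with a handful of deterministic evaluations, which is what makes it cheaper than this brute-force reference.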
Yagui, Akemi. "Avaliação da interação de feixes monoenergéticos e polienergéticos por meio de simulações em GEANT4 em fantomas diversos." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2716.
Full text
Proton therapy is available in 16 countries and by 2015 had treated more than 130,000 patients. In Brazil, however, this therapy is not yet available, for several reasons, the main one being its high cost. Before performing treatments, tests are needed to verify the energy delivery of the proton beams. As microdosimetry measurements are very expensive, the main alternative is to carry out simulations in programs such as GEANT4 and SRIM. GEANT4 allows the simulation of complex geometries, while SRIM performs simpler simulations; both work with the Monte Carlo method. In this work these two tools were used to simulate a proton beam in phantoms with three different compositions (water; bones and water; brain and bones). To analyze the energy delivery of the proton beams along these phantoms, a program called "Data Processing Program Proton Therapy Simulated" was created, which builds matrices and computes the Bragg peaks used to evaluate the interactions. In addition, the homogeneity of the integration of a proton beam into a detector was analyzed, verifying that the GEANT4 simulations are homogeneous: the beam shows no tendency to concentrate in a particular region, and the deposited energies are equal. The depth of the Bragg peak was also evaluated in cylindrical phantoms with three different densities, 1.03 g/cm³, 1.53 g/cm³ and 2.03 g/cm³, the first being the density provided by GEANT4 for brain tissue. It was found that the Bragg peak depths are the same at these three different densities.
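As a rough analytical cross-check for simulations like these, the empirical Bragg-Kleeman rule relates proton energy to range in water, R = αE^p. A minimal sketch, using commonly quoted fit values for water; it is no substitute for full GEANT4 transport in heterogeneous phantoms:

```python
def bragg_kleeman_range(energy_mev, alpha=0.0022, p=1.77):
    """Empirical Bragg-Kleeman rule for the proton range in water, in cm.
    alpha and p are commonly quoted fit values for water."""
    return alpha * energy_mev ** p

def residual_energy(energy_mev, depth_cm, alpha=0.0022, p=1.77):
    """Energy (MeV) remaining after traversing depth_cm of water, obtained
    by inverting the range rule; the dose peaks sharply near the end of
    range, which is the Bragg peak."""
    r = bragg_kleeman_range(energy_mev, alpha, p)
    if depth_cm >= r:
        return 0.0
    return ((r - depth_cm) / alpha) ** (1.0 / p)
```

For a 150 MeV beam this gives a range of roughly 15-16 cm in water, with nearly all the residual energy deposited in the last centimetres.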
Maucourt, Jérôme. "Étude par simulations numériques des transitions de phase dans les modèles de spins XY désordonnés." Université Joseph Fourier (Grenoble ; 1971-2015), 1997. http://www.theses.fr/1997GRE10216.
Full textCao, Liang. "Numerical analysis and multi-precision computational methods applied to the extant problems of Asian option pricing and simulating stable distributions and unit root densities." Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/6539.
Full text"Monte Carlo simulation in risk estimation." 2013. http://library.cuhk.edu.hk/record=b5549771.
Full text
[Translated from Chinese:] Chapter 2 forms the first part of this thesis. In this chapter we cast the problem of estimating American option sensitivities as a more general one: given a stochastic optimization problem that depends on model parameters, how do we estimate the sensitivity of its optimal objective value with respect to those parameters? Because the optimal decision may be discontinuous in the model parameters, classical infinitesimal perturbation analysis cannot be applied directly. To overcome this difficulty we propose a generalized infinitesimal perturbation analysis approach that yields unbiased sensitivity estimators. Our approach shows that pathwise differentiability with respect to the parameters is in fact not needed for sensitivity estimation, which is a new theoretical insight. The method is also easy to apply to American option sensitivities: in practice the unbiased estimators can be embedded directly into popular American option pricing algorithms, producing the option price and its parameter sensitivities simultaneously. Numerical experiments, including high-dimensional problems and a variety of stochastic process models, show that the estimators have significant computational advantages. Finally, we characterize theoretically the effect of near-optimal exercise policies on the sensitivity estimates and provide an error bound.
[Translated from Chinese:] Chapter 3 forms the second part of this thesis. In this chapter we study the problem of estimating portfolio risk, which can also be generalized: how to estimate the expectation of a nonlinear functional applied to a conditional expectation. For this class of problems we propose a multilevel simulation approach. Our estimator is a linear combination of simple nested estimators. The method is very easy to implement and applies broadly across problem structures. Theoretical analysis shows that our method works in any dimension, with lower algorithmic complexity than existing methods in the literature. Numerical experiments in both low and high dimensions confirm the theoretical analysis.
This dissertation mainly consists of two parts: a generalized infinitesimal perturbation analysis (IPA) approach for American option sensitivities estimation and a multilevel Monte Carlo simulation approach for portfolio risk estimation.
In the first part, we develop efficient Monte Carlo methods for estimating American option sensitivities. The problem can be re-formulated as how to perform sensitivity analysis for a stochastic optimization problem when it has model uncertainty. We introduce a generalized IPA approach to resolve the difficulty caused by discontinuity of the optimal decision with respect to the underlying parameter. The unbiased price-sensitivity estimators yielded from this approach demonstrate significant advantages numerically in both high dimensional environments and various process settings. We can easily embed them into many of the most popular pricing algorithms without extra simulation effort to obtain sensitivities as a by-product of the option price. This generalized approach also casts new insights on how to perform sensitivity analysis using IPA: we do not need pathwise differentiability to apply it. Another contribution of this chapter is to investigate how the estimation quality of sensitivities will be affected by the quality of approximated exercise times.
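For intuition, the classical special case of IPA is the pathwise delta of a European call under geometric Brownian motion, where no exercise decision intervenes; a minimal sketch (the market parameters in the test are illustrative, and the generalized IPA of the thesis extends this idea to American options, where the exercise decision depends on the parameter):

```python
import math
import random

def ipa_delta_call(s0, k, r, sigma, t, n=200_000, seed=3):
    """Pathwise (IPA) estimator of a European call delta under GBM:
    dC/dS0 = E[ exp(-r*T) * 1{S_T > K} * S_T / S0 ]."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        # Terminal price under geometric Brownian motion.
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t
                           + sigma * math.sqrt(t) * z)
        if st > k:  # the payoff derivative is zero on out-of-the-money paths
            acc += math.exp(-r * t) * st / s0
    return acc / n
```

For S0 = K = 100, r = 5%, sigma = 20%, T = 1 the estimate converges to the Black-Scholes delta of about 0.637.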
In the second part, we propose a multilevel nested simulation approach to estimate the expectation of a nonlinear function of a conditional expectation, which has a direct application in portfolio risk estimation problems under various risk measures. Our estimator consists of a linear combination of several standard nested estimators. It is very simple to implement and universally applicable across various problem settings. The results of theoretical analysis show that the algorithmic complexities of our estimators are independent of the problem dimensionality and are better than other alternatives in the literature. Numerical experiments, in both low and high dimensional settings, verify our theoretical analysis.
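The structure of a nested estimator, and the way a multilevel combination attacks its inner-sampling bias, can be sketched on a toy problem: estimating E[f(E[X|Y])] with Y ~ N(0,1) and X|Y ~ N(Y,1). The two-level extrapolation below is only a simplified illustration of the idea; the thesis combines several levels and provides the supporting complexity analysis:

```python
import random

def nested_estimate(f, n_outer, n_inner, rng):
    """Standard nested estimator of E[ f( E[X | Y] ) ] for the toy model
    Y ~ N(0,1), X | Y ~ N(Y,1).  The inner sample mean is a noisy stand-in
    for the conditional expectation, which biases a nonlinear f."""
    total = 0.0
    for _ in range(n_outer):
        y = rng.gauss(0.0, 1.0)
        inner = sum(rng.gauss(y, 1.0) for _ in range(n_inner)) / n_inner
        total += f(inner)
    return total / n_outer

def two_level_estimate(f, n_outer, n_inner, rng):
    """Minimal two-level combination in the spirit of multilevel nested
    simulation: extrapolating two inner sample sizes cancels the leading
    O(1/n_inner) bias."""
    coarse = nested_estimate(f, n_outer, n_inner, rng)
    fine = nested_estimate(f, n_outer, 2 * n_inner, rng)
    return 2.0 * fine - coarse
```

With f(x) = x² the true value is E[Y²] = 1, the standard estimator with four inner samples is biased to about 1.25, and the two-level combination recovers a value near 1.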
Detailed summary in vernacular field only.
Liu, Yanchu.
"December 2012."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2013.
Includes bibliographical references (leaves 89-96).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Abstract --- p.i
Abstract in Chinese --- p.iii
Acknowledgements --- p.v
Contents --- p.vii
List of Tables --- p.ix
List of Figures --- p.xii
Chapter 1. --- Overview --- p.1
Chapter 2. --- American Option Sensitivities Estimation via a Generalized IPA Approach --- p.4
Chapter 2.1. --- Introduction --- p.4
Chapter 2.2. --- Formulation of the American Option Pricing Problem --- p.10
Chapter 2.3. --- Main Results --- p.14
Chapter 2.3.1. --- A Generalized IPA Approach in the Presence of a Decision Variable --- p.16
Chapter 2.3.2. --- Unbiased First-Order Sensitivity Estimators --- p.21
Chapter 2.4. --- Implementation Issues and Error Analysis --- p.23
Chapter 2.5. --- Numerical Results --- p.26
Chapter 2.5.1. --- Effects of Dimensionality --- p.27
Chapter 2.5.2. --- Performance under Various Underlying Processes --- p.29
Chapter 2.5.3. --- Effects of Exercising Policies --- p.31
Chapter 2.6. --- Conclusion Remarks and Future Work --- p.33
Chapter 2.7. --- Appendix --- p.35
Chapter 2.7.1. --- Proofs of the Main Results --- p.35
Chapter 2.7.2. --- Likelihood Ratio Estimators --- p.43
Chapter 2.7.3. --- Derivation of Example 2.3 --- p.49
Chapter 3. --- Multilevel Monte Carlo Nested Simulation for Risk Estimation --- p.52
Chapter 3.1. --- Introduction --- p.52
Chapter 3.1.1. --- Examples --- p.53
Risk Measurement of Financial Portfolios --- p.53
Derivatives Pricing --- p.55
Partial Expected Value of Perfect Information --- p.56
Chapter 3.1.2. --- A Standard Nested Estimator --- p.57
Chapter 3.1.3. --- Literature Review --- p.59
Chapter 3.1.4. --- Summary of Our Contributions --- p.61
Chapter 3.2. --- The Multilevel Approach --- p.63
Chapter 3.2.1. --- Motivation --- p.63
Chapter 3.2.2. --- Multilevel Construction --- p.65
Chapter 3.2.3. --- Theoretical Analysis --- p.67
Chapter 3.2.4. --- Further Improvement by Extrapolation --- p.69
Chapter 3.3. --- Numerical Experiments --- p.72
Chapter 3.3.1. --- Single Asset Setting --- p.73
Chapter 3.3.2. --- Multiple Asset Setting --- p.74
Chapter 3.4. --- Concluding Remarks --- p.77
Chapter 3.5. --- Appendix: Technical Assumptions and Proofs of the Main Results --- p.79
Bibliography --- p.89
Bidgood, Peter Mark. "Internal balance calibration and uncertainty estimation using Monte Carlo simulation." Thesis, 2014. http://hdl.handle.net/10210/9728.
Full text
The most common data sought during a wind tunnel test program are the forces and moments acting on an airframe (or any other test article). The most common source of these data is the internal strain-gauge balance. Balances are six-degree-of-freedom force transducers that are required to be of small size and of high strength and stiffness, and to deliver the highest possible levels of accuracy and reliability. There is a focus in both the USA and Europe on improving the performance of balances through collaborative research aimed at materials, design, sensors, electronics, calibration systems and calibration analysis methods. Recent developments in the use of statistical methods, including modern design of experiments, have resulted in improved balance calibration models. The research focus on the calibration of six-component balances has moved to the determination of the uncertainty of measurements obtained in the wind tunnel. The application of conventional statistically based approaches to the determination of the uncertainty of a balance measurement is proving problematic, and to some extent an impasse has been reached. The impasse is caused by the rapid expansion of the problem size when standard uncertainty determination approaches are used in a six-degree-of-freedom system that involves multiple least-squares regressions and iterative matrix solutions. This thesis describes how the uncertainty of loads reported by a six-component balance can be obtained by applying a direct Monte Carlo simulation of the end-to-end data flow of a balance, from calibration through to installation. It is postulated that knowledge of the error propagated into the test environment through the balance will influence the choice of calibration model, and that an improved model will be obtained compared to one determined by statistical methods without this knowledge.
Statistical approaches to the determination of a balance calibration model are driven by obtaining the best curve-fit statistics possible, which is done by adding as many coefficients to the modelling polynomial as can be statistically defended. This thesis shows that the propagated error significantly influences the choice of polynomial coefficients. To demonstrate this, a Performance Weighted Efficiency (PWE) parameter is defined: a combination of the curve-fit statistic (the back-calculated error for the chosen polynomial), a value representing the overall prediction interval for the model (CI_rand), and a value representing the overall total propagated uncertainty of loads reported by the installed balance...
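The end-to-end Monte Carlo idea can be sketched for a single-component balance: repeatedly perturb the calibration readings with sensor noise, refit the calibration, and observe the spread of the loads the fitted model reports. The linear model, gain, load schedule and noise level below are illustrative, far simpler than a six-component polynomial calibration:

```python
import random
import statistics

def fit_linear(xs, ys):
    """Ordinary least squares fit y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def calibration_uncertainty_mc(n_trials=2000, noise_sd=0.02, seed=5):
    """Monte Carlo end-to-end simulation of a one-component balance:
    known applied loads -> noisy bridge readings -> refitted calibration
    -> spread of the load predicted at a test reading."""
    rng = random.Random(seed)
    loads = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]  # applied loads, N (assumed)
    true_gain = 0.1                              # reading units per N (assumed)
    predictions = []
    for _ in range(n_trials):
        readings = [true_gain * w + rng.gauss(0.0, noise_sd) for w in loads]
        a, b = fit_linear(readings, loads)       # inverse calibration fit
        predictions.append(a + b * (true_gain * 25.0))  # predict a 25 N load
    return statistics.mean(predictions), statistics.stdev(predictions)
```

The standard deviation returned is the propagated load uncertainty; in the thesis the same loop runs over the full six-component calibration model.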
"A simulation approach to evaluate combining forecasts methods." Chinese University of Hong Kong, 1994. http://library.cuhk.edu.hk/record=b5888016.
Full text
Thesis (M.B.A.)--Chinese University of Hong Kong, 1994.
Includes bibliographical references (leaves 43-44).
ABSTRACT --- p.ii
TABLE OF CONTENTS --- p.iii
ACKNOWLEDGEMENT --- p.iv
CHAPTER
Chapter I. --- INTRODUCTION AND LITERATURE REVIEW --- p.1
Chapter II. --- COMBINING SALES FORECASTS --- p.7
Chapter III. --- EXPERIMENTAL DESIGN --- p.14
Chapter IV. --- SIMULATION RESULTS --- p.19
Chapter V. --- SUMMARY AND CONCLUSION --- p.27
APPENDIX --- p.31
BIBLIOGRAPHY --- p.43
James, Steven Doron. "The effect of simulation bias on action selection in Monte Carlo Tree Search." Thesis, 2016. http://hdl.handle.net/10539/21673.
Full text
Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. It combines a traditional tree-search approach with Monte Carlo simulations, using the outcomes of these simulations (also known as playouts or rollouts) to evaluate states in a look-ahead tree. That MCTS does not require an evaluation function makes it particularly well suited to the game of Go, seen by many as chess's successor as a grand challenge of artificial intelligence, with MCTS-based agents recently able to achieve expert-level play on 19×19 boards. Its domain-independent nature also makes it a focus in a variety of other fields, such as Bayesian reinforcement learning and general game-playing. Despite the vast amount of research into MCTS, the dynamics of the algorithm are still not fully understood. In particular, the effect of using knowledge-heavy or biased simulations in MCTS remains unknown, with interesting results indicating that better-informed rollouts do not necessarily result in stronger agents. This research provides support for the notion that MCTS is well suited to a class of domains possessing a smoothness property. In these domains, biased rollouts are more likely to produce strong agents. Conversely, any error due to incorrect bias is compounded in non-smooth domains, particularly for low-variance simulations. This is demonstrated empirically in a number of single-agent domains.
LG2017
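The selection step at the heart of MCTS is usually the UCT rule, which scores children by the UCB1 formula, balancing the mean rollout reward against an exploration bonus; a minimal sketch:

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=1.414):
    """UCB1 score used by UCT, the canonical MCTS selection rule:
    exploit the mean playout reward, explore rarely visited children.
    Unvisited children get infinite priority."""
    if child_visits == 0:
        return float("inf")
    return (child_value / child_visits
            + c * math.sqrt(math.log(parent_visits) / child_visits))

def select_child(children, parent_visits):
    """Pick the index of the child maximising UCB1; children is a list of
    (total_reward, visit_count) pairs."""
    scores = [ucb1(v, n, parent_visits) for v, n in children]
    return scores.index(max(scores))
```

The simulation (rollout) policy whose bias the thesis studies supplies the rewards that this selection rule then averages, which is how rollout bias propagates into the tree.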
Du, Yiping. "Efficient Simulation Methods for Estimating Risk Measures." Thesis, 2011. https://doi.org/10.7916/D8J10FQ4.
Full textHan, Baoguang. "Statistical analysis of clinical trial data using Monte Carlo methods." Thesis, 2014. http://hdl.handle.net/1805/4650.
Full text
In medical research, data analysis often requires complex statistical methods for which no closed-form solutions are available. Under such circumstances, Monte Carlo (MC) methods have found many applications. In this dissertation, we propose several novel statistical models in which MC methods are utilized. The first part focuses on semicompeting risks data, in which a non-terminal event is subject to dependent censoring by a terminal event. Based on an illness-death multistate survival model, we propose flexible random-effects models, and further extend the model to a joint modeling setting in which semicompeting risks data and repeated marker data are analyzed simultaneously. Since the proposed methods involve high-dimensional integrations, Bayesian Markov chain Monte Carlo (MCMC) methods were utilized for estimation; the use of Bayesian methods also facilitates the prediction of individual patient outcomes. The proposed methods were demonstrated in both simulation and case studies. The second part focuses on the re-randomization test, a nonparametric method that makes inferences based solely on the randomization procedure used in clinical trials. With this type of inference, a Monte Carlo method is often used to generate the null distribution of the treatment difference. However, an issue was recently discovered when subjects in a clinical trial were randomized with unbalanced treatment allocation to two treatments according to the minimization algorithm, a randomization procedure frequently used in practice: the null distribution of the re-randomization test statistic was found not to be centered at zero, which compromised the power of the test. In this dissertation, we investigate this property of the re-randomization test and propose a weighted re-randomization method to overcome the issue. The proposed method is demonstrated through extensive simulation studies.
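A basic Monte Carlo re-randomization test is straightforward to sketch. The version below re-randomizes by simple random permutation (balanced complete randomization), which is the benign case; the dissertation's contribution addresses unbalanced allocation under minimization, where this naive null distribution is no longer centered at zero:

```python
import random
import statistics

def rerandomization_pvalue(treated, control, n_rerand=5000, seed=11):
    """Two-sided Monte Carlo re-randomization (permutation) p-value for a
    difference in means: re-assign the pooled outcomes at random and count
    how often the re-randomized difference is at least as extreme as the
    observed one."""
    rng = random.Random(seed)
    pooled = list(treated) + list(control)
    n_t = len(treated)
    observed = statistics.mean(treated) - statistics.mean(control)
    count = 0
    for _ in range(n_rerand):
        rng.shuffle(pooled)
        diff = (statistics.mean(pooled[:n_t])
                - statistics.mean(pooled[n_t:]))
        if abs(diff) >= abs(observed) - 1e-12:  # tolerance for float ties
            count += 1
    return count / n_rerand
```

The weighted method of the thesis would replace the uniform shuffle with re-randomizations drawn from the actual minimization procedure, reweighted so the null distribution is correctly centered.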
"Exact simulation of SDE: a closed form approximation approach." 2010. http://library.cuhk.edu.hk/record=b5894499.
Full text
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (p. 94-96).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.iii
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Monte Carlo method in Finance --- p.6
Chapter 2.1 --- Principle of MC and pricing theory --- p.6
Chapter 2.2 --- An illustrative example --- p.9
Chapter 3 --- Discretization method --- p.15
Chapter 3.1 --- The Euler scheme and Milstein scheme --- p.16
Chapter 3.2 --- Convergence of Mean Square Error --- p.19
Chapter 4 --- Quasi Monte Carlo method --- p.22
Chapter 4.1 --- Basic idea of QMC --- p.23
Chapter 4.2 --- Application of QMC in Finance --- p.29
Chapter 4.3 --- Another illustrative example --- p.34
Chapter 5 --- Our Methodology --- p.42
Chapter 5.1 --- Measure decomposition --- p.43
Chapter 5.2 --- QMC in SDE simulation --- p.51
Chapter 5.3 --- Towards a workable algorithm --- p.58
Chapter 6 --- Numerical Result --- p.69
Chapter 6.1 --- Case I Generalized Wiener Process --- p.69
Chapter 6.2 --- Case II Geometric Brownian Motion --- p.76
Chapter 6.3 --- Case III Ornstein-Uhlenbeck Process --- p.83
Chapter 7 --- Conclusion --- p.91
Bibliography --- p.96
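The Euler and Milstein schemes listed in Chapter 3 of the contents above can be sketched for geometric Brownian motion; the parameter values in the usage below are illustrative:

```python
import math
import random

def euler_gbm(s0, mu, sigma, t, n_steps, rng):
    """Euler-Maruyama discretization of dS = mu*S dt + sigma*S dW."""
    dt = t / n_steps
    s = s0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s += mu * s * dt + sigma * s * dw
    return s

def milstein_gbm(s0, mu, sigma, t, n_steps, rng):
    """Milstein scheme: adds the 0.5*sigma^2*S*(dW^2 - dt) correction,
    raising the strong convergence order from 0.5 to 1."""
    dt = t / n_steps
    s = s0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s += (mu * s * dt + sigma * s * dw
              + 0.5 * sigma ** 2 * s * (dw * dw - dt))
    return s
```

Averaging many simulated terminal values approximates E[S_T] = S0*exp(mu*T); for S0 = 1, mu = 0.05, T = 1 both schemes give a mean near 1.051.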
Deng, Jian. "Stochastic collocation methods for aeroelastic system with uncertainty." Master's thesis, 2009. http://hdl.handle.net/10048/557.
Full text
Title from pdf file main screen (viewed on Sept. 3, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science in Applied Mathematics, Department of Mathematical and Statistical Sciences, University of Alberta." Includes bibliographical references.
Saha, Nilanjan. "Methods For Forward And Inverse Problems In Nonlinear And Stochastic Structural Dynamics." Thesis, 2007. http://hdl.handle.net/2005/608.
Full textDonde, Pratik Prakash. "LES/PDF approach for turbulent reacting flows." 2012. http://hdl.handle.net/2152/19481.
Full text