Dissertations / Theses on the topic 'Random walk numerical methods'

Consult the top 40 dissertations / theses for your research on the topic 'Random walk numerical methods.'


1. Gjetvaj, Filip. "Experimental characterization and modeling non-Fickian dispersion in aquifers." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS204/document.

Abstract:
This work aims at modeling hydrodynamic dispersion mechanisms in aquifers. So far, both flow field heterogeneity and mobile-immobile mass transfer have been studied separately for explaining the ubiquitously observed non-Fickian behaviors, but we postulate that both mechanisms contribute simultaneously. Our investigations combine laboratory experiments and pore-scale numerical modeling. The experimental rig was designed to enable push-pull and flow-through tracer tests on glass bead columns and Berea sandstone cores. Modeling consists of solving Stokes flow and solute transport on 3D X-ray microtomography images segmented into three phases: solid, void and microporosity. Transport is modeled using a time domain random walk. Statistical analysis of the flow field emphasizes the importance of the mesh resolution and the inclusion of the microporosity. Results from the simulations show that both the flow field heterogeneity and the diffusive transport in the microporous fraction of the rock contribute to the overall non-Fickian transport behavior observed, for instance, on the breakthrough curves (BTC). These results are supported by our experiments. We conclude that, in general, this dual control must be taken into account, even if these different influences can hardly be distinguished from a qualitative appraisal of the BTC shape, specifically for the low values of the Peclet number that occur under natural conditions. Finally, a 1D up-scaled model is developed in the framework of the continuous time random walk, where the influences of the flow field heterogeneity and mobile-immobile mass transfer are both taken into account using distinct transition time distributions.
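The 1D up-scaled model described in the closing sentence lends itself to a compact particle simulation. The following is a minimal sketch (not the thesis code) of such a continuous time random walk: each jump carries an advective transition time, plus, with some probability, a heavy-tailed immobile time standing in for mobile-immobile mass transfer; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_btc(n_particles=10_000, L=1.0, dx=0.01, v=1.0,
             p_trap=0.1, alpha=1.5, t_min=1e-3):
    """Arrival times at x = L for a 1D CTRW: every jump of length dx takes
    an advective time dx/v; with probability p_trap the particle is also
    delayed by a Pareto(alpha) immobile time (heavy tail -> non-Fickian BTC)."""
    n_jumps = int(np.ceil(L / dx))
    t = np.zeros(n_particles)
    for _ in range(n_jumps):
        t += dx / v                                    # advective transition time
        trapped = rng.random(n_particles) < p_trap     # mobile-immobile exchange
        t[trapped] += t_min * (1.0 - rng.random(trapped.sum())) ** (-1.0 / alpha)
    return t

times = ctrw_btc()
print("median arrival:", np.median(times), "- 99% quantile:", np.quantile(times, 0.99))
```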

2. Wu, Tao. "Higher-order Random Walk Methods for Data Analysis." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10790747.

Abstract:

Markov random walk models are powerful analytical tools for multiple areas in machine learning, numerical optimization and data mining tasks. The key assumption of a first-order Markov chain is memorylessness, which restricts the dependence of the transition distribution to the current state only. However, in many applications this assumption is not appropriate. We propose a set of higher-order random walk techniques and discuss their applications to tensor co-clustering, user trail modeling, and solving linear systems. First, we develop a new random walk model that we call the super-spacey random surfer, which simultaneously clusters the rows, columns, and slices of a nonnegative three-mode tensor. This algorithm generalizes to tensors with any number of modes. We partition the tensor by minimizing the exit probability between clusters when the super-spacey random walk is at stationarity. The second application is user trail modeling, where user trails record sequences of activities when individuals interact with the Internet and the world. We propose the retrospective higher-order Markov process as a two-step process: first choosing a state from the history and then transitioning as a first-order chain conditional on that state. This way the total number of parameters is restricted, and thus the model is protected from overfitting. Lastly, we propose to use a time-inhomogeneous Markov chain to approximate the solution of a linear system. Multiple simulations of the random walk are conducted to approximate the solution. By allowing the random walk to transition based on multiple matrices, we decrease the variance of the simulations, and thus increase the speed of the solver.
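The retrospective higher-order Markov process described above admits a very short sketch: step one picks which past state to condition on, step two performs an ordinary first-order transition. The weights `w` and transition matrix `P` below are toy placeholders, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def retrospective_step(history, P, w):
    """One step of a retrospective higher-order Markov process: first pick
    a past state from the history with weights w (most recent first), then
    transition as a first-order chain with row-stochastic matrix P."""
    k = min(len(history), len(w))
    probs = np.asarray(w[:k], dtype=float)
    probs /= probs.sum()
    idx = rng.choice(k, p=probs)          # step 1: choose a past state
    s = history[-(idx + 1)]
    return rng.choice(P.shape[1], p=P[s]) # step 2: first-order transition

# toy 3-state chain, conditioning on up to 2 steps of history
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
history = [0]
for _ in range(10):
    history.append(retrospective_step(history, P, w=[0.7, 0.3]))
print(history)
```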


3. Fakhereddine, Rana. "Méthodes de Monte Carlo stratifiées pour l'intégration numérique et la simulation numériques." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM047/document.

Abstract:
Monte Carlo (MC) methods are numerical methods using random numbers to solve, on computers, problems from applied sciences and engineering. One estimates a quantity by repeated evaluations using N values; the error of the method is approximated through the variance of the estimator. In the present work, we analyze variance reduction methods and test their efficiency for numerical integration and for solving differential or integral equations. First, we present stratified MC methods and the Latin Hypercube Sampling (LHS) technique. Among stratification strategies, we focus on the simple approach (MCS): the unit hypercube I^s := [0,1)^s is divided into N subcubes having the same measure, and one random point is chosen in each subcube. We analyze the variance of the method for the problem of numerical quadrature. The case of the evaluation of the measure of a subset of I^s is particularly detailed. The variance of the MCS method may be bounded by O(1/N^(1+1/s)). The results of numerical experiments in dimensions 2, 3, and 4 show that the upper bounds are tight. We next propose a hybrid method between MCS and LHS that has properties of both approaches, with one random point in each subcube and such that the projections of the points on each coordinate axis are also evenly distributed: one projection in each of the N subintervals that uniformly divide the unit interval I := [0,1). We call this technique Sudoku Sampling (SS). Conducting the same analysis as before, we show that the variance of the SS method is bounded by O(1/N^(1+1/s)); the order of the bound is validated through the results of numerical experiments in dimensions 2, 3, and 4. Next, we present an approach to the random walk method using the variance reduction techniques previously analyzed. We propose an algorithm for solving the diffusion equation with a constant or spatially varying diffusion coefficient. One uses particles sampled from the initial distribution; they are subject to a Gaussian move in each time step. The particles are renumbered according to their positions in every step, and the random numbers which give the displacements are replaced by the stratified points used above. The improvement brought by this technique is evaluated in numerical experiments. An analogous approach is finally used for numerically solving the coagulation equation; this equation models the evolution of the sizes of particles that may agglomerate. The particles are first sampled from the initial size distribution. A time step is fixed and, in every step and for each particle, a coalescence partner is chosen and a random number decides if coalescence occurs. If the particles are ordered by increasing size in every time step and if the random numbers are replaced by stratified points, a variance reduction is observed compared to the results of the usual MC algorithm.
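The MCS stratification described above is easy to state in code. The sketch below is an illustration (not the thesis implementation, and practical only for small s, since N = n^s) of the case the abstract singles out: estimating the measure of a subset of I^s with one uniform point per subcube.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

def mcs_estimate(f, s, n):
    """Simple stratified MC (MCS): split I^s = [0,1)^s into N = n**s
    subcubes of equal measure, draw one uniform point in each, and
    average f over the N points."""
    cells = product(range(n), repeat=s)    # subcube lower corners, scaled by 1/n
    pts = np.array([(np.array(c) + rng.random(s)) / n for c in cells])
    return pts.shape[0], f(pts).mean()

# indicator of a quarter disc in dimension 2: exact measure is pi/4
f = lambda x: (x[:, 0] ** 2 + x[:, 1] ** 2 <= 1.0).astype(float)
N, est = mcs_estimate(f, s=2, n=32)
print(N, est, np.pi / 4)
```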

4. Yu, Wei, and 余韡. "Reverse Top-k search using random walk with restart." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/197515.

Abstract:
With the increasing popularity of social networking applications, large volumes of graph data are becoming available. Large graphs are also derived by structure extraction from relational, text, or scientific data (e.g., relational tuple networks, citation graphs, ontology networks, protein-protein interaction graphs). Node-to-node proximity is the key building block for many graph-based applications that search or analyze the data. Among various proximity measures, random walk with restart (RWR) is widely adopted because of its ability to consider the global structure of the whole network. Although RWR-based similarity search has been well studied before, there is no prior work on reverse top-k proximity search in graphs based on RWR. We discuss the applicability of this query and show that the direct application of existing methods for RWR-based similarity search to solve reverse top-k queries has very high computational and storage demands. To address this issue, we propose an indexing technique, paired with an on-line reverse top-k search algorithm. In the indexing step, we compute from the graph G a graph index, which is based on a K × |V| matrix, containing in each column v the K largest approximate proximity values from v to any other node in G. K is application-dependent and represents the highest value of k in a practical reverse top-k query. At each column v of the index, the approximate values are lower bounds of the K largest proximity values from v to all other nodes. Given the graph index and a reverse top-k query q (k ≤ K), we prove that the exact proximities from any node v to query q can be efficiently computed by applying the power method. By comparing these with the corresponding lower bounds taken from the k-th row of the graph index, we are able to determine which nodes are certainly not in the reverse top-k result of q. For some of the remaining nodes, we may also be able to determine that they are certainly in the reverse top-k result of q, based on derived upper bounds for the k-th largest proximity value from them. Finally, for any candidate that remains, we progressively refine its approximate proximities, until based on its lower or upper bound it can be determined not to be or to be in the result. The proximities refined during a reverse top-k query are used to update the graph index, making its values progressively more accurate for future queries. Our experimental evaluation shows that our technique is efficient and has manageable storage requirements even when applied on very large graphs. We also show the effectiveness of the reverse top-k search in the scenarios of spam detection and determining the popularity of authors.
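The power-method computation of RWR proximities that the search algorithm relies on can be sketched as follows. The restart probability `c` and the toy graph are illustrative assumptions, and the row-stochastic convention is one of several used in the literature; isolated (zero-degree) nodes are assumed absent.

```python
import numpy as np

def rwr(A, q, c=0.15, tol=1e-10, max_iter=1000):
    """Random-walk-with-restart scores w.r.t. query node q via the power
    method: p <- (1 - c) * W^T p + c * e_q, where W is the row-normalized
    adjacency matrix and c is the restart probability."""
    W = A / A.sum(axis=1, keepdims=True)        # row-stochastic transitions
    p = np.full(A.shape[0], 1.0 / A.shape[0])
    e = np.zeros(A.shape[0]); e[q] = 1.0
    for _ in range(max_iter):
        p_new = (1.0 - c) * W.T @ p + c * e
        if np.abs(p_new - p).sum() < tol:
            break
        p = p_new
    return p

# toy 4-node undirected graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(rwr(A, q=0))
```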

5. Costaouec, Ronan. "Numerical methods for homogenization : applications to random media." PhD thesis, Université Paris-Est, 2011. http://pastel.archives-ouvertes.fr/pastel-00674957.

Abstract:
In this thesis we investigate numerical methods for the homogenization of materials whose structures, at fine scales, are characterized by random heterogeneities. Under appropriate hypotheses, the effective properties of such materials are given by closed formulas. However, in practice the computation of these properties is a difficult task because it involves solving partial differential equations with stochastic coefficients that are additionally posed on the whole space. In this work, we address this difficulty in two different ways. The standard discretization techniques lead to random approximate effective properties. In Part I, we aim at reducing their variance, using a well-known variance reduction technique that has already been used successfully in other domains. The works of Part II focus on the case when the material can be seen as a small random perturbation of a periodic material. We then show, both numerically and theoretically, that in this case computing the effective properties is much less costly than in the general case.
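The abstract leaves the variance reduction technique unnamed. As a generic illustration of the idea (not necessarily the technique of the thesis), here is antithetic sampling on a toy one-dimensional expectation, a stand-in for the random approximate effective properties mentioned above.

```python
import numpy as np

rng = np.random.default_rng(3)

def antithetic_mean(f, n):
    """Antithetic variates: pair each uniform draw U with 1 - U and average
    f over the pair; the variance drops whenever f is monotone."""
    U = rng.random(n // 2)
    return 0.5 * (f(U) + f(1.0 - U)).mean()

f = lambda u: np.exp(u)                    # toy integrand, monotone in u
plain = f(rng.random(10_000)).mean()       # plain MC with the same budget
anti = antithetic_mean(f, 10_000)
print(plain, anti, np.e - 1.0)             # exact value of E[e^U]
```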

6. Costaouec, Ronan. "Numerical methods for homogenization : applications to random media." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1012/document.

Abstract:
In this thesis we investigate numerical methods for the homogenization of materials whose structures, at fine scales, are characterized by random heterogeneities. Under appropriate hypotheses, the effective properties of such materials are given by closed formulas. However, in practice the computation of these properties is a difficult task because it involves solving partial differential equations with stochastic coefficients that are additionally posed on the whole space. In this work, we address this difficulty in two different ways. The standard discretization techniques lead to random approximate effective properties. In Part I, we aim at reducing their variance, using a well-known variance reduction technique that has already been used successfully in other domains. The works of Part II focus on the case when the material can be seen as a small random perturbation of a periodic material. We then show, both numerically and theoretically, that in this case computing the effective properties is much less costly than in the general case.

7. Van Vleck, Erik S. "Random and numerical aspects of the shadowing lemma." Thesis, Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/29357.


8. Coskun, Mustafa. "ALGEBRAIC METHODS FOR LINK PREDICTION IN VERY LARGE NETWORKS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1499436242956926.


9. Kolgushev, Oleg. "Influence of Underlying Random Walk Types in Population Models on Resulting Social Network Types and Epidemiological Dynamics." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc955128/.

Abstract:
Epidemiologists rely on human interaction networks for determining the states and dynamics of disease propagation in populations. However, such networks are empirical snapshots of the past. It would be of great benefit if human interaction networks could be statistically predicted and dynamically created while an epidemic is in progress. We develop an application framework for the generation of human interaction networks and the running of epidemiological processes, utilizing research on human mobility patterns and agent-based modeling. The interaction networks are dynamically constructed by incorporating different types of random walks and human rules of engagement. We explore the characteristics of the created networks and compare them with known theoretical and empirical graphs. The dependencies of epidemic dynamics and their outcomes on the patterns and parameters of human motion and motives are examined and presented through this research. This work specifically describes how the types and parameters of random walks define the properties of the generated graphs. We show that some configurations of the system of agents in random walk can produce network topologies with properties similar to small-world networks. Our goal is to find sets of mobility patterns that lead to empirical-like networks. The possibility of phase transitions in the graphs due to changes in the parameterization of agent walks is the focus of this research, as this knowledge can lead to the possibility of disrupting disease diffusion in populations. This research should facilitate the work of public health researchers in predicting the magnitude of an epidemic and estimating the resources required for mitigation.

10. Matteuzzi, Tommaso. "Network diffusion methods for omics big bio data analytics and interpretation with application to cancer datasets." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13660/.

Abstract:
In current biomedical research, a fundamental step toward understanding the mechanisms at the root of a disease is the identification of disease modules, i.e., those subnetworks of the interactome (the network of protein-protein interactions) with a high number of genetic alterations. However, the incompleteness of the network and the high variability of the altered genes make the solution of this problem non-trivial. Physical methods that exploit the properties of diffusion processes on networks, which are the subject of this thesis, are those achieving the best performance. In the first part of my work, I investigated the theory of diffusion and random walks on networks, finding interesting relations with clustering techniques and with other physical models whose dynamics are described by the Laplacian matrix. I then implemented a network diffusion technique and applied it to gene expression and somatic mutation data for three different types of cancer. The method is organized in two parts. After selecting a subset of the interactome nodes, we associate with each of them an initial quantity of information reflecting the degree of alteration of the gene. The diffusion algorithm propagates the initial information through the network, reaching the stationary state after a transient. At this point, the amount of fluid at each node is used to build a ranking of the genes. In the second part, the disease modules are identified through a network resampling procedure. The analysis allowed us to identify a substantial number of genes already known in the literature on the cancer types studied, as well as a set of other related genes that could be interesting candidates for further investigation. Finally, through a Gene Set Enrichment procedure, we tested the correlation of the identified modules with known biological pathways.
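The propagation step described above (spread initial alteration scores until stationarity, then rank genes by the amount of "fluid" per node) can be sketched with the standard damped-diffusion iteration. The 5-gene network, damping factor and normalization below are illustrative assumptions, not the thesis setup.

```python
import numpy as np

def diffuse_scores(A, p0, alpha=0.7, tol=1e-12):
    """Network propagation: iterate p <- alpha * W p + (1 - alpha) * p0
    (W = column-normalized adjacency) until the stationary state; the
    stationary amount of 'fluid' at each node gives the gene ranking."""
    W = A / A.sum(axis=0, keepdims=True)
    p = p0.copy()
    while True:
        p_new = alpha * W @ p + (1.0 - alpha) * p0
        if np.abs(p_new - p).max() < tol:
            return p_new
        p = p_new

# hypothetical 5-gene interactome; gene 0 carries the only alteration
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
p0 = np.array([1.0, 0, 0, 0, 0])
scores = diffuse_scores(A, p0)
print(np.argsort(-scores))                 # ranking of the genes
```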

11. Heiderich, Anne. "Diffusion multiple en milieu non linéaire ou anisotrope." Université Joseph Fourier (Grenoble), 1995. http://www.theses.fr/1995GRE10200.

Abstract:
This thesis is composed of three parts. 1) An analytical study of the coherent backscattering of light in a disordered, weakly nonlinear medium: using a self-consistent equation of Bethe-Salpeter type for the mean intensity, we compute the backscattering cone in a nonlinear medium. The domain of validity of the theory is obtained and discussed. In addition, we compute the backscattering cone for a thin nonlinear slab immersed in a linear medium. 2) An analytical study of the mean field in liquid crystals in an oriented nematic phase, in the presence of thermal fluctuations of the molecular orientation (the solution of the Dyson equation): the mass operator as a function of the wave vector is computed first in a scalar approximation, then for a vector field. (Only its imaginary part as a function of direction was previously known.) The influence of long-range fluctuations is discussed. We then obtain the renormalized dispersion laws, the scattering length, the spectral function and the mean field. 3) A Monte Carlo numerical simulation of the mean intensity in these materials (the solution of the Bethe-Salpeter equation): we wrote a Monte Carlo simulation code in order to study radiative transfer in nematics. It allows us to simulate the polarization of light in the multiple scattering regime, the transmission coefficient, the spatially anisotropic backscattering cone, the time-of-flight distribution and the diffusion coefficient tensor. The results are discussed and partly compared with an analytical calculation carried out by van Tiggelen.

12. Hoel, Håkon. "Complexity and Error Analysis of Numerical Methods for Wireless Channels, SDE, Random Variables and Quantum Mechanics." Doctoral thesis, KTH, Numerisk analys, NA (stängd 2012-06-30), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-94150.

Abstract:
This thesis consists of four papers which consider different aspects of stochastic process modeling, error analysis, and minimization of computational cost. In Paper I, we construct a Multipath Fading Channel (MFC) model for wireless channels with noise introduced through scatterers flipping on and off. By coarse-graining the MFC model, a Gaussian process channel model is developed. Complexity and accuracy comparisons of the models are conducted. In Paper II, we generalize a multilevel Forward Euler Monte Carlo method introduced by Mike Giles for the approximation of expected values depending on solutions of Itô stochastic differential equations. Giles' work proposed and analyzed a Forward Euler Multilevel Monte Carlo (MLMC) method based on realizations on a hierarchy of uniform time discretizations and a coarse-graining-based control variates idea to reduce the computational cost required by a standard single-level Forward Euler Monte Carlo method. This work is an extension of Giles' MLMC method from uniform to adaptive time grids. It has the same improvement in computational cost and is applicable to a larger set of problems. In Paper III, we consider the problem of estimating the mean of a random variable by a sequential stopping rule Monte Carlo method. The performance of a typical second-moment-based sequential stopping rule is shown to be unreliable, both by numerical examples and by analytical arguments. Based on analysis and approximation of error bounds, we construct a higher-moment-based stopping rule which performs more reliably. In Paper IV, Born-Oppenheimer dynamics is shown to provide an accurate approximation of time-independent Schrödinger observables for a molecular system with an electron spectral gap, in the limit of a large ratio of nuclei and electron masses, without assuming that the nuclei are localized to vanishing domains. The derivation, based on a Hamiltonian system interpretation of the Schrödinger equation and stability of the corresponding hitting time Hamilton-Jacobi equation for non-ergodic dynamics, bypasses the usual separation of nuclei and electron wave functions, includes caustic states, and gives a different perspective on the Born-Oppenheimer approximation, Schrödinger Hamiltonian systems and numerical simulation in molecular dynamics modeling at constant energy.
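Paper II builds on Giles' Forward Euler MLMC estimator. A minimal uniform-grid version (the starting point being generalized, not the adaptive method of the paper) looks as follows for a toy geometric-Brownian-motion SDE; sample counts and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def mlmc_level(level, n_paths, T=1.0, mu=0.05, sigma=0.2, x0=1.0, M=2):
    """One MLMC correction term E[P_l - P_{l-1}] for the payoff P = X(T) of
    the toy SDE dX = mu X dt + sigma X dW: a fine Forward Euler path with
    step T/M^l, coupled to a coarse path driven by the summed increments."""
    nf = M ** level
    dt = T / nf
    Xf = np.full(n_paths, x0)
    Xc = np.full(n_paths, x0)
    dWc = np.zeros(n_paths)
    for n in range(nf):
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        Xf += mu * Xf * dt + sigma * Xf * dW
        dWc += dW
        if level > 0 and (n + 1) % M == 0:    # coarse step every M fine steps
            Xc += mu * Xc * (M * dt) + sigma * Xc * dWc
            dWc[:] = 0.0
    return Xf.mean() if level == 0 else (Xf - Xc).mean()

# telescoping sum E[P_L] = sum_l E[P_l - P_{l-1}]; fewer samples on finer levels
estimate = sum(mlmc_level(l, n_paths=10_000 // 4 ** l + 100) for l in range(5))
print(estimate, np.exp(0.05))                 # exact E[X(1)] for this toy SDE
```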


13. Alves Paladim, Daniel. "Multiscale numerical methods for the simulation of diffusion processes in random heterogeneous media with guaranteed accuracy." Thesis, Cardiff University, 2016. http://orca.cf.ac.uk/100344/.

Abstract:
The possibility of combining several constituents to obtain properties that cannot be obtained with any of them alone explains the growing proliferation of composites in mechanical structures. However, the modelling of such heterogeneous systems poses extreme challenges to computational mechanics. The direct simulation of such systems gives rise to computational models that are extremely expensive, if not impossible, to solve. Through homogenisation, the excessive computational burden is eliminated by separating the two scales (the scale of the constituents and the scale of the structure). Nonetheless, the hypotheses under which homogenisation applies are usually violated, and traditional homogenisation schemes provide no means to quantify this error. The first contribution of this thesis is the development of a method to quantify the homogenisation error. In this method, the heterogeneous medium is represented by a stochastic partial differential equation where each realisation corresponds to a particle layout. This representation allows us to derive guaranteed error estimates at a low computational cost. The effectivity (ratio between true error and estimate) is characterised, and a relation is established between the error estimates and classical results in micromechanics. Moreover, a strategy to reduce the homogenisation error is presented. The second contribution of this thesis is the development of a numerical method with guaranteed error bounds that directly approximates the solution of heterogeneous models by using shape functions that incorporate information from the microscale. The construction of those shape functions resembles the methods of computational homogenisation, where microscale boundary value problems are solved to obtain homogenised properties.

14. Sprungk, Björn. "Numerical Methods for Bayesian Inference in Hilbert Spaces." Doctoral thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-226748.

Abstract:
Bayesian inference occurs when prior knowledge about uncertain parameters in mathematical models is merged with new observational data related to the model outcome. In this thesis we focus on models given by partial differential equations where the uncertain parameters are coefficient functions belonging to infinite dimensional function spaces. The result of the Bayesian inference is then a well-defined posterior probability measure on a function space describing the updated knowledge about the uncertain coefficient. For decision making and post-processing it is often required to sample or integrate with respect to the posterior measure. This calls for sampling or numerical methods which are suitable for infinite dimensional spaces. In this work we focus on Kalman filter techniques based on ensembles or polynomial chaos expansions, as well as Markov chain Monte Carlo methods. We analyze the Kalman filters by proving convergence and discussing their applicability in the context of Bayesian inference. Moreover, we develop and study an improved dimension-independent Metropolis-Hastings algorithm. Here, we show geometric ergodicity of the new method by a spectral gap approach using a novel comparison result for spectral gaps. Besides that, we observe and further analyze the robustness of the proposed algorithm with respect to decreasing observational noise. This robustness is another desirable property of numerical methods for Bayesian inference. The work concludes with the application of the discussed methods to a real-world groundwater flow problem, illustrating, in particular, the Bayesian approach for uncertainty quantification in practice.
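The thesis develops an improved dimension-independent Metropolis-Hastings algorithm; the standard baseline in this family is the preconditioned Crank-Nicolson (pCN) proposal, sketched here in finite dimensions for a Gaussian prior N(0, C). The toy likelihood and step size `beta` are illustrative assumptions, not the thesis's improved method.

```python
import numpy as np

rng = np.random.default_rng(5)

def pcn_mh(log_likelihood, C_sqrt, n_samples, beta=0.2):
    """Preconditioned Crank-Nicolson Metropolis-Hastings for a posterior
    with Gaussian prior N(0, C): propose v = sqrt(1 - beta^2) u + beta xi,
    xi ~ N(0, C); the acceptance ratio then involves only the likelihood,
    which is what makes the sampler dimension-independent."""
    d = C_sqrt.shape[0]
    u = C_sqrt @ rng.standard_normal(d)
    ll = log_likelihood(u)
    samples = []
    for _ in range(n_samples):
        v = np.sqrt(1.0 - beta ** 2) * u + beta * (C_sqrt @ rng.standard_normal(d))
        ll_v = log_likelihood(v)
        if np.log(rng.random()) < ll_v - ll:
            u, ll = v, ll_v
        samples.append(u)
    return np.asarray(samples)

# toy inverse problem: observe y = u[0] + noise, prior N(0, I_10)
y = 0.7
ll = lambda u: -0.5 * (y - u[0]) ** 2 / 0.1 ** 2
chain = pcn_mh(ll, C_sqrt=np.eye(10), n_samples=5000)
print(chain[:, 0].mean())                 # posterior mean of the first mode
```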

15. Sprungk, Björn. "Numerical Methods for Bayesian Inference in Hilbert Spaces." Doctoral thesis, Technische Universität Chemnitz, 2017. https://monarch.qucosa.de/id/qucosa%3A20754.

Abstract:
Bayesian inference occurs when prior knowledge about uncertain parameters in mathematical models is merged with new observational data related to the model outcome. In this thesis we focus on models given by partial differential equations where the uncertain parameters are coefficient functions belonging to infinite dimensional function spaces. The result of the Bayesian inference is then a well-defined posterior probability measure on a function space describing the updated knowledge about the uncertain coefficient. For decision making and post-processing it is often required to sample or integrate with respect to the posterior measure. This calls for sampling or numerical methods which are suitable for infinite dimensional spaces. In this work we focus on Kalman filter techniques based on ensembles or polynomial chaos expansions, as well as Markov chain Monte Carlo methods. We analyze the Kalman filters by proving convergence and discussing their applicability in the context of Bayesian inference. Moreover, we develop and study an improved dimension-independent Metropolis-Hastings algorithm. Here, we show geometric ergodicity of the new method by a spectral gap approach using a novel comparison result for spectral gaps. Besides that, we observe and further analyze the robustness of the proposed algorithm with respect to decreasing observational noise. This robustness is another desirable property of numerical methods for Bayesian inference. The work concludes with the application of the discussed methods to a real-world groundwater flow problem, illustrating, in particular, the Bayesian approach for uncertainty quantification in practice.

16. Huschto, Tony. "Numerical Methods for Random Parameter Optimal Control and the Optimal Control of Stochastic Differential Equations." Betreuer: Sebastian Sager. Heidelberg: Universitätsbibliothek Heidelberg, 2014. http://d-nb.info/118030067X/34.


17. Robertson, Blair Lennon. "Direct Search Methods for Nonsmooth Problems using Global Optimization Techniques." Thesis, University of Canterbury. Mathematics and Statistics, 2010. http://hdl.handle.net/10092/5060.

Abstract:
This thesis considers the practical problem of constrained and unconstrained local optimization. This subject has been well studied when the objective function f is assumed to be smooth. However, nonsmooth problems occur naturally and frequently in practice. Here f is assumed to be nonsmooth or discontinuous, without forcing smoothness assumptions near, or at, a potential solution. Various methods have been presented by others to solve nonsmooth optimization problems; however, only partial convergence results are possible for these methods. In this thesis, an optimization method which uses a series of local and localized global optimization phases is proposed. The local phase searches for a local minimum and gives the method its numerical performance on parts of f which are smooth. The localized global phase exhaustively searches for points of descent in a neighborhood of cluster points. It is the localized global phase which provides strong theoretical convergence results on nonsmooth problems. Algorithms are presented for solving bound constrained, unconstrained and constrained nonlinear nonsmooth optimization problems. These algorithms use direct search methods in the local phase, as they can be applied directly to nonsmooth problems because gradients are not explicitly required. The localized global optimization phase uses a new partitioning random search algorithm to direct random sampling into promising subsets of ℝⁿ. The partition is formed using classification and regression trees (CART) from statistical pattern recognition. The CART partition defines desirable subsets where f is relatively low, based on previous sampling, from which further samples are drawn directly. For each algorithm, convergence to an essential local minimizer of f is demonstrated under mild conditions: that is, a point x* for which the set of all feasible points with lower f values has Lebesgue measure zero for all sufficiently small neighborhoods of x*. Stopping rules are derived for each algorithm, giving practical convergence to estimates of essential local minimizers. Numerical results are presented on a range of nonsmooth test problems in 2 to 10 dimensions, showing the methods are effective in practice.
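The two-phase structure (a direct-search local phase alternating with randomized sampling of promising subsets) can be caricatured in a few lines. The sketch below substitutes a simple shrinking box around the incumbent for the CART-built partition, so it only illustrates the control flow, not the actual algorithm of the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)

def local_global_min(f, x0, iters=200, h0=0.5):
    """Caricature of the two-phase method: a compass direct search (local
    phase) alternating, when polling fails, with uniform random sampling in
    a box around the incumbent -- a crude stand-in for the CART-defined
    'promising subsets' of the localized global phase."""
    x = np.asarray(x0, dtype=float)
    fx, h = f(x), h0
    for _ in range(iters):
        improved = False
        for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):  # local poll
            y = x + h * d
            fy = f(y)
            if fy < fx:
                x, fx, improved = y, fy, True
                break
        if not improved:                      # localized global phase
            Y = x + h * (2.0 * rng.random((20, x.size)) - 1.0)
            fY = np.array([f(y) for y in Y])
            if fY.min() < fx:
                x, fx = Y[fY.argmin()], fY.min()
            else:
                h *= 0.5                      # no descent found: refine
    return x, fx

f = lambda x: abs(x[0]) + 0.5 * abs(x[1] - 1.0)   # nonsmooth test function
print(local_global_min(f, [2.0, -1.0]))
```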

18. Asai, Yusuke. "Numerical methods for random ordinary differential equations and their applications in biology and medicine." Gutachter: Peter E. Kloeden, Andreas Neuenkirch. Frankfurt am Main: Universitätsbibliothek Johann Christian Senckenberg, 2016. http://d-nb.info/1100523782/34.


19. Kale, Hikmet Emre. "Segmentation Of Human Facial Muscles On Ct And Mri Data Using Level Set And Bayesian Methods." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613352/index.pdf.

Abstract:
Medical image segmentation is a challenging problem and is studied widely. In this thesis, the main goal is to develop automatic segmentation techniques for the human mimic muscles and to compare them with ground truth data in order to determine the method that provides the best segmentation results. The segmentation methods are based on Bayesian models with Markov Random Fields (MRF) and on Level Set (Active Contour) models. The proposed segmentation methods are multi-step processes, including a preprocessing step, the main muscle segmentation step and a postprocessing step, and are applied to three types of data: Magnetic Resonance Imaging (MRI) data, Computerized Tomography (CT) data and unified data, in which case information coming from both modalities is utilized. The methods are applied both to three-dimensional (3D) and two-dimensional (2D) data. Simulated data and two patient datasets are used for the tests. The patient data results are compared statistically with ground truth data labeled by an expert radiologist.

20. Trias Mansilla, Daniel. "Analysis and Simulation of Transverse Random Fracture of Long Fibre Reinforced Composites." Doctoral thesis, Universitat de Girona, 2005. http://hdl.handle.net/10803/7762.

Abstract:
This thesis proposes a methodology for the probabilistic simulation of the transverse failure of Carbon Fibre Reinforced Polymers (CFRP) by analyzing the random distribution of the fibres within the composite. The first chapters are devoted to a state-of-the-art review on the modelling of random materials, the computation of effective properties and the transverse failure of fibre reinforced polymers.
The first step in the proposed methodology is the definition of a Statistical Representative Volume Element (SRVE). This SRVE has to satisfy criteria based on the analysis of the volume fraction, the effective properties, the Hill condition, the statistics of the stress and strain components, the probability density function of the stress and strain components, and the inter-fibre distance statistical distributions. Once this SRVE has been determined, a comparison between a periodic model and a random model is performed to quantitatively analyze the differences between the results they provide.
A methodology is also defined for the statistical analysis of the fibre distribution within the composite from digital images of the transverse section. This analysis is performed for four different materials.
Finally, a two-scale computational method for the transverse failure of unidirectional laminae is proposed. This method is able to provide probability density functions of the mechanical variables in the composite. Some applications and possibilities of the method are given, and the simulation results are compared with experimental tests.

21. Oumouni, Mestapha. "Analyse numérique de méthodes performantes pour les EDP stochastiques modélisant l'écoulement et le transport en milieux poreux." PhD thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00904512.

Abstract:
This work presents the development and analysis of efficient deterministic and probabilistic numerical approaches for partial differential equations with random coefficients and data. We consider the steady flow problem with random data. A projection method in the one-dimensional case is presented, allowing an efficient computation of the mean of the solution. We use the anisotropic sparse-grid collocation method. First, an error indicator satisfying an upper bound on the error is introduced; it allows the anisotropy weights of the method to be computed. Then, we prove an improved a priori error estimate for the method. It confirms the efficiency of the method in comparison with Monte Carlo, and it is used to accelerate the method through Richardson extrapolation. We also present a numerical analysis of a probabilistic method for quantifying the migration of a contaminant in a random medium. We consider the flow problem coupled with the advection-diffusion equation, where we are interested in the mean extent and dispersion of the solute. The flow model is discretized by a mixed finite element method; the solute concentration is the density of the solution of a stochastic differential equation, which is discretized by an Euler scheme. Finally, we present an explicit formula for the dispersion and optimal a priori error estimates.
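The probabilistic part of the work represents the solute concentration as the density of an SDE discretized by an Euler scheme. A minimal sketch of that building block (with illustrative constant coefficients, not the coupled random flow field of the thesis) is:

```python
import numpy as np

rng = np.random.default_rng(7)

def euler_plume(v, D, n_particles=20_000, T=1.0, n_steps=200):
    """Euler scheme for the SDE dX = v(X) dt + sqrt(2 D) dW whose density
    is the solute concentration; the extent and dispersion of the plume
    are then read off the spatial moments of the particle cloud."""
    dt = T / n_steps
    X = np.zeros(n_particles)                  # point injection at x = 0
    for _ in range(n_steps):
        X += v(X) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
    mean = X.mean()
    spread = ((X - mean) ** 2).mean()          # second centred moment
    return mean, spread

mean, spread = euler_plume(v=lambda x: 1.0 + 0 * x, D=0.01)
print(mean, spread, 2 * 0.01 * 1.0)            # uniform flow: spread ~ 2 D T
```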

22. Oukili, Hamza. "Flow and transport in complex porous media : particle methods." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0056.

Abstract:
Particle methods have been extensively used for modeling transport problems in porous soils, aquifers, and reservoirs. They reduce or avoid some of the problems of Eulerian methods, e.g., instabilities, excessive artificial diffusion, mass balance errors, and/or oscillations that could lead to negative concentrations. This thesis develops a new class of gridless Lagrangian particle methods for modeling flow and transport phenomena in complex porous media with heterogeneities and discontinuities. Firstly, stochastic processes are reviewed in relation to particle positions X(t) and to the corresponding macroscopic Advection-Diffusion Equation (ADE). This review leads to the conditions required for the Probability Density Function (PDF) of X(t) to satisfy the Fokker-Planck equation (and the ADE). However, one of these conditions is the differentiability of the transport coefficients: discontinuities are therefore difficult to treat, particularly a discontinuous diffusion D(x) and porosity θ(x). In the literature on particle random walks, the methods used to handle discontinuous diffusion required excessively small time steps, leading to inefficient algorithms. In this study, we propose a novel approach without restrictions on the time step size. The novel RWPT (Random Walk Particle Tracking) algorithms proposed here are discrete in time and continuous in space (gridless). They are based on an adaptive "Stop&Go" time-stepping, combined with partial reflection/refraction schemes, and extended with three new concepts: negative-mass particles, adaptive-mass particles, and "homing" particles. To test the new Stop&Go RWPT schemes in infinite domains, we develop analytical and semi-analytical solutions for diffusion in the presence of multiple interfaces (discontinuous multi-layered medium) in infinite domains. The results show that the proposed Stop&Go RWPT schemes (with adaptive, negative, or homing particles) fit the semi-analytical solutions extremely well, even for very high contrasts in transport properties and even in the neighborhood of the interfaces. The schemes provide a correct diffusive solution in only a few macro-steps (macroscopic time steps), with a precision that depends only on the number of particles, and not on the macro-step. The algorithms are then extended from infinite to semi-infinite and finite domains. Dirichlet conditions are particularly difficult to implement in particle methods; this thesis therefore proposes different methods for implementing Dirichlet boundary conditions with the "discontinuous" RWPT algorithm, including an algorithm to solve diffusion equations semi-analytically in heterogeneous semi-infinite and finite domains with Dirichlet boundary conditions. The RWPT Dirichlet methods are then checked analytically and verified for various configurations. Finally, the RWPT method is applied to study diffusion at different scales in 2D composite media (grain/pore systems). A zero-flux condition is assumed locally at the grain/pore interfaces. At the macro-scale, diffusion occurs in an equivalent effective homogeneous medium with macroscopic parameters (porosity and effective diffusion coefficients) obtained from the temporal evolution of second-order moments. The RWPT algorithm is then applied to more complex geometries of grains and pores. Different configurations or structures at the micro-scale are chosen in order to obtain composite isotropic media at the macro-scale with different porosities. Then, by choosing elongated micro-structures, anisotropy effects emerge at the macroscopic level. Effective macro-scale properties (porosities, effective diffusion tensors, tortuosities) are calculated using the second-order moments. The different methods proposed in this thesis can be used for different problems, since each has its drawbacks and advantages. The schemes proposed seem promising with a view to extensions towards more complex 3D geometries.
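A flavor of the partial reflection/refraction idea at a diffusion discontinuity can be given in 1D. The sketch below uses one published Uffink-type rule (equal porosities assumed): a crossing particle is sent to side i with probability proportional to sqrt(Di), and the overshoot is rescaled with the target-side coefficient. It is an illustration of the class of schemes discussed, not the thesis's Stop&Go algorithm.

```python
import numpy as np

rng = np.random.default_rng(8)

def rwpt_interface(n=50_000, steps=400, dt=1e-4, D1=1e-2, D2=1e-3):
    """1D random walk across a diffusion discontinuity at x = 0 (D1 for
    x < 0, D2 for x >= 0). When a Gaussian move would cross the interface,
    the particle lands on side i with probability proportional to sqrt(Di),
    at a distance |overshoot| rescaled with the target-side coefficient."""
    p2 = np.sqrt(D2) / (np.sqrt(D1) + np.sqrt(D2))    # go to x > 0
    x = np.full(n, -0.05)                             # start on the D1 side
    for _ in range(steps):
        D = np.where(x < 0.0, D1, D2)
        xp = x + np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
        crossed = (x < 0.0) != (xp < 0.0)
        side2 = rng.random(crossed.sum()) < p2        # redraw the landing side
        resid = np.abs(xp[crossed])                   # overshoot past x = 0
        scale = np.where(side2, np.sqrt(D2 / D[crossed]), np.sqrt(D1 / D[crossed]))
        xp[crossed] = np.where(side2, resid * scale, -resid * scale)
        x = xp
    return x

x = rwpt_interface()
print("mass on the low-D side:", np.mean(x >= 0.0))
```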
23

Lemaitre, Sophie. "Modélisation des matériaux composites multiphasiques à microstructures complexes : Etude des propriétés effectives par des méthodes d'homogénéisation." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC217/document.

Abstract:
This thesis focuses on setting up fast, reliable and automated approaches to design representative volume elements (RVE) of composite materials with complex microstructures (matrix/inclusions), and on the evaluation of their effective properties via a homogenization process. We developed algorithms and efficient tools for the random generation of such materials. Inclusion shapes may be spherical, cylindrical, elliptical, or any combination of them. Inflation, deflation, dislocation, undulation and coating are also available when generating an RVE; the aim is to approach realistic materials that may be damaged during production. Particular attention has been paid to periodic RVE generation. The homogenized characteristics, or effective properties, of materials built from such periodic RVEs may then be determined according to the principle of periodic homogenization, either by an iterative scheme using the FFT (Fast Fourier Transform) via the Lippmann-Schwinger integral equation, or by a finite element method. The stochastic generation of the RVEs and the set of morphological parameters studied (number of inclusions, type and shape, volume fraction, orientation of the inclusions) lead us to carry out studies in the mean; moreover, a special study was conducted to take into account the behavior of altered inclusions. Furthermore, we studied two particular cases concerning the apparent thermal conductivity of the composite: the first for coated spherical inclusions, in order to determine the influence of the layer thickness, and the second for a laminated polymer and carbon fiber composite sewn with a copper wire, in order to evaluate the contribution of the copper sewing according to the carbon fiber used.
24

Willersjö, Nyfelt Emil. "Comparison of the 1st and 2nd order Lee–Carter methods with the robust Hyndman–Ullah method for fitting and forecasting mortality rates." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48383.

Abstract:
The 1st and 2nd order Lee–Carter methods were compared with the Hyndman–Ullah method with regard to goodness of fit and forecasting ability for mortality rates. Swedish population data from the Human Mortality Database were used. The robust estimation property of the Hyndman–Ullah method was also tested by including the Spanish flu and a hypothetical scenario of the COVID-19 pandemic. After presenting the three methods and making several comparisons between them, it is concluded that the Hyndman–Ullah method is overall superior among the three for the chosen dataset. Its robust estimation of mortality shocks could also be confirmed.
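As a pointer to how the first-order Lee–Carter fit proceeds in practice, the sketch below estimates a_x, b_x and k_t by a rank-one SVD of centered log death rates and forecasts k_t as a random walk with drift. The data are synthetic, not the Human Mortality Database series used in the thesis, and the normalization convention is one common choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic log central death rates: rows = ages, columns = years (illustrative)
ages, years = 20, 40
true_a = np.linspace(-8.0, -2.0, ages)
true_k = -0.05 * np.arange(years) + rng.normal(0.0, 0.02, years)
log_m = true_a[:, None] + np.outer(np.linspace(0.5, 1.5, ages), true_k)
log_m += rng.normal(0.0, 0.01, log_m.shape)

# 1st order Lee-Carter model: log m(x,t) = a_x + b_x * k_t
a = log_m.mean(axis=1)                       # a_x: age-specific average level
U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
b = U[:, 0] / U[:, 0].sum()                  # normalize so sum(b_x) = 1
k = s[0] * Vt[0] * U[:, 0].sum()             # rescale so b_x * k_t is unchanged

# forecast k_t as a random walk with drift
drift = (k[-1] - k[0]) / (years - 1)
k_fc = k[-1] + drift * np.arange(1, 11)
log_m_fc = a[:, None] + np.outer(b, k_fc)    # forecast log rates, 10 years ahead
print("estimated drift of k_t:", round(drift, 4))
```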
25

Peña, Monferrer Carlos. "Computational fluid dynamics multiscale modelling of bubbly flow. A critical study and new developments on volume of fluid, discrete element and two-fluid methods." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90493.

Abstract:
The study and modelling of two-phase flows, even the simplest ones such as bubbly flow, remains a challenge that requires exploring the physical phenomena at different spatial and temporal resolution levels. CFD (Computational Fluid Dynamics) is a widespread and promising modelling tool, but at present there is no single approach or method capable of predicting the dynamics of these systems at the different resolution levels with sufficient precision. The inherent difficulty of the phenomena occurring in this flow, mainly those related to the interface between phases, means that low or intermediate resolution approaches such as system codes (RELAP, TRACE, ...) or 3D TFM (Two-Fluid Model) have significant trouble reproducing acceptable results, unless well-known scenarios and global values are considered. Conversely, methods based on a high resolution level, such as the Interfacial Tracking Method (ITM) or Volume Of Fluid (VOF), require a computational effort that makes their use unfeasible for complex systems. In this thesis, an open-source simulation framework has been designed and developed using the OpenFOAM library to analyze cases from the microscale to the macroscale, studying the different approaches and the information required by each of them for bubbly flow. In the first part, the dynamics of single bubbles are examined at a high resolution level through VOF. This technique yields accurate results for bubble formation, terminal velocity, path, wake, and the instabilities produced by the wake; however, it is impractical for real scenarios with more than a few dozen bubbles. As an alternative, this thesis proposes a CFD Discrete Element Method (CFD-DEM) technique in which each bubble is represented discretely. A novel solver for bubbly flow has been developed, including a large number of improvements necessary to reproduce the bubble-bubble and bubble-wall interactions, turbulence, the velocity seen by the bubbles, the distribution of the momentum and mass exchange terms over the cells, and bubble expansion, among others. New implementations, such as an algorithm to seed the bubbles in the system, have also been incorporated. As a result, this new solver gives more accurate results than those available to date. Continuing down the resolution levels, and therefore the required computational resources, a 3D TFM has been developed with a population balance equation solved through an implementation of the Quadrature Method Of Moments (QMOM). The solver is implemented with the same closure models as the CFD-DEM in order to analyze the effects of the loss of information caused by averaging the instantaneous Navier-Stokes equations. The analysis of the CFD-DEM results reveals the discrepancies introduced by assuming averaged values and homogeneous flow in the models of the classical TFM formulation. Finally, at the lowest resolution level, the system code RELAP5/MOD3 is used to model the bubbly flow regime. The code has been modified to properly reproduce the two-phase flow characteristics in vertical pipes, comparing the performance of drag-term calculations based on drift-velocity and drag-coefficient approaches.
Peña Monferrer, C. (2017). Computational fluid dynamics multiscale modelling of bubbly flow. A critical study and new developments on volume of fluid, discrete element and two-fluid methods [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90493
26

Guessasm, Mohamed. "Contribution à la détermination des domaines de résistance de matériaux hétérogènes non périodiques." Université Joseph Fourier (Grenoble), 1999. http://www.theses.fr/1999GRE10010.

Abstract:
The purpose of this work is to determine the macroscopic strength domains of non-periodic heterogeneous materials, within the framework of yield design (limit analysis) theory. To characterize the nonlinear behavior of randomly heterogeneous materials, the Heterogeneous Extremal Model (M.E.H.) offers an interesting formulation when the behavior of the constituent materials derives from a potential. A stress-based model, using the conceptual framework of the M.E.H., is developed. This model concerns materials whose strength domain is either convex (in which case it is identical to the M.E.H.) or simply star-shaped with respect to the origin of the stress space. An application is carried out on a randomly perforated material. The stress-based model adopts two descriptions of the perforated material. The first description is based on the volume fractions of the constituent materials. The second description assumes that the randomly perforated material is an aggregate of several periodically perforated materials. This modeling provides a family of strength domains depending on a heterogeneity parameter r, which is determined by calibrating the numerical predictions against experimental results for a given loading. The strength domain thus obtained is validated by comparing numerical predictions with experimental results for other types of loading. In parallel with this work, solution methods (both simple and efficient) are developed for the optimization problems to which the stress-based approaches (of yield design or homogenization) lead. Under assumptions on the strength domains, these optimization problems, initially subject to nonlinear constraints, are reduced to unconstrained inf-max problems with a significant reduction in the number of variables. Their solution is based on an original regularization method, applied to a functional that is independent of the max operator.
27

Paditz, Ludwig. "Beiträge zur expliziten Fehlerabschätzung im zentralen Grenzwertsatz." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-115105.

Abstract:
In this work, the asymptotic behavior of suitably centered and normalized sums of random variables is investigated; the variables are either independent or, in the dependent case, form a sequence of martingale differences or a strongly multiplicative system. In addition to classical summation theory, limiting processes with an infinite summation matrix or an adapted sequence of weight functions are considered. The method of characteristic functions and, in particular, the direct method of conjugate distribution functions are developed further in order to prove quantitative statements on uniform and non-uniform estimates of the remainder term in the central limit theorem. The investigations are carried out in the Lp metric, 1
28

Wang, Peng Neng (王鵬能). "3-D Random Walk Numerical Model For Natural River Pollutant Transport." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/11173039115016265740.

29

Yen, Jui-Chih (顏瑞池). "NORTA Initialization for Random Vector Generation by Numerical Methods." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/46247187635292900360.

Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Industrial Engineering
ROC academic year 90 (2001–2002)
We propose a numerical method for generating observations of an n-dimensional random vector with arbitrarily specified marginal distributions and correlation matrix. Our random vector generation (RVG) method uses the NORTA (NORmal To Anything) approach. NORTA generates a random vector by first generating a standard normal random vector and then transforming it into a random vector with the specified marginal distributions. During NORTA initialization, n(n-1)/2 nonlinear equations need to be solved to ensure that the generated random vector has the specified correlation structure; the root-finding function is a two-dimensional integral. For NORTA initialization, there are three approaches: analytical, numerical, and simulation. The analytical approach is exact but applicable only to special cases, such as normal random vectors. Chen (2001) uses the simulation approach to solve the n(n-1)/2 equations by treating the problem as a stochastic root-finding problem, solving the equations using only estimates of the function values; the disadvantage is that the computation time is usually longer than with the numerical approach. We use the numerical approach to solve these equations. Since the root-finding function is a two-dimensional integral, our numerical method includes two parts: integration and root finding. For integration, when the specified correlation is close to 1 or -1, the bivariate normal density in the integrand is steep: the density is high along the 45- or 135-degree line and almost zero everywhere else, so the numerical integration error could be large. Therefore, we divide the integration area into five parts and apply the efficient Gaussian-quadrature integration method to each part. For root finding, a combination of the bisection and Newton's methods is used to guarantee convergence. Simulation experiments are conducted to evaluate the accuracy of the numerical integration and root-finding methods. The results show that the numerical integration method is quite accurate when the skewness of the specified marginal distribution is small; when the skewness is high, the integration method may have large errors. The simulation results also show that our numerical RVG method is more accurate and efficient than Chen's simulation method when the skewness is small. When the skewness is high, the numerical method is still faster but less accurate; in this case, the simulation method is a better choice. Keywords: multivariate random vector generation, NORTA, stochastic root finding, numerical analysis, Gaussian quadrature, Newton's method
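A compact sketch of the numerical initialization idea described in this abstract: a tensorized Gauss–Hermite rule evaluates the two-dimensional correlation integral, and a bracketing root finder solves for the matching normal correlation. Brent's method here stands in for the thesis's bisection/Newton combination, the integration area is not split into five parts, and the exponential/gamma marginals and target correlation are only illustrative.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy import optimize, stats

# Gauss-Hermite rule adapted to a standard-normal weight
nodes, wts = hermgauss(48)
z = np.sqrt(2.0) * nodes
w = wts / np.sqrt(np.pi)

def induced_corr(rho_z, m1, m2):
    """corr(F1^-1(Phi(Z1)), F2^-1(Phi(Z2))) when (Z1, Z2) is N(0,1) with corr rho_z."""
    z1 = z[:, None]
    z2 = rho_z * z1 + np.sqrt(1.0 - rho_z**2) * z[None, :]
    # clip to keep the inverse CDFs finite at the extreme quadrature nodes
    u1 = np.clip(stats.norm.cdf(z1), 1e-12, 1.0 - 1e-12)
    u2 = np.clip(stats.norm.cdf(z2), 1e-12, 1.0 - 1e-12)
    exy = np.sum(w[:, None] * w[None, :] * m1.ppf(u1) * m2.ppf(u2))
    return (exy - m1.mean() * m2.mean()) / (m1.std() * m2.std())

m1, m2 = stats.expon(), stats.gamma(2.0)     # illustrative marginals
target = 0.6
rho_z = optimize.brentq(lambda r: induced_corr(r, m1, m2) - target, -0.999, 0.999)
print("normal correlation to use:", round(rho_z, 4))

# NORTA generation with the matched normal correlation
rng = np.random.default_rng(2)
L = np.linalg.cholesky(np.array([[1.0, rho_z], [rho_z, 1.0]]))
Z = rng.standard_normal((100_000, 2)) @ L.T
X = np.column_stack([m1.ppf(stats.norm.cdf(Z[:, 0])),
                     m2.ppf(stats.norm.cdf(Z[:, 1]))])
print("achieved correlation:", round(np.corrcoef(X.T)[0, 1], 4))
```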
30

"Insulator Flashover Probability Investigation Based on Numerical Electric Field Calculation and Random Walk Theory." Doctoral diss., 2016. http://hdl.handle.net/2286/R.I.39438.

Abstract:
Overhead high voltage transmission lines are widely used around the world to deliver power to customers because of their low losses and high transmission capability. Well-coordinated insulation systems are capable of withstanding lightning and switching surge voltages. However, flashover is a serious issue for insulation systems, especially if the insulator is covered by a pollution layer. Many laboratory experiments have been conducted to investigate this issue; since most experiments are time-consuming and costly, good mathematical models can help predict insulator flashover performance as well as guide the experiments. This dissertation proposes a new statistical model to calculate the flashover probability of insulators under different supply voltages and contamination levels. An insulator model with water particles in the air is simulated to analyze the effects of rain and mist on flashover performance. Additionally, insulator radius and the number of sheds affect insulator surface resistivity and leakage distance; these two factors are studied to improve the efficiency of insulator design. The dissertation also discusses the impact of insulator surface hydrophobicity on flashover voltage. Because arc propagation is a stochastic process, an arc can travel along different paths depending on the electric field distribution. Some arc paths jump between insulator sheds instead of travelling along the insulator surfaces; such jumping can shorten the leakage distance and intensify the electric field. Therefore, the probabilities of arc jumping at different shed locations are also calculated in this dissertation. The new simulation model is based on numerical electric field calculation and random walk theory: the electric field is calculated by the variable-grid finite difference method, and the random walk theory from the Monte Carlo method is utilized to describe the stochastic propagation process of arc growth. This model will permit insulator engineers to design reasonable insulator geometries and to reduce flashover under a wide range of operating conditions.
Doctoral dissertation, Engineering, 2016
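The coupling of a field solution with stochastic arc growth can be illustrated with a toy dielectric-breakdown-style walk: Laplace's equation is solved on a small uniform grid (not the variable-grid finite difference scheme of the dissertation), and the discharge extends to a neighboring node with probability proportional to the local potential raised to a power η. The geometry, boundary conditions, and η are all illustrative assumptions, not values from this work.

```python
import numpy as np

rng = np.random.default_rng(3)
n, eta = 40, 2.0                       # grid size and growth exponent (illustrative)
phi = np.zeros((n, n))
phi[-1, :] = 1.0                       # energized bottom boundary (1 per-unit volt)
arc = {(0, n // 2)}                    # arc starts at the grounded top electrode

def relax(phi, arc, iters):
    """Jacobi relaxation of Laplace's equation; arc nodes held at 0 V.
    Side boundaries are periodic (np.roll) purely for brevity."""
    for _ in range(iters):
        phi = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi[0, :], phi[-1, :] = 0.0, 1.0
        for i, j in arc:
            phi[i, j] = 0.0
    return phi

for step in range(200):
    phi = relax(phi, arc, iters=50)
    # candidate growth sites: grid neighbors of the current arc
    cand = sorted({(i + di, j + dj)
                   for i, j in arc
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= i + di < n and 0 <= j + dj < n
                   and (i + di, j + dj) not in arc})
    weights = np.array([phi[c] ** eta for c in cand])
    chosen = cand[rng.choice(len(cand), p=weights / weights.sum())]
    arc.add(chosen)
    if chosen[0] == n - 1:             # the arc bridged the gap: flashover
        print("flashover after", step + 1, "growth steps")
        break
```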
31

CHEN, CHIA-HUNG (陳佳鴻). "2-D Random Walk Numerical Model for Riverine Pollutant Transport of Continuous Point Source." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/pmzng3.

Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Civil and Hydraulic Engineering
ROC academic year 90 (2001–2002)
The middle and lower reaches of Taiwan's major and minor rivers have been seriously polluted, much of the pollution being caused by various continuous discharges. This study uses the random walk method to build a numerical model that simulates riverine pollutant transport from a continuous point source. The random walk method treats the mass released at each time step as thousands of particles; the released particles not only advect with the flow but also walk randomly due to diffusion (or dispersion) effects. At any desired time, the concentration at a given position may be obtained by dividing the total particle mass in the volume of interest by that volume. The position of every particle is stored at every time step; to save computer memory, the storage of a particle that flows out of the flow field is reassigned to a newly released particle. In this study, the model structure is discussed, and calculated results are compared with laboratory experimental data to analyze the dispersion mechanism in a meandering channel.
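A minimal sketch of the continuous-release idea this abstract describes: at every time step a fresh batch of particles is released at the source, each particle advects with the flow and takes a Gaussian dispersive step, and concentration is estimated as particle mass per cell volume. Uniform flow, unit-sized cells, and all parameter values are simplifying assumptions, not the calibrated model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
u = 0.5                     # uniform streamwise velocity [m/s] (illustrative)
Dx, Dy = 0.01, 0.005        # longitudinal/transverse dispersion [m^2/s]
dt, n_steps = 1.0, 300
n_rel = 200                 # particles released per time step
mass = 1.0 / n_rel          # 1 kg released per step, split over the particles

x = np.empty(0)
y = np.empty(0)
for _ in range(n_steps):
    # continuous point source at the origin: add a new batch each step
    x = np.concatenate([x, np.zeros(n_rel)])
    y = np.concatenate([y, np.zeros(n_rel)])
    # advection plus random-walk dispersion step for every particle
    x += u * dt + np.sqrt(2.0 * Dx * dt) * rng.standard_normal(x.size)
    y += np.sqrt(2.0 * Dy * dt) * rng.standard_normal(y.size)

# concentration = mass per cell / cell volume (1 m x 1 m cells, unit depth)
xe, ye = np.arange(0.0, 200.0), np.arange(-10.0, 10.0)
H, _, _ = np.histogram2d(x, y, bins=[xe, ye])
conc = H * mass
print("centerline concentration near x = 50 m:", conc[50, 9].round(4), "kg/m^3")
```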
32

Bloem-Reddy, Benjamin Michael. "Random Walk Models, Preferential Attachment, and Sequential Monte Carlo Methods for Analysis of Network Data." Thesis, 2017. https://doi.org/10.7916/D8348R5Q.

Abstract:
Networks arise in nearly every branch of science, from biology and physics to sociology and economics. A signature of many network datasets is strong local dependence, which gives rise to phenomena such as sparsity, power law degree distributions, clustering, and structural heterogeneity. Statistical models of networks require a careful balance of flexibility to faithfully capture that dependence, and simplicity, to make analysis and inference tractable. In this dissertation, we introduce a class of models that insert one network edge at a time via a random walk, permitting the location of new edges to depend explicitly on the structure of the existing network, while remaining probabilistically and computationally tractable. Connections to graph kernels are made through the probability generating function of the random walk length distribution. The limiting degree distribution is shown to exhibit power law behavior, and the properties of the limiting degree sequence are studied analytically with martingale methods. In the second part of the dissertation, we develop a class of particle Markov chain Monte Carlo algorithms to perform inference for a large class of sequential random graph models, even when the observation consists only of a single graph. Using these methods, we derive a particle Gibbs sampler for random walk models. Fit to synthetic data, the sampler accurately recovers the model parameters; fit to real data, the model offers insight into the typical length scale of dependence in the network, and provides a new measure of vertex centrality. The arrival times of new vertices are the key to obtaining results for both theory and inference. In the third part, we undertake a careful study of the relationship between the arrival times, sparsity, and heavy tailed degree distributions in preferential attachment-type models of partitions and graphs. A number of constructive representations of the limiting degrees are obtained, and connections are made to exchangeable Gibbs partitions as well as to recent results on the limiting degrees of preferential attachment graphs.
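A toy version of the edge-insertion mechanism studied in the dissertation: each new vertex attaches to the endpoint of a simple random walk of geometrically distributed length started at a uniformly chosen vertex, so that the location of new edges depends explicitly on the existing network structure. The stopping parameter and the exact insertion rule (one new vertex per edge) are illustrative simplifications, not the model as specified in this work.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)
p_stop = 0.3                      # geometric walk-length parameter (illustrative)
adj = {0: [1], 1: [0]}            # seed graph: a single edge

def walk_endpoint(start):
    v = start
    while rng.random() > p_stop:  # walk length ~ Geometric(p_stop)
        v = int(rng.choice(adj[v]))
    return v

for _ in range(20_000):
    anchor = int(rng.choice(len(adj)))    # uniformly chosen starting vertex
    target = walk_endpoint(anchor)
    new = len(adj)                        # attach a new vertex at the endpoint
    adj[new] = [target]
    adj[target].append(new)

# random-walk attachment favors high-degree vertices: expect a heavy upper tail
deg = Counter(len(nbrs) for nbrs in adj.values())
print("max degree:", max(deg), " vertices of degree 1:", deg[1])
```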
33

Mehta, Kurang Jvalant. "Numerical methods to implement time reversal of waves propagating in complex random media." 2003. http://www.lib.ncsu.edu/theses/available/etd-05192003-160819/unrestricted/etd.pdf.

34

Rajani, Vishaal. "Quantitative analysis of single particle tracking experiments: applying ecological methods in cellular biology." Master's thesis, 2010. http://hdl.handle.net/10048/1316.

Abstract:
Single-particle tracking (SPT) is a method used to study the diffusion of various molecules within the cell. SPT involves tagging proteins with optical labels and observing their individual two-dimensional trajectories with a microscope. The analysis of this data provides important information about protein movement and mechanism, and is used to create multistate biological models. One of the challenges in SPT analysis is the variety of complex environments that contribute to heterogeneity within movement paths. In this thesis, we explore the limitations of current methods used to analyze molecular movement, and adapt analytical methods used in animal movement analysis, such as correlated random walks and first-passage time variance, to SPT data of leukocyte function-associated antigen-1 (LFA-1) integral membrane proteins. We discuss the consequences of these methods in understanding different types of heterogeneity in protein movement behaviour, and provide support to results from current experimental work.
Applied Mathematics
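One of the borrowed movement-ecology diagnostics is easy to demonstrate: the sketch below simulates a correlated random walk with persistent (von Mises) turning angles and computes the time-averaged mean squared displacement, whose log-log slope separates directed (≈2), diffusive (≈1), and confined (<1) motion. The persistence and step parameters are illustrative, not fitted to LFA-1 trajectories.

```python
import numpy as np

rng = np.random.default_rng(6)
n, speed, kappa = 5000, 0.1, 4.0        # steps, step length, turning concentration

# correlated random walk: the heading changes by a von Mises turning angle
heading = np.cumsum(rng.vonmises(0.0, kappa, n))
xy = np.cumsum(np.column_stack([speed * np.cos(heading),
                                speed * np.sin(heading)]), axis=0)

def msd(xy, max_lag):
    """Time-averaged mean squared displacement for lags 1..max_lag."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = xy[lag:] - xy[:-lag]
        out[lag - 1] = np.mean(np.sum(d**2, axis=1))
    return out

lags = np.arange(1, 101)
alpha = np.polyfit(np.log(lags), np.log(msd(xy, 100)), 1)[0]
print("MSD scaling exponent:", round(alpha, 2))  # near 2 for a strongly persistent walk
```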
35

Choudhary, Shalu. "Numerical Methods For Solving The Eigenvalue Problem Involved In The Karhunen-Loeve Decomposition." Thesis, 2012. http://etd.iisc.ernet.in/handle/2005/2308.

Abstract:
In structural analysis and design it is important to consider the effects of uncertainties in loading and material properties in a rational way. Uncertainty in material properties, such as heterogeneity in elastic and mass properties, can be modeled as a random field. For computational purposes, it is essential to discretize and represent the random field. For a field with known second-order statistics, such a representation can be achieved by the Karhunen-Loève (KL) expansion: the random field is represented by a truncated series expansion using a few eigenvalues and associated eigenfunctions of the covariance function, together with corresponding random coefficients. The eigenvalues and eigenfunctions of the covariance kernel are obtained by solving a Fredholm integral equation of the second kind. A closed-form solution of the integral equation, especially for arbitrary domains, may not always be available, so an approximate solution is sought; in doing so, it is important to consider both the accuracy of the solution and the cost of computing it. This work explores a few numerical methods for estimating the solution of this integral equation. Three different methods are implemented and numerically studied: (i) using finite element bases (Method 1), (ii) mid-point approximation (Method 2), and (iii) the Nyström method (Method 3). In the first method, an eigenfunction is represented as a linear combination of a set of finite element bases, and the resulting error in the integral equation is minimized in the Galerkin sense, which results in a generalized matrix eigenvalue problem. In the second method, the domain is partitioned into a finite number of subdomains; the covariance function is discretized by approximating its value locally over each subdomain, thereby transforming the integral equation into a matrix eigenvalue problem. In the third method, the Fredholm integral equation is approximated by a quadrature rule, which also results in a matrix eigenvalue problem. The methods and results are compared in terms of accuracy, computational cost, and difficulty of implementation. The first part of the numerical study compares these three methods, first in a one-dimensional domain and then in two dimensions on a simple rectangular domain (referred to as Domain 1) with an uncertain material property modeled as a Gaussian random field. For the chosen covariance model and domain, the analytical solutions are known, which allows the accuracy of the numerical solutions to be verified. The three numerical methods are thereby studied and compared for a chosen target accuracy and for different correlation lengths of the random field. It was observed that Methods 2 and 3 are much faster than Method 1. On the other hand, Methods 2 and 3 incur an additional cost for discretizing the domain into nodes, whereas for a mechanics-related problem Method 1 can reuse the finite element mesh already available for solving the mechanics problem. The second part of the work studies the effect of the geometry of the model on realizations of the random field, the objective being to see whether the random field for a complicated domain can be generated from the KL expansion on a simpler domain.
For this purpose, two KL decompositions are obtained: one on Domain 1, and another on the same rectangular domain modified with a rectangular hole inside it (referred to as Domain 2). The random process is generated and realizations are compared. It was observed that the probability density functions at the nodes of both domains, that is, Domain 1 and Domain 2, are similar. This observation suggests that a complicated domain can be replaced by a corresponding simpler domain, thereby reducing the computational cost.
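Method 3 above is especially compact to sketch: replace the Fredholm integral with a quadrature rule, symmetrize with the square roots of the weights, and solve a standard matrix eigenproblem. The exponential covariance kernel, the unit interval, and the truncation order below are illustrative choices, not those of the thesis.

```python
import numpy as np

sigma2, ell = 1.0, 0.2                  # variance and correlation length (illustrative)
cov = lambda s, t: sigma2 * np.exp(-np.abs(s - t) / ell)

# Gauss-Legendre quadrature rule mapped to the domain [0, 1]
n = 200
x, w = np.polynomial.legendre.leggauss(n)
x, w = 0.5 * (x + 1.0), 0.5 * w

# Nystrom discretization of the Fredholm equation: symmetrized eigenproblem
K = cov(x[:, None], x[None, :])
sqw = np.sqrt(w)
lam, V = np.linalg.eigh(sqw[:, None] * K * sqw[None, :])
lam, V = lam[::-1], V[:, ::-1]          # sort eigenvalues in descending order
eigfun = V / sqw[:, None]               # eigenfunction values at the quadrature nodes

# truncated KL expansion of a zero-mean Gaussian random field at the nodes
rng = np.random.default_rng(7)
m = 10
field = eigfun[:, :m] @ (np.sqrt(lam[:m]) * rng.standard_normal(m))
print("five largest eigenvalues:", np.round(lam[:5], 4))
```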
36

Scheuerer, Michael. "A Comparison of Models and Methods for Spatial Interpolation in Statistics and Numerical Analysis." Doctoral thesis, 2009. http://hdl.handle.net/11858/00-1735-0000-0006-B3D5-1.

37

Gaspar, Ana Pimentel Torres. "Contribution to control uncertainties in numerical modelling of dam performances: an application to an RCC dam." Doctoral thesis, 2014. http://hdl.handle.net/1822/35837.

Abstract:
Doctoral thesis in Civil Engineering (field of knowledge: Geotechnics)
The use of fully probabilistic approaches to account for uncertainties within dam engineering is a recently emerging field, in which studies have mostly concerned the safety evaluation of dams in service. This thesis arises within this framework as a contribution to moving the risk analysis of dams beyond empirical knowledge, applying probabilistic tools to the numerical modelling of a roller compacted concrete (RCC) dam during its construction phase. The work developed here proposes a methodology that accounts for risks related to cracking during construction, which may compromise the dam's functional and structural behaviour. To this end, emphasis is given to uncertainties related to the material itself (e.g. strength and water-to-cement ratio, among others) as well as to ambient conditions during the construction phase of RCC dams. A thermo-chemo-mechanical model is used to describe the RCC behaviour. Concerning the probabilistic model, two aspects are studied: how the uncertainties related to the input variables propagate through the model, and what influence their dispersion has on the dispersion of the output, assessed by performing a global sensitivity analysis by means of the RBD-FAST method. The spatial variability of some input parameters is also accounted for through two-dimensional random fields. Furthermore, reliability methods are coupled with finite element methods in order to evaluate the cracking potential of each cast RCC layer during construction by means of a cracking density concept. As an important outcome of this applied research, probability curves for the cracking density within each cast layer are predicted as functions of both age and boundary conditions, which is believed to be an original contribution of this thesis. The proposed methodology may therefore be seen as a contribution to help engineers understand how uncertainties affect dam behaviour during construction, and may be relied on in the future to improve and support the design phase of dam projects.
This work was financially supported by a PhD grant from the Portuguese Foundation for Science and Technology (FCT) (SFRH/BD/63939/2009, QREN POPH - Tipologia 4.1).
38

Han, Baoguang. "Statistical analysis of clinical trial data using Monte Carlo methods." Thesis, 2014. http://hdl.handle.net/1805/4650.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
In medical research, data analysis often requires complex statistical methods for which no closed-form solutions are available. Under such circumstances, Monte Carlo (MC) methods have found many applications. In this dissertation, we propose several novel statistical models in which MC methods are utilized. The first part focuses on semicompeting risks data, in which a non-terminal event is subject to dependent censoring by a terminal event. Based on an illness-death multistate survival model, we propose flexible random effects models and further extend them to a joint modeling setting in which semicompeting risks data and repeated marker data are analyzed simultaneously. Since the proposed methods involve high-dimensional integrations, Bayesian Markov chain Monte Carlo (MCMC) methods are utilized for estimation; the use of Bayesian methods also facilitates the prediction of individual patient outcomes. The proposed methods are demonstrated in both simulation and case studies. The second part focuses on the re-randomization test, a nonparametric method that makes inferences based solely on the randomization procedure used in the clinical trial. With this type of inference, Monte Carlo methods are often used to generate the null distribution of the treatment difference. However, an issue was recently discovered when subjects in a clinical trial were randomized with unbalanced treatment allocation to two treatments according to the minimization algorithm, a randomization procedure frequently used in practice: the null distribution of the re-randomization test statistic was found not to be centered at zero, which compromised the power of the test. In this dissertation, we investigate the properties of the re-randomization test and propose a weighted re-randomization method to overcome this issue. The proposed method is demonstrated through extensive simulation studies.
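The Monte Carlo null distribution at the heart of a re-randomization test can be sketched in a few lines. Labels here are re-drawn under simple 1:1 complete randomization rather than the unbalanced minimization scheme the dissertation studies, so the off-center null distribution it identifies does not arise in this toy version; the data are simulated, not from a real trial.

```python
import numpy as np

rng = np.random.default_rng(8)

# toy trial: outcomes under an observed 1:1 treatment assignment
n = 100
treat = rng.permutation(np.repeat([0, 1], n // 2))
y = 0.3 * treat + rng.standard_normal(n)          # true effect 0.3 (illustrative)
t_obs = y[treat == 1].mean() - y[treat == 0].mean()

# Monte Carlo null distribution: re-randomize labels, recompute the statistic
B = 10_000
t_null = np.empty(B)
for b in range(B):
    re = rng.permutation(treat)
    t_null[b] = y[re == 1].mean() - y[re == 0].mean()

p_value = np.mean(np.abs(t_null) >= np.abs(t_obs))
print("observed difference:", round(t_obs, 3), " p-value:", round(p_value, 4))
```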
39

Gagnon, Philippe. "Sélection de modèles robuste : régression linéaire et algorithme à sauts réversibles." Thèse, 2017. http://hdl.handle.net/1866/20583.

40

Paditz, Ludwig. "Beiträge zur expliziten Fehlerabschätzung im zentralen Grenzwertsatz." Doctoral thesis, 1988. https://tud.qucosa.de/id/qucosa%3A26930.

Abstract:
In this work, the asymptotic behavior of suitably centered and normalized sums of random variables is investigated; the variables are either independent or, in the dependent case, form a sequence of martingale differences or a strongly multiplicative system. In addition to classical summation theory, limiting processes with an infinite summation matrix or an adapted sequence of weight functions are considered. The method of characteristic functions and, in particular, the direct method of conjugate distribution functions are developed further in order to prove quantitative statements on uniform and non-uniform estimates of the remainder term in the central limit theorem. The investigations are carried out in the Lp metric, 1
