To see the other types of publications on this topic, follow the link: Numerical computing.

Dissertations / Theses on the topic 'Numerical computing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Numerical computing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Piqueras García, Miguel Ángel. "Numerical Methods for Multidisciplinary Free Boundary Problems: Numerical Analysis and Computing." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/107948.

Full text
Abstract:
Many problems in science and engineering are formulated as partial differential equations (PDEs). If the boundary of the domain where these equations are to be solved is not known a priori, we face "free boundary problems", characteristic of stationary, time-independent systems, or "moving boundary problems" in temporal evolution processes, where the boundary changes over time. The solution to these problems is given by the dependent variable(s) of the PDE(s) together with the function that determines the position of the boundary. Since analytical solutions are unknown in most cases, it is necessary to resort to numerical methods that yield a sufficiently accurate solution while also preserving the qualitative properties of the solution of the continuous PDE model. This work addresses the numerical study of several moving boundary problems arising in different disciplines. The methodology consists of two successive steps: first, the Landau transformation, or "front-fixing transformation", is applied to the PDE model to keep the boundary of the domain fixed; the problem is then discretised with a finite difference scheme. The resulting numerical schemes are implemented in MATLAB. Properties of the scheme and of the numerical solution (positivity, stability, consistency, monotonicity, etc.) are studied through an exhaustive numerical analysis. The first chapter reviews the state of the art of the field, justifies the need for numerical methods adapted to this type of problem, and briefly describes the methodology used in our approach.
Chapter 2 presents a problem in Mathematical Biology that consists in determining the evolution over time of an invasive species population spreading in a habitat. This problem is modelled by a diffusion-reaction equation linked to a Stefan-type condition. The results of the numerical analysis confirm the existence of a spreading-vanishing dichotomy in the long-term evolution of the population density of the invasive species. In particular, it is possible to determine the value of the coefficient of the Stefan condition that separates the spreading behaviour from extinction. Chapters 3 and 4 focus on a problem of Concrete Chemistry of interest in Civil Engineering: the carbonation of concrete, an evolutionary phenomenon that leads to the progressive degradation of the affected structure and its eventual ruin if preventive measures are not taken. Chapter 3 considers a system of two parabolic-type PDEs with two unknowns. For its resolution, the initial and boundary conditions have to be considered together with the Stefan conditions on the carbonation front. The numerical analysis results agree with those obtained in a previous theoretical study. The dynamics of the concentrations and of the moving boundary confirm the long-term behaviour of the evolution law for the moving boundary as a "square root of time". Chapter 4 considers a more general model than the previous one, which includes six chemical species, defined in both the carbonated and non-carbonated zones, whose concentrations have to be found. Chapter 5 addresses a heat transfer problem that appears in various industrial processes; in this case, the solidification of metals in casting processes, where the solid phase advances and the liquid phase shrinks until it is depleted. The moving boundary (the solidification front) separates both phases. Its position at each instant is the variable to be determined, together with the temperature profiles in both phases.
After a suitable transformation, discretisation is carried out to obtain a finite difference scheme. The process is subdivided into three temporal stages to deal with the singularities associated with the position of the moving boundary in the initialisation and depletion stages.
Piqueras García, MÁ. (2018). Numerical Methods for Multidisciplinary Free Boundary Problems: Numerical Analysis and Computing [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/107948
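The front-fixing methodology summarised in this abstract can be illustrated on the classical one-phase Stefan problem. The following is a minimal Python sketch (not the author's MATLAB code; the grid size, time horizon, and initial profile are arbitrary choices made here): the Landau variable y = x/s(t) maps the moving domain [0, s(t)] onto the fixed interval [0, 1], and an explicit finite-difference scheme advances the transformed equation together with the Stefan condition.

```python
def simulate_stefan(J=50, s0=0.5, t_end=0.5):
    """Explicit front-fixing scheme for the one-phase Stefan problem:
    u_t = u_xx on 0 < x < s(t), u(0,t) = 1, u(s(t),t) = 0,
    with Stefan condition s'(t) = -u_x(s(t), t).
    The Landau variable y = x/s(t) fixes the domain to [0, 1], giving
    u_t = u_yy / s^2 + y (s'/s) u_y on the unit interval."""
    dy = 1.0 / J
    dt = 0.4 * (s0 * dy) ** 2                  # explicit stability: dt <= (s dy)^2 / 2
    u = [1.0 - j * dy for j in range(J + 1)]   # linear initial profile
    u[J] = 0.0                                 # u = 0 on the moving front
    s, t = s0, 0.0
    s_hist = [s]
    while t < t_end:
        # Stefan condition: s' = -(1/s) u_y(1, t), one-sided difference at y = 1
        sdot = -(u[J] - u[J - 1]) / (s * dy)
        un = u[:]
        for j in range(1, J):
            y = j * dy
            diff = (u[j + 1] - 2.0 * u[j] + u[j - 1]) / (s * s * dy * dy)
            adv = y * (sdot / s) * (u[j + 1] - u[j - 1]) / (2.0 * dy)
            un[j] = u[j] + dt * (diff + adv)
        u = un
        s += dt * sdot
        s_hist.append(s)
        t += dt
    return s_hist, u

s_hist, u = simulate_stefan()
print("front moved from %.3f to %.3f" % (s_hist[0], s_hist[-1]))
```

With these parameters the front position grows monotonically, roughly like a square root of time, which is the qualitative behaviour the thesis verifies (positivity and monotonicity of the numerical solution).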
APA, Harvard, Vancouver, ISO, and other styles
2

Al-Latifi, Yasir. "Optimizing numerical modelling of quantum computing hardware." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-182659.

Full text
Abstract:
Quantum computers are being developed to solve certain problems faster than classical computers. Instead of using classical bits, they use quantum bits (qubits) that exploit quantum effects. At Chalmers University of Technology, researchers have already built a quantum chip consisting of two superconducting transmon qubits and are trying to build systems with more qubits. To assist in that process, they run numerical simulations of the quantum systems. However, these simulations face an intrinsic computational limitation: the Hilbert space of the system grows exponentially with the number of qubits. To mitigate this problem, the simulations should be made as efficient as possible by applying certain approximations while still obtaining accurate results. The aim of this project is to compare several of these approximations to see how accurate they are and how fast they run on a classical computer. This is done by modelling the qubits as quantum anharmonic oscillators and testing several cases: varying the energy levels of the qubits, increasing the number of qubits, and testing the rotating-wave approximation (RWA). These cases were tested by implementing two-qubit gates on the system. The simulations were all made using the Python library QuTiP. The results show that one should simulate using at least one energy level higher than the maximum energy level required for the gate to function. For larger systems, the RWA makes a big difference in simulation times while still giving relatively accurate results. When using the RWA, the number of levels used does not seem to affect the results significantly, and one could therefore use the lowest energy levels that can simulate the system.
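The rotating-wave approximation benchmarked in this thesis can be illustrated on a single resonantly driven qubit. The sketch below uses plain Python rather than the QuTiP setup of the thesis, and all parameter values are illustrative: it integrates the Schrödinger equation with the full cosine drive and with the time-independent RWA Hamiltonian. For a weak drive the two agree closely, while the RWA system has no fast time dependence and is cheaper to evolve.

```python
import math

def rk4_step(psi, t, dt, H):
    """One RK4 step of i dpsi/dt = H(t) psi for a two-level system (hbar = 1)."""
    def f(tt, p):
        a, b = p
        (h00, h01), (h10, h11) = H(tt)
        return (-1j * (h00 * a + h01 * b), -1j * (h10 * a + h11 * b))
    k1 = f(t, psi)
    k2 = f(t + dt / 2, tuple(p + dt / 2 * k for p, k in zip(psi, k1)))
    k3 = f(t + dt / 2, tuple(p + dt / 2 * k for p, k in zip(psi, k2)))
    k4 = f(t + dt, tuple(p + dt * k for p, k in zip(psi, k3)))
    return tuple(p + dt / 6 * (a + 2 * b + 2 * c + d)
                 for p, a, b, c, d in zip(psi, k1, k2, k3, k4))

def excited_population(H, t_end, dt=0.005):
    """Evolve from the ground state and return the excited-state population."""
    psi, t = (1.0 + 0j, 0.0 + 0j), 0.0
    for _ in range(int(round(t_end / dt))):
        psi = rk4_step(psi, t, dt, H)
        t += dt
    return abs(psi[1]) ** 2

w0, omega_rabi = 1.0, 0.05   # illustrative qubit frequency and weak drive strength
# Full lab-frame Hamiltonian: (w0/2) sigma_z + Omega cos(w0 t) sigma_x
H_full = lambda t: ((w0 / 2, omega_rabi * math.cos(w0 * t)),
                    (omega_rabi * math.cos(w0 * t), -w0 / 2))
# RWA in the rotating frame at resonance: (Omega/2) sigma_x, time-independent
H_rwa = lambda t: ((0.0, omega_rabi / 2), (omega_rabi / 2, 0.0))

t_pi = math.pi / omega_rabi  # RWA pi-pulse duration: full population transfer
p_full = excited_population(H_full, t_pi)
p_rwa = excited_population(H_rwa, t_pi)
print(p_full, p_rwa)
```

The discrepancy between the two populations is set by the counter-rotating terms (of order (Omega/w0)^2 here), which is the kind of accuracy/speed trade-off the thesis quantifies for two-qubit gates.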
3

Yang, Yong. "Efficient parallel genetic algorithms applied to numerical optimisation." Thesis, Southampton Solent University, 2008. http://ssudl.solent.ac.uk/631/.

Full text
Abstract:
This research is concerned with the optimisation of multi-modal numerical problems using genetic algorithms (GAs). GAs use randomised operators operating over a population of candidate solutions to generate new points in the search space. As the scale and complexity of target applications increase, run time becomes a major inhibitor. Parallel genetic algorithms (PGAs) have therefore become an important area of research. Coarse-grained implementations are one of the most popular models and many researchers are concerned primarily with this area. The island model was previously the only class of parallel genetic algorithm for coarse-grained processing platforms. In the island model, sub-populations overlap indiscriminately in the search space even when there is no communication between them. To determine whether removing the overlaps between sub-populations is beneficial, a partition model based on domain decomposition was developed and shown to offer superior performance on a number of two-dimensional test problems. However, the partition model has a scalability problem. The main contribution of this thesis is to propose and develop an alternative approach, which replicates the beneficial behaviour of the partition model using more scalable techniques. It operates in a similar manner to the island model, but periodically performs a clustering analysis on each sub-population. The clustering analysis is used to identify regions of the search space in which more than one sub-population is sampling. The overlapping clusters are then merged and redistributed amongst sub-populations so that only one sub-population has samples in that region of the search space. It is shown that these overlaps between sub-populations are identified and removed by the clustering analysis without a priori domain knowledge.
The performance of the new approach employing the clustering analysis is then compared with the island model and the partition model on a number of non-trivial problems. These experiments show that the new approach is robust, efficient and effective, and removes the scalability problem that prevents the wide application of the partition model.
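As a rough, self-contained illustration of the coarse-grained island model discussed in this abstract (not the author's implementation: the ring migration topology, all parameters, and the 1-D Rastrigin test function are choices made here for brevity):

```python
import math, random

def rastrigin(x):
    """Multi-modal 1-D test function; global minimum 0 at x = 0,
    with local minima near every other integer."""
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def island_ga(n_islands=4, pop_size=20, generations=300,
              migrate_every=25, seed=1):
    """Coarse-grained island model: sub-populations evolve independently,
    periodically passing their best individual to the next island in a ring."""
    rng = random.Random(seed)
    islands = [[rng.uniform(-5.12, 5.12) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for gen in range(generations):
        for k, pop in enumerate(islands):
            pop.sort(key=rastrigin)
            elite = pop[:pop_size // 2]                          # truncation selection
            children = [p + rng.gauss(0.0, 0.3) for p in elite]  # Gaussian mutation
            islands[k] = elite + children
        if gen % migrate_every == 0:                             # ring migration
            best = [min(pop, key=rastrigin) for pop in islands]
            for k in range(n_islands):
                islands[k][-1] = best[(k - 1) % n_islands]       # replace a child
    best_x = min((min(pop, key=rastrigin) for pop in islands), key=rastrigin)
    return rastrigin(best_x)

best_fitness = island_ga()
print("best fitness found:", best_fitness)
```

Note that nothing here prevents two islands from sampling the same basin; detecting and removing exactly that overlap via clustering is the thesis's contribution.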
4

Albaiz, Abdulaziz (Abdulaziz Mohammad). "MPI-based scalable computing platform for parallel numerical application." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95562.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2014. Cataloged from PDF version of thesis. Includes bibliographical references (page 61).
Developing parallel numerical applications, such as simulators and solvers, involves a variety of challenges in dealing with data partitioning, workload balancing, data dependencies, and synchronization. Many numerical applications share the need for an underlying parallel framework for parallelization on multi-core/multi-machine hardware. In this thesis, a computing platform for parallel numerical applications is designed and implemented. The platform performs parallelization by multiprocessing over the MPI library, and serves as a layer of abstraction that hides the complexities of data distribution and inter-process communication. It also provides the essential functions that most numerical applications use, such as handling data dependencies, workload balancing, and overlapping communication with computation. The performance evaluation of the parallel platform shows that it is highly scalable for large problems.
by Abdulaziz Albaiz. S.M.
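The data-partitioning and ghost-cell bookkeeping such a platform hides can be sketched without MPI at all. This illustrative fragment (hypothetical function names; the real platform would run these steps as separate MPI processes) balances cells across ranks and emulates a one-dimensional halo exchange sequentially:

```python
def block_partition(n_cells, n_procs):
    """Split n_cells as evenly as possible over n_procs ranks;
    returns a (start, stop) index range per rank (sizes differ by at most 1)."""
    base, extra = divmod(n_cells, n_procs)
    bounds, start = [], 0
    for rank in range(n_procs):
        size = base + (1 if rank < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

def exchange_halos(local_blocks):
    """Sequentially emulate the ghost-cell exchange an MPI layer would perform:
    each block receives one boundary cell from each existing neighbour."""
    haloed = []
    for k, block in enumerate(local_blocks):
        left = local_blocks[k - 1][-1] if k > 0 else None
        right = local_blocks[k + 1][0] if k < len(local_blocks) - 1 else None
        haloed.append(([left] if left is not None else []) + list(block) +
                      ([right] if right is not None else []))
    return haloed

bounds = block_partition(10, 3)
blocks = [list(range(a, b)) for a, b in bounds]
haloed = exchange_halos(blocks)
print(bounds, haloed[1])
```

In the real platform the halo exchange is a pair of non-blocking sends/receives per neighbour, which is what allows communication to overlap with interior computation.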
5

Keck, Jean-Baptiste. "Numerical modelling and High Performance Computing for sediment flows." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM067.

Full text
Abstract:
The dynamics of sediment flows is a subject that covers many applications in geophysics, ranging from estuary silting issues to the comprehension of sedimentary basins. This PhD thesis deals with high-resolution numerical modelling of sediment flows and the implementation of the corresponding algorithms on hybrid calculators. Sediment flows involve multiple interacting phases, giving rise to several types of instabilities such as Rayleigh-Taylor instabilities and double diffusivity. The difficulties for the numerical simulation of these flows arise from the complex fluid/sediment interactions involving different physical scales. Indeed, these interactions are difficult to treat because of the great variability of the diffusion parameters in the two phases. When the ratio of the diffusivities, given by the Schmidt number, is too high, conventional methods show some limitations. This thesis extends recent results obtained on the direct resolution of the transport of a passive scalar at high Schmidt number on hybrid CPU-GPU architectures and validates this approach on instabilities that occur in sediment flows. This work first reviews the numerical methods adapted to high-Schmidt flows in order to apply effective accelerator implementation strategies, and proposes an open-source reference implementation named HySoP.
The proposed implementation makes it possible, among other things, to simulate flows governed by the incompressible Navier-Stokes equations entirely on an accelerator or coprocessor thanks to the OpenCL standard, and tends towards optimal performance independently of the hardware. The numerical method and its implementation are first validated on several classical test cases and then applied to the dynamics of sediment flows, which involve a two-way coupling between the transported scalars and the Navier-Stokes equations. We show that the joint use of adapted numerical methods and their accelerator implementation makes it possible to describe accurately, at a very reasonable cost, sediment transport for Schmidt numbers that are difficult to reach with other methods.
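A back-of-the-envelope sketch of why high Schmidt numbers are hard, using the standard Batchelor-scale estimate eta_B = eta_K / sqrt(Sc) for Sc > 1 (the numbers below are illustrative, not taken from the thesis):

```python
import math

def grid_points_per_dimension(domain, eta_k, schmidt):
    """Grid needed per dimension to resolve the velocity field (down to the
    Kolmogorov scale eta_k) versus the scalar field (down to the Batchelor
    scale eta_k / sqrt(Sc))."""
    eta_b = eta_k / math.sqrt(schmidt)
    return math.ceil(domain / eta_k), math.ceil(domain / eta_b)

for sc in (1, 100, 10000):
    n_vel, n_scal = grid_points_per_dimension(1.0, 1e-3, sc)
    print(f"Sc={sc:>5}: velocity grid {n_vel}^3, scalar grid {n_scal}^3")
```

The scalar grid exceeds the velocity grid by a factor of Sc^(3/2) in 3D, which is what motivates treating the scalar with a dedicated (particle/remeshed) method on the accelerator instead of refining the whole flow solver.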
6

Angus, Christopher Michael. "Large scale numerical software development using functional languages." Thesis, University of Newcastle Upon Tyne, 1998. http://hdl.handle.net/10443/2136.

Full text
Abstract:
Functional programming languages such as Haskell allow numerical algorithms to be expressed in a concise, machine-independent manner that closely reflects the underlying mathematical notation in which the algorithm is described. Unfortunately the price paid for this level of abstraction is usually a considerable increase in execution time and space usage. This thesis presents a three-part study of the use of modern purely functional languages to develop numerical software. In Part I the appropriateness and usefulness of language features such as polymorphism, pattern matching, type-class overloading and non-strict semantics are discussed, together with the limitations they impose. Quantitative statistics concerning the manner in which these features are used in practice are also presented. In Part II the information gathered from Part I is used to design and implement FSC, an experimental functional language tailored to numerical computing, motivated as much by pragmatic as by theoretical issues. This language is then used to develop numerical software, and its suitability is assessed by benchmarking it against C/C++ and Haskell under various metrics. In Part III the work is summarised and assessed.
7

Snodgrass, Joshua D. "Low-power fault tolerance for spacecraft FPGA-based numerical computing." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FSnodgrass%5FPhD.pdf.

Full text
Abstract:
Dissertation (Ph.D. in Electrical Engineering)--Naval Postgraduate School, September 2006. Dissertation Advisor(s): Herschel H. Loomis. Includes bibliographical references (p. 217-224). Also available in print.
8

Keshava, Iyer Kartik P. "Studies of turbulence structure and turbulent mixing using petascale computing." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52260.

Full text
Abstract:
A large direct numerical simulation database spanning a wide range of Reynolds and Schmidt numbers is used to examine fundamental laws governing passive scalar mixing and turbulence structure. Efficient parallel algorithms have been developed to calculate quantities useful in examining the Kolmogorov small-scale phenomenology. These new algorithms are used to analyze data sets with Taylor-scale Reynolds numbers as high as 650, with grid spacing as small as the Kolmogorov length scale. Direct numerical simulation codes using pseudo-spectral methods typically use transpose-based three-dimensional (3D) Fast Fourier Transforms (FFTs). The ALLTOALL-type routines used to perform global transposes have a quadratic dependence on message size and typically show limited scaling at very large problem sizes. A hybrid MPI/OpenMP 3D FFT kernel has been developed that divides the work among the threads and schedules them in a pipelined fashion. All threads perform the communication, although not concurrently, with the aim of minimizing thread-idling time and increasing the overlap between communication and computation. The new algorithm is seen to reduce the communication time by as much as 30% at large core counts, compared to pure-MPI communication. Turbulent mixing is important in a wide range of fields, from combustion to cosmology, in which Schmidt numbers range from O(1) to O(0.01). The Schmidt number dependence of the second-order scalar structure function and the applicability of the so-called Yaglom's relation are examined in isotropic turbulence with a uniform mean scalar gradient. At the moderate Reynolds numbers currently achievable, the dynamics of strongly diffusive scalars is inherently different from that of moderately diffusive scalars. Results at a Schmidt number as low as 1/2048 show that the range of scales in the scalar field becomes quite narrow, with the distribution of the small scales approaching a Gaussian shape.
A much weaker alignment between velocity gradients and principal strain rates and a strong departure from Yaglom's relation have also been observed. Evaluation of the different terms in the scalar structure function budget equation, assuming statistical stationarity in time, shows that with decreasing Schmidt number the production and diffusion terms dominate at the intermediate scales, possibly leading to non-universal behavior in the low-to-moderate Peclet number regime considered in this study. One of the few exact, non-trivial results in hydrodynamic theory is the so-called Kolmogorov 4/5th law. Agreement of the third-order longitudinal structure function with the 4/5 plateau is used to measure the extent of the inertial range, both in experiments and in simulations. Direct numerical simulation techniques to obtain the third-order structure functions typically use component averaging combined with time averaging over multiple eddy-turnover times. However, anisotropic large-scale effects tend to limit the inertial range, with significant variance in the components of the structure functions in the intermediate scale ranges along the Cartesian directions. The net result is that the asymptotic 4/5 plateau is not attained. Motivated by recent theoretical developments, we present an efficient parallel algorithm to compute spherical averages in a periodic domain. The spherically averaged third-order structure function is shown to attain the K41 plateau in a time-local fashion, which decreases the need to run direct numerical simulations for multiple eddy-turnover times. It is well known that the intermittent character of the energy dissipation rate leads to discrepancies between experiments and theory in calculating higher-order moments of velocity increments. As a correction, the use of three-dimensional local averages has been proposed in the literature; Kolmogorov used the locally 3D-averaged dissipation rate to propose a refined similarity theory.
An algorithm to calculate 3D local averages has been developed which is shown to scale well up to 32k cores. The algorithm computes local averages over overlapping regions in space for a range of separation distances, resulting in N^3 samples of the locally averaged dissipation for each averaging length. In light of this new calculation, the refined similarity theory of Kolmogorov is examined using the 3D local averages at high Reynolds number and/or high resolution.
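The overlapping local averages described here can be illustrated in one dimension: for each grid point, average the field over a window centred there with periodic wrap-around, so an N-point field yields N samples of the locally averaged field for every averaging scale (the thesis does this in 3D, yielding N^3 samples per scale). A minimal sketch on a synthetic, positively skewed stand-in for the dissipation field:

```python
import random

def local_averages(field, half_width):
    """Periodic overlapping local means of width 2*half_width + 1:
    one sample per grid point, so len(output) == len(field)."""
    n = len(field)
    w = 2 * half_width + 1
    return [sum(field[(i + k) % n] for k in range(-half_width, half_width + 1)) / w
            for i in range(n)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
eps = [rng.expovariate(1.0) for _ in range(256)]   # intermittent-looking surrogate
avg = local_averages(eps, 8)
print("variance raw %.3f -> locally averaged %.3f" % (variance(eps), variance(avg)))
```

Local averaging preserves the mean exactly (every element appears in exactly w windows) while damping the extreme fluctuations, which is the smoothing at the heart of the refined similarity hypothesis.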
9

Ljungberg, Kajsa. "Numerical Algorithms for Mapping of Multiple Quantitative Trait Loci in Experimental Populations." Doctoral thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6248.

Full text
Abstract:
Most traits of medical or economic importance are quantitative, i.e. they can be measured on a continuous scale. Strong biological evidence indicates that quantitative traits are governed by a complex interplay between the environment and multiple quantitative trait loci, QTL, in the genome. Nonlinear interactions make it necessary to search for several QTL simultaneously. This thesis concerns numerical methods for QTL search in experimental populations. The core computational problem of a statistical analysis of such a population is a multidimensional global optimization problem with many local optima. Simultaneous search for d QTL involves solving a d-dimensional problem, where each evaluation of the objective function involves solving one or several least squares problems with special structure. Using standard software, even a two-dimensional search is costly, and searches in higher dimensions are prohibitively slow. Three efficient algorithms for evaluating the most common forms of the objective function are presented. The computing time for the linear regression method is reduced by up to one order of magnitude on real data examples by using a new scheme based on updated QR factorizations. Secondly, the objective function for the interval mapping method is evaluated using an updating technique and an efficient iterative method, which results in a 50 percent reduction in computing time. Finally, a third algorithm, applicable to the imputation and weighted linear mixture model methods, is presented. It reduces the computing time by between one and two orders of magnitude. The global search problem is also investigated. Standard software techniques for finding the global optimum of the objective function are compared with a new approach based on the DIRECT algorithm. The new method is more accurate than the previously fastest scheme and locates the optimum in 1-2 orders of magnitude less time.
The method is further developed by coupling DIRECT to a local optimization algorithm for accelerated convergence, leading to additional time savings of up to eight times. A parallel grid computing implementation of exhaustive search is also presented, and is suitable e.g for verifying global optima when developing efficient optimization algorithms tailored for the QTL mapping problem. Using the algorithms presented in this thesis, simultaneous search for at least six QTL can be performed routinely. The decrease in overall computing time is several orders of magnitude. The results imply that computations which were earlier considered impossible are no longer difficult, and that genetic researchers thus are free to focus on model selection and other central genetical issues.
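The core computation described above, one small structured least-squares problem per point of a multidimensional grid, can be illustrated with a brute-force two-dimensional scan. This is a minimal sketch on invented simulated data, not the thesis's updated-QR or DIRECT algorithms; all loci, effect sizes, and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cross: 0/1 genotype indicators at 20 marker positions
# for 200 individuals (all data here is invented for illustration).
n_ind, n_loci = 200, 20
G = rng.integers(0, 2, size=(n_ind, n_loci)).astype(float)

# Phenotype controlled by two QTL (at loci 3 and 12) plus noise.
y = 1.0 + 2.0 * G[:, 3] + 1.5 * G[:, 12] + 0.1 * rng.standard_normal(n_ind)

def rss(loci):
    """Residual sum of squares of the linear regression model for given QTL."""
    X = np.column_stack([np.ones(n_ind)] + [G[:, l] for l in loci])
    _, res, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return res[0]

# Exhaustive 2D scan: one least-squares problem per pair of loci.
best = min(((i, j) for i in range(n_loci) for j in range(i + 1, n_loci)),
           key=rss)
```

With strong planted effects, the pair minimizing the residual sum of squares is the true one; the algorithms in the thesis accelerate exactly this kind of repeated least-squares evaluation.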
APA, Harvard, Vancouver, ISO, and other styles
10

Kumar, Ravi R. "NUMERICAL INVESTIGATION AND PARALLEL COMPUTING FOR THERMAL TRANSPORT MECHANISM DURING NANOMACHINING." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/425.

Full text
Abstract:
Nano-scale machining, or nanomachining, is a hybrid process in which the total thermal energy necessary to remove atoms from a work-piece surface is applied from external sources. In the current study, this energy is applied from two sources: (1) localized energy from a laser beam focused to a micron-scale spot to preheat the work-piece, and (2) a high-precision electron beam emitted from the tips of carbon nanotubes to remove material via evaporation/sublimation. Macro-to-nano scale heat transfer models are discussed to assess their capability to capture and predict the transient heat transfer mechanism required for nanomachining. The thermal transport mechanism during nano-scale machining involves both phonons (lattice vibrations) and electrons; it is modeled using a parabolic two-step (PTS) model, which accounts for the time lag between these energy carriers. A numerical algorithm is developed for the solution of the PTS model based on explicit and implicit finite-difference methods. Since numerical simulation of nanomachining involves high computational cost in terms of wall clock time, a performance comparison over a wide range of numerical techniques has been carried out to devise an efficient solution procedure. Gauss-Seidel (GS), successive over-relaxation (SOR), conjugate gradient (CG), delta-form Douglas-Gunn time splitting, and other methods have been used to compare the computational cost involved. Use of Douglas-Gunn time splitting in the solution of 3D time-dependent heat transport equations appears to be optimal, especially as the problem size (number of spatial grid points and/or required number of time steps) becomes large. Parallel computing is implemented to further reduce the wall clock time required for the complete simulation of the nanomachining process. 
Domain decomposition with inter-processor communication using Message Passing Interface (MPI) libraries is adopted for parallel computing. Performance tuning has been implemented for efficient parallelization by overlapping communication with computation. Numerical solutions for a laser source and an electron-beam source with different Gaussian distributions are presented. Performance of the parallel code is tested on four distinct computer cluster architectures. Results obtained for the laser source agree well with available experimental data in the literature. The results for the electron-beam source are self-consistent; nevertheless, they need to be validated experimentally.
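A one-dimensional explicit finite-difference sketch of a parabolic two-step (electron-lattice) model conveys the structure of such a solver. All coefficients, the grid, and the Gaussian source below are illustrative placeholders, not material data or settings from the thesis:

```python
import numpy as np

# 1D parabolic two-step (electron-lattice) model, explicit finite differences.
nx, dx = 101, 1e-8                     # grid points, spacing [m]
Ce, Cl = 2.1e4, 2.5e6                  # electron / lattice heat capacity [J m^-3 K^-1]
ke = 315.0                             # electron thermal conductivity [W m^-1 K^-1]
Gc = 2.6e16                            # electron-phonon coupling [W m^-3 K^-1]
dt = 0.2 * Ce * dx**2 / ke             # step satisfying the explicit diffusion limit

Te = np.full(nx, 300.0)                # electron temperature [K]
Tl = np.full(nx, 300.0)                # lattice temperature [K]
x = np.arange(nx) * dx
S = 1e21 * np.exp(-(((x - x[nx // 2]) / (10 * dx)) ** 2))  # Gaussian beam [W m^-3]

for step in range(500):
    lap = np.zeros(nx)
    lap[1:-1] = (Te[2:] - 2 * Te[1:-1] + Te[:-2]) / dx**2
    src = S if step < 250 else 0.0     # heating pulse on for the first half
    Te_new = Te + dt / Ce * (ke * lap - Gc * (Te - Tl) + src)
    Tl_new = Tl + dt / Cl * (Gc * (Te - Tl))
    Te, Tl = Te_new, Tl_new
    Te[0], Te[-1] = Te[1], Te[-2]      # insulated ends
```

The coupling term transfers energy from the hot electron gas to the lattice, so the electron temperature leads the lattice temperature during heating, which is the time lag the PTS model is designed to capture.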
APA, Harvard, Vancouver, ISO, and other styles
11

Wang, Xiaofei. "1D modeling of blood flow in networks : numerical computing and applications." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066626/document.

Full text
Abstract:
The vascular network consists of a very large number of segments with various properties, and thus the pulsatile blood flow inside is very complicated.
With the time-domain-based nonlinear 1D model, this thesis studies blood flow in networks, focusing on numerical computing and several applications. Under the assumptions of long wavelength and an axisymmetric velocity profile, the 1D governing equations of mass and momentum are derived by integrating the continuity and Navier-Stokes equations along the radius. A Kelvin-Voigt viscoelastic model is adopted for the constitutive equation of the tube. This leads to a nonlinear hyperbolic-parabolic system, which is then solved with four numerical schemes, namely: MacCormack, Taylor-Galerkin, Monotonic Upwind Scheme for Conservation Laws (MUSCL) and local discontinuous Galerkin. The schemes are implemented in MATLAB and the numerical solutions are checked favorably against analytical and semi-analytical solutions and clinical observations. Among the numerical schemes, comparisons are made in four important aspects: accuracy, ability to capture shock-like phenomena, computational speed and implementation complexity. The suitable conditions for the application of each scheme are discussed. After this, a general-purpose C++ code is developed and tested on several networks: a circle of arteries, a human systemic network with 55 arteries and a mouse kidney with more than one thousand segments. The time-dependent distribution of pressure in the networks is visualized and the propagation patterns of the waves are well captured. Good speedup is achieved by parallelization of the code. The developed code is applied in three studies. First, the coefficients of fluid friction and wall viscosity are determined with the aid of a well-defined experimental setup. Because both factors damp the pulse waves, they are difficult to evaluate separately. We estimate them in pairs by fitting the 1D viscoelastic model against pressure waves measured on the experimental setup. 
The fitted values of the viscoelastic parameters are consistent with values estimated with other methods. The effect of wall viscosity on the pulse wave is shown to be of the same order as that of fluid viscosity. Second, with time series of pressure and diameter measured at several locations of a sheep arterial network, the viscoelasticity parameters are estimated. With those values, the pulse waves in the sheep network are simulated and the effect of viscoelasticity is investigated. Numerical solutions show that viscoelasticity significantly damps the high-frequency components of the pulse waves. Third, we simulate the change of blood flow induced by axillofemoral and femoral-femoral anastomoses with a severe iliac stenosis. The influence of the bypass path is studied.
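As a minimal illustration of one of the schemes compared above, here is MacCormack's predictor-corrector for a scalar 1D conservation law; the inviscid Burgers equation stands in for the full (A, Q) blood-flow system, and the grid, CFL number, and initial data are invented:

```python
import numpy as np

# MacCormack predictor-corrector for u_t + f(u)_x = 0 with f(u) = u^2 / 2.
nx = 200
dx = 2 * np.pi / nx
x = np.arange(nx) * dx
u = 1.5 + 0.5 * np.sin(x)                    # smooth periodic initial data
f = lambda v: 0.5 * v**2

t, t_end = 0.0, 0.3
while t < t_end:
    dt = min(0.4 * dx / np.max(np.abs(u)), t_end - t)   # CFL-limited step
    # Predictor: forward difference of the flux.
    up = u - dt / dx * (f(np.roll(u, -1)) - f(u))
    # Corrector: backward difference on the predicted state, then average.
    u = 0.5 * (u + up - dt / dx * (f(up) - f(np.roll(up, 1))))
    t += dt
```

With periodic boundaries the scheme is conservative, so the mean of u is preserved to roundoff; that kind of property check underlies the scheme comparisons mentioned in the abstract.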
APA, Harvard, Vancouver, ISO, and other styles
12

Townsend, Alex. "Computing with functions in two dimensions." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:02f92917-809e-477d-8413-417bb8106e56.

Full text
Abstract:
New numerical methods are proposed for computing with smooth scalar and vector valued functions of two variables defined on rectangular domains. Functions are approximated to essentially machine precision by an iterative variant of Gaussian elimination that constructs near-optimal low rank approximations. Operations such as integration, differentiation, and function evaluation are particularly efficient. Explicit convergence rates are shown for the singular values of differentiable and separately analytic functions, and examples are given to demonstrate some paradoxical features of low rank approximation theory. Analogues of QR, LU, and Cholesky factorizations are introduced for matrices that are continuous in one or both directions, deriving a continuous linear algebra. New notions of triangular structures are proposed and the convergence of the infinite series associated with these factorizations is proved under certain smoothness assumptions. A robust numerical bivariate rootfinder is developed for computing the common zeros of two smooth functions via a resultant method. Using several specialized techniques the algorithm can accurately find the simple common zeros of two functions with polynomial approximants of high degree (&geq; 1,000). Lastly, low rank ideas are extended to linear partial differential equations (PDEs) with variable coefficients defined on rectangles. When these ideas are used in conjunction with a new one-dimensional spectral method the resulting solver is spectrally accurate and efficient, requiring O(n<sup>2</sup>) operations for rank 1 partial differential operators, O(n<sup>3</sup>) for rank 2, and O(n<sup>4</sup>) for rank &geq; 3 to compute an n x n matrix of bivariate Chebyshev expansion coefficients for the PDE solution. The algorithms in this thesis are realized in a software package called Chebfun2, which is an integrated two-dimensional component of Chebfun.
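The iterative Gaussian-elimination construction of a low-rank approximant can be sketched on a sampled function. This toy version works on a fixed grid of samples (Chebfun2 operates on functions adaptively), and the test function, grid size, and tolerance are invented:

```python
import numpy as np

n = 64
x = np.cos(np.pi * np.arange(n) / (n - 1))        # Chebyshev points on [-1, 1]
X, Y = np.meshgrid(x, x)
F = np.cos(X * Y)                                  # smooth (entire) test function

# Gaussian elimination with complete pivoting: each step subtracts the
# rank-1 matrix defined by the pivot row and column of the residual.
R = F.copy()
cols, rows, pivots = [], [], []
while np.max(np.abs(R)) > 1e-13:
    i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
    cols.append(R[:, j].copy())
    rows.append(R[i, :].copy())
    pivots.append(R[i, j])
    R = R - np.outer(cols[-1], rows[-1]) / pivots[-1]

rank = len(pivots)                                 # numerical rank found
approx = sum(np.outer(c, r) / p for c, r, p in zip(cols, rows, pivots))
err = np.max(np.abs(F - approx))
```

For a smooth function the residual decays rapidly, so only a handful of rank-1 terms are needed for essentially machine precision, mirroring the singular-value decay results described in the abstract.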
APA, Harvard, Vancouver, ISO, and other styles
13

Nguyen, Hong Diep. "Efficient algorithms for verified scientific computing : Numerical linear algebra using interval arithmetic." PhD thesis, École normale supérieure de Lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00680352.

Full text
Abstract:
Interval arithmetic is a means to compute verified results. However, a naive use of interval arithmetic does not provide accurate enclosures of the exact results. Moreover, interval arithmetic computations can be time-consuming. We propose several accurate algorithms and efficient implementations in verified linear algebra using interval arithmetic. Two fundamental problems are addressed, namely the multiplication of interval matrices and the verification of a floating-point solution of a linear system. For the first problem, we propose two algorithms which offer new tradeoffs between speed and accuracy. For the second problem, our main contributions are twofold. First, we introduce a relaxation technique, which drastically reduces the execution time of the algorithm. Second, we propose to use extended precision for a few, well-chosen parts of the computations, to gain accuracy without losing much in terms of execution time.
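The midpoint-radius representation commonly used for fast interval matrix products can be sketched as follows. Note that a genuinely verified implementation, as studied in the thesis, must also enclose floating-point roundoff (e.g. by switching rounding modes); that step is deliberately omitted in this illustrative sketch:

```python
import numpy as np

# Midpoint-radius interval matrix product (Rump-style):
#   mid(C) = mA @ mB,  rad(C) = |mA| @ rB + rA @ (|mB| + rB).
def interval_matmul(mA, rA, mB, rB):
    mC = mA @ mB
    rC = np.abs(mA) @ rB + rA @ (np.abs(mB) + rB)
    return mC, rC

rng = np.random.default_rng(1)
mA, mB = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
rA = np.full((3, 3), 1e-6)           # componentwise radii of [A]
rB = np.full((3, 3), 1e-6)           # componentwise radii of [B]
mC, rC = interval_matmul(mA, rA, mB, rB)

# Containment check: the product of any point matrices drawn inside
# [A], [B] must lie inside the computed enclosure (up to roundoff).
A = mA + (2 * rng.random((3, 3)) - 1) * rA
B = mB + (2 * rng.random((3, 3)) - 1) * rB
inside = bool(np.all(np.abs(A @ B - mC) <= rC + 1e-12))
```

The appeal of this representation is that the enclosure is computed with ordinary (fast, BLAS-friendly) matrix products, which is exactly the kind of speed/accuracy tradeoff the abstract refers to.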
APA, Harvard, Vancouver, ISO, and other styles
14

Kozlowski, Andrew James. "Computing numerical solutions to the electromagnetic two-dimensional scalar inverse scattering problem." Thesis, University of Ottawa (Canada), 1988. http://hdl.handle.net/10393/5267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Vincent, Jonathan. "The role of domain decomposition in the parallelisation of genetic search for multi-modal numerical optimisation." Thesis, Southampton Solent University, 2001. http://ssudl.solent.ac.uk/1203/.

Full text
Abstract:
This thesis is concerned with the optimisation of multi-modal numerical problems using genetic algorithms. Genetic algorithms are an established technique, inspired by principles of genetics and evolution, and have been successfully utilised in a wide range of applications. However, they are computationally intensive and consequently, addressing problems of increasing size and complexity has led to research into parallelisation. This thesis is concerned with coarse-grained parallelism because of the growing importance of cluster computing. Current parallel genetic algorithm technology offers one coarse-grained approach, usually referred to as the island model. Parallelisation is concerned with the division of a computational system into components which can be executed concurrently on multiple processing nodes. It can be based on a decomposition of either the process or the domain on which it operates. The island model is a process-based approach, which divides the genetic algorithm population into a number of co-operating sub-populations. This research examines an alternative approach based on domain decomposition: the search space is divided into a number of regions which are separately optimised. The aims of this research are to determine whether domain decomposition is useful in terms of search performance, and whether it is feasible when there is no a priori knowledge of the search space. It is established empirically that domain decomposition offers a more robust sampling of the search space. It is further shown that the approach is beneficial when there is an element of deception in the problem. However, domain decomposition is non-trivial when the domain is irregular. The irregularities of the search space result in a computational load imbalance which would reduce the efficiency of a parallel implementation. 
To address this, a dynamic load-balancing algorithm is developed which adjusts the decomposition of the search space at run time, according to the fitness distribution. Using this algorithm, it is shown that domain decomposition is feasible, and offers significant search advantages in the field of multi-modal numerical optimisation. The performance is compared with a serial implementation and an island model parallel implementation on a number of non-trivial problems. It is concluded that the domain decomposition approach offers superior performance on these problems in terms of rapid convergence and final solution quality. Approaches to the extension and generalisation of the approach are suggested for further work.
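The domain-decomposition idea, a separate genetic search per region of the search space, can be sketched with a toy real-coded GA on a one-dimensional multi-modal (Rastrigin-type) function. The function, region count, and GA settings are invented for illustration and are not the thesis's benchmarks:

```python
import math
import random

# Multi-modal test function: global minimum f(0) = 0, many local optima.
def f(x):
    return x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)

def ga(lo, hi, pop_size=30, gens=60):
    """Tiny real-coded GA restricted to one subdomain [lo, hi]."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]                 # truncation selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            c = 0.5 * (a + b) + random.gauss(0.0, 0.1 * (hi - lo))
            children.append(min(max(c, lo), hi))     # keep child in its region
        pop = elite + children
    return min(pop, key=f)

random.seed(42)
# Domain decomposition: split [-10, 10] into 8 regions, search each separately.
edges = [-10.0 + 2.5 * i for i in range(9)]
best = min((ga(edges[i], edges[i + 1]) for i in range(8)), key=f)
```

Each region's search is independent, so in a parallel setting each region maps naturally onto one processing node, which is where the load-imbalance issue discussed above arises when regions differ in difficulty.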
APA, Harvard, Vancouver, ISO, and other styles
16

Bharthipudi, Saraswati. "Comparison of numerical result checking mechanisms for FFT computations under faults." Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-12172003-171912/unrestricted/Saraswati%5FBharthipudi%5F2002%5F05.pdf.

Full text
Abstract:
Thesis (M.S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2004. Dr. Feodor Vainstein, Committee Member; Dr. Doug Blough, Committee Chair; Dr. David Schimmel, Committee Member. Includes bibliographical references (leaves 71-75).
APA, Harvard, Vancouver, ISO, and other styles
17

Muntean, Ioan Lucian. "Efficient distributed numerical simulation on the grid." München Verl. Dr. Hut, 2008. http://d-nb.info/992163145/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Yannios, Nicholas. "Computational aspects of the numerical solution of SDEs." Deakin University. School of Computing and Mathematics, 2001. http://tux.lib.deakin.edu.au./adt-VDU/public/adt-VDU20060817.123449.

Full text
Abstract:
In the last 30 to 40 years, many researchers have combined to build the knowledge base of theory and solution techniques that can be applied to the case of differential equations which include the effects of noise. This class of "noisy" differential equations is now known as stochastic differential equations (SDEs). Markov diffusion processes are included within the field of SDEs through the drift and diffusion components of the Itô form of an SDE. When these drift and diffusion components are moderately smooth functions, then the processes' transition probability densities satisfy the Fokker-Planck-Kolmogorov (FPK) equation, a deterministic partial differential equation (PDE). Thus there is a mathematical inter-relationship that allows solutions of SDEs to be determined from the solution of a noise-free differential equation which has been extensively studied since the 1920s. The main numerical solution technique employed to solve the FPK equation is the classical Finite Element Method (FEM). The FEM is of particular importance to engineers when used to solve FPK systems that describe noisy oscillators. The FEM is a powerful tool but is limited in that it is cumbersome when applied to multidimensional systems and can lead to large and complex matrix systems with their inherent solution and storage problems. I show in this thesis that the stochastic Taylor series (TS) based time discretisation approach to the solution of SDEs is an efficient and accurate technique that provides transition and steady state solutions to the associated FPK equation. The TS approach to the solution of SDEs has certain advantages over the classical techniques. These advantages include their ability to effectively tackle stiff systems, their simplicity of derivation and their ease of implementation and re-use. 
Unlike the FEM approach, which is difficult to apply in even only two dimensions, the simplicity of the TS approach is independent of the dimension of the system under investigation. Their main disadvantage, that of requiring a large number of simulations and the associated CPU requirements, is countered by their underlying structure, which makes them perfectly suited for use on the now prevalent parallel or distributed processing systems. In summary, I will compare the TS solution of SDEs to the solution of the associated FPK equations using the classical FEM technique. One-, two- and three-dimensional FPK systems that describe noisy oscillators have been chosen for the analysis. As higher-dimensional FPK systems are rarely mentioned in the literature, the TS approach will be extended to essentially infinite-dimensional systems through the solution of stochastic PDEs. In making these comparisons, the advantages of modern computing tools such as computer algebra systems and simulation software, when used as an adjunct to the solution of SDEs or their associated FPK equations, are demonstrated.
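The link between an SDE's sample paths and the stationary FPK solution can be illustrated with the simplest stochastic Taylor scheme, Euler-Maruyama, applied to an Ornstein-Uhlenbeck process, whose stationary FPK density is a Gaussian with variance sigma^2/(2*theta). All parameters below are invented:

```python
import numpy as np

# Euler-Maruyama for the Ornstein-Uhlenbeck SDE  dX = -theta*X dt + sigma dW.
rng = np.random.default_rng(3)
theta, sigma = 1.0, 0.5
dt, n_steps, n_paths = 1e-2, 2_000, 5_000     # total time T = 20 >> 1/theta

x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Empirical stationary variance vs. the analytic FPK value.
var_emp = x.var()
var_fpk = sigma**2 / (2.0 * theta)            # = 0.125
```

Running many independent paths is embarrassingly parallel, which is the structural advantage of the TS approach over the FEM noted in the abstract.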
APA, Harvard, Vancouver, ISO, and other styles
19

Motamed, Mohammad. "Phase space methods for computing creeping rays." Licentiate thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ihshaish, Hisham W. Y. "Genetic Ensemble (G-Ensemble): An Evolutionary Computing Technique for Numerical Weather Prediction Enhancement." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/117612.

Full text
Abstract:
The main goal of the presented work is to tackle the problem of accuracy and waiting time in weather forecasting, which is normally conducted by computational applications known as Numerical Weather Prediction (NWP) models. These models have been strongly developed in the last decades and their performance constantly increases with the advances in computational power. However, in practice, the scientific community is still devoting considerable effort to reducing what is widely known as 'weather limited predictability'.
Mainly, the two major challenges are the willingness to get more reliable weather predictions, and to do it faster. As in many other areas of environmental modeling, most simulation software works with well-founded and widely accepted models. Hence, the need for input parameter optimization to improve model output is a long-known and often-tackled problem, particularly in environments where correct and timely input parameters cannot be provided. Efficient computational parameter estimation and optimization strategies are required to minimize the deviation between the predicted scenario and the real phenomenon behaviour. Based on the above, this thesis intends to: 1. Provide a sensitivity study of the effect of NWP model input parameters on prediction quality. 2. Propose a valid framework, which allows searching for the most 'optimal' values of model input parameters which, in our hypothesis, will provide better prediction quality. 3. Reduce the waiting time needed to get more reliable weather predictions. To accomplish these objectives, a new weather prediction scheme is introduced. This new scheme implements an evolutionary computing algorithm, which focuses on the calibration of input parameters in NWP models. The presented scheme is called Genetic Ensemble, and is composed of two phases: a calibration phase and a prediction phase. In the calibration phase, the presented approach applies Genetic Algorithm operators iteratively in order to find the 'best' values of NWP model input parameters, which will then be used in the subsequent prediction phase. Several variants of the Genetic Ensemble have been developed; it is extended to calibrate more than one level of input parameters, and also to evaluate their values using different strategies. 
On the other hand, the proposed scheme is parallelized using a Master/Worker programming paradigm, and is suitable to be executed on high performance computing (HPC) platforms, reducing the total execution time. The presented scheme has been evaluated by running weather prediction experiments on a well-known weather catastrophe, Hurricane Katrina (2005). The obtained results showed both a significant improvement in weather prediction quality and a considerable reduction in the overall execution time.
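The two-phase idea, a genetic calibration of model input parameters followed by prediction with the fitted values, can be sketched with a toy model. The model, its parameters, the synthetic observations, and the GA settings below are all invented stand-ins for the thesis's NWP setup:

```python
import math
import random

random.seed(7)

# Toy "forecast model" with two tunable input parameters (a, b);
# the hidden truth uses a = 1.3, b = 0.4.
def model(a, b, t):
    return a * math.sin(t) + b * t

obs_t = [0.1 * i for i in range(50)]
obs = [model(1.3, 0.4, t) for t in obs_t]       # synthetic observations

def misfit(p):
    a, b = p
    return sum((model(a, b, t) - o) ** 2 for t, o in zip(obs_t, obs))

# Calibration phase: iterate selection / crossover / mutation on (a, b).
pop = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(40)]
for _ in range(80):
    pop.sort(key=misfit)
    elite = pop[:20]
    children = []
    for _ in range(20):
        p, q = random.sample(elite, 2)
        children.append((0.5 * (p[0] + q[0]) + random.gauss(0, 0.05),
                         0.5 * (p[1] + q[1]) + random.gauss(0, 0.05)))
    pop = elite + children
a_best, b_best = min(pop, key=misfit)
# Prediction phase would now run the model with (a_best, b_best).
```

Since each misfit evaluation is independent, a Master/Worker parallelization distributes the population evaluations over workers, which is how the scheme exploits HPC platforms.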
APA, Harvard, Vancouver, ISO, and other styles
21

Ortiz, Gual Fernando Enrique. "Novel reconfigurable computing architectures for embedded high performance signal processing and numerical applications." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 1.73 Mb., 102 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3221141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Briles, Scott D. "A numerical procedure for computing probability of detection for a wideband pulse receiver." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Egorova, Vera. "Finite Difference Methods for nonlinear American Option Pricing models: Numerical Analysis and Computing." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/68501.

Full text
Abstract:
The present PhD thesis is focused on the numerical analysis and computing of finite difference schemes for several relevant option pricing models that generalize the Black-Scholes model. A careful analysis of desirable properties of the numerical solutions of option pricing models, such as positivity, stability and consistency, is provided. In order to handle the free boundary that arises in American option pricing problems, various transformation techniques based on the front-fixing method are applied and studied. Special attention is paid to multi-asset option pricing, such as exchange and spread options. An appropriate transformation allows elimination of the cross-derivative term. Transformation techniques for partial differential equations to remove convection and reaction terms are studied in order to simplify the models and avoid possible stability problems. This thesis consists of six chapters. The first chapter is an introduction containing definitions of options and related terms and the derivation of the Black-Scholes equation, as well as general aspects of the theory of finite difference schemes, including preliminaries on numerical analysis. Chapter 2 is devoted to solving the linear Black-Scholes model for American put and call options. A Landau transformation and a new front-fixing transformation are applied to the free boundary value problem. This leads to a non-linear partial differential equation (PDE) in a fixed domain. Stable and consistent explicit numerical schemes are proposed, preserving positivity and monotonicity of the solution in accordance with the behaviour of the exact solution. The efficiency of the front-fixing method demonstrated in Chapter 2 motivated its application to more complicated nonlinear models. A new change of variables, resulting in a time-dependent boundary instead of a fixed one, is applied to nonlinear Black-Scholes models for American options, such as the Barles-Soner and Risk Adjusted Pricing models. 
Chapter 4 provides a new alternative approach to the American option pricing problem based on the rationality of the investor. An intensity function appears that, in the simplest case, reduces to the penalty approach; this approach accounts for possibly irrational exercise behaviour. In Section 4.2 the technique is applied to a regime-switching model, yielding a new model that incorporates both irrational exercise and several market states; the rationality-parameter approach, combined with a logarithmic transformation, allows an efficient numerical scheme to be built without applying the front-fixing method or the well-known Linear Complementarity Problem (LCP) formulation. Chapter 5 deals with multi-asset option pricing, where an appropriate transformation eliminates the cross-derivative term, avoiding computational drawbacks and possible stability issues. Concluding remarks are given in Chapter 6. All the considered models and numerical methods are accompanied by several examples and simulations. The convergence rate is computed, confirming the theoretical study of consistency. Stability conditions are tested by numerical examples, and results are compared with relevant methods from the literature, showing the efficiency of the proposed methods.<br>Egorova, V. (2016). Finite Difference Methods for nonlinear American Option Pricing models: Numerical Analysis and Computing [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/68501<br>TESIS<br>Premiado
APA, Harvard, Vancouver, ISO, and other styles
24

Hjerpe, Adam. "Computing Random Forests Variable Importance Measures (VIM) on Mixed Numerical and Categorical Data." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-185496.

Full text
Abstract:
The Random Forest model is commonly used as a predictor function and has proven useful in a variety of applications. Its popularity stems from the combination of high prediction accuracy, the ability to model high-dimensional complex data, and applicability under predictor correlations. This report investigates the random forest variable importance measure (VIM) as a means of ranking important variables. The robustness of the VIM under imputation of categorical noise, and its capability to differentiate informative predictors from non-informative variables, is investigated. The selection of variables may improve the robustness of the predictor, improve prediction accuracy, reduce computational time, and may serve as an exploratory data analysis tool. In addition, the partial dependency plot obtained from the random forest model is examined as a means of finding underlying relations in a non-linear simulation study.
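The permutation-importance idea behind the VIM is model-agnostic. A minimal numpy-only sketch (illustrated here with a toy deterministic classifier rather than an actual random forest; the data, model and all parameters are assumptions):

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """VIM as the mean drop in accuracy when one predictor column is shuffled."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break the column/target link
            drops.append(base - np.mean(predict(Xp) == y))
        imp[j] = np.mean(drops)
    return imp

# Toy data: the target depends only on the first two of four predictors.
rng = np.random.default_rng(42)
X = rng.standard_normal((2000, 4))
y = X[:, 0] + X[:, 1] > 0.0
model = lambda X: X[:, 0] + X[:, 1] > 0.0          # a perfect, fixed "predictor"
vim = permutation_importance(model, X, y)
```

For this toy model the two informative columns receive an importance of about 0.33, while the two pure-noise columns receive exactly zero, because the model ignores them entirely.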
APA, Harvard, Vancouver, ISO, and other styles
25

Dias, dos Santos Jose. "Implementation and comparison of numerical algorithms for the solution of linear systems using transputer networks." Thesis, University of Kent, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.256255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Karaismail, Ertan. "Numerical Simulation Of Radiating Flows." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606452/index.pdf.

Full text
Abstract:
Predictive accuracy of the previously developed coupled code for the solution of the time-dependent Navier-Stokes equations in conjunction with the radiative transfer equation was first assessed by applying it to the prediction of thermally radiating, hydrodynamically developed laminar pipe flow for which the numerical solution had been reported in the literature. The effect of radiation on flow and temperature fields was demonstrated for different values of conduction to radiation ratio. It was found that the steady-state temperature predictions of the code agree well with the benchmark solution. In an attempt to test the predictive accuracy of the coupled code for turbulent radiating flows, it was applied to fully developed turbulent flow of a hot gas through a relatively cold pipe and the results were compared with the numerical solution available in the literature. The code was found to mimic the reported steady-state temperature profiles well. Having validated the predictive accuracy of the coupled code for steady, laminar/turbulent, radiating pipe flows, the performance of the code for transient radiating flows was tested by applying it to a test problem involving laminar/turbulent flow of carbon dioxide through a circular pipe for the simulation of simultaneous hydrodynamic and thermal development. The transient solutions for temperature, velocity and radiative energy source term fields were found to demonstrate the physically expected trends. In order to improve the performance of the code, a parallel algorithm of the code was developed and tested against sequential code for speed up and efficiency. It was found that the same results are obtained with a reasonably high speed-up and efficiency.
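The influence of the conduction-to-radiation ratio can be conveyed with a far simpler model than the coupled code of the thesis: a 1-D, non-dimensional conduction equation with a radiative sink, marched to steady state with an explicit scheme (a toy sketch; the parameter values are assumptions):

```python
import numpy as np

def steady_temperature(n_rad, m=21, dt=1e-3, t_end=4.0):
    """1-D slab with T = 1 at both walls and a radiative loss n_rad*T^4 to cold
    surroundings; n_rad plays the role of an inverse conduction-to-radiation ratio."""
    x = np.linspace(0.0, 1.0, m)
    dx = x[1] - x[0]
    T = np.ones(m)
    for _ in range(int(t_end / dt)):               # explicit march to steady state
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt * (lap - n_rad * T[1:-1] ** 4)
        T[0] = T[-1] = 1.0                         # isothermal walls
    return T
```

Without radiation the slab stays uniform; with a strong radiative sink the core cools well below the wall temperature, mimicking the qualitative effect of a small conduction-to-radiation ratio.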
APA, Harvard, Vancouver, ISO, and other styles
27

Carreño, Emmanuell Diaz. "Migration and evaluation of a numerical weather prediction application in a cloud computing infrastructure." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/127446.

Full text
Abstract:
The usage of clusters and grids has benefited the High-Performance Computing (HPC) community for years. These kinds of systems have allowed scientists to use bigger datasets and to perform more intensive computations, helping them to achieve results in less time, but they have also increased the upfront costs associated with this area of science. As some e-Science projects are carried out in highly distributed network environments, or use immense data sets that sometimes require grid computing, they are good candidates for cloud computing initiatives. The Cloud Computing paradigm has emerged as a practical solution to perform large-scale scientific computing. The elasticity of the cloud and its pay-as-you-go model present an attractive opportunity for applications commonly executed on clusters or supercomputers. In this context, the user does not need to buy infrastructure; resources can be rented from a provider and used for a period of time. This thesis presents the challenges and solutions of migrating a numerical weather prediction (NWP) application to a cloud computing infrastructure. We performed the migration of this HPC application and evaluated its performance on a local cluster and in the cloud using different instance sizes. We analyzed the main characteristics of the application running in the cloud. The experiments demonstrate that, although processing and networking create a limiting factor, storing input and output datasets in the cloud presents an attractive option to share results and ease the deployment of a test-bed for a weather research platform. Results show that cloud infrastructure can be used as a viable HPC alternative for numerical weather prediction software.
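The cluster-versus-cloud comparison above rests on speed-up and parallel efficiency, which are straightforward to compute from wall-clock timings (a generic helper with hypothetical timings, not code or data from the thesis):

```python
def scaling_metrics(t_serial, timings):
    """timings: {worker_count: wall_clock_seconds} for the parallel runs.
    Speed-up is T1/Tp; efficiency is speed-up divided by the worker count."""
    return {p: {"speedup": t_serial / t,
                "efficiency": t_serial / (t * p)}
            for p, t in sorted(timings.items())}

# Hypothetical timings (seconds) for an NWP run.
metrics = scaling_metrics(3600.0, {4: 1000.0, 16: 300.0})
```

A speed-up below the worker count (efficiency below 1) is the expected signature of the processing and networking limits the experiments report.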
APA, Harvard, Vancouver, ISO, and other styles
28

Zemzemi, Imene. "High-performance computing and numerical simulation for laser wakefield acceleration with realistic laser profiles." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX111.

Full text
Abstract:
The advent of ultra-short high-intensity lasers has paved the way to new and promising, yet challenging, areas of research in laser-plasma interaction physics. The success of building petawatt femtosecond lasers offers a promising path for designing future particle accelerators and light sources. Achieving this goal intrinsically relies on the combination of experiments and numerical modeling. So far, Particle-In-Cell (PIC) codes have been the ultimate tool to accurately describe laser-plasma interaction, especially in the field of Laser WakeField Acceleration (LWFA). Nevertheless, the numerical modeling of laser-plasma accelerators in 3D can be very challenging due to its high computational cost. A useful approach to speed up such simulations consists of employing reduced numerical models, which simplify the problem while retaining high fidelity. Among these models, the Fourier decomposition of the fields in azimuthal modes in cylindrical geometry is particularly well suited for physical problems with close-to-cylindrical symmetry, which is the case in LWFA. During my Ph.D., I first implemented this method in the open-source code SMILEI with a Finite Difference Time Domain (FDTD) discretization scheme for the Maxwell solver. However, this kind of solver may suffer from numerical Cherenkov radiation (NCR). To mitigate this artifact, I also implemented the Maxwell solver in the Pseudo-Spectral Analytical Time Domain (PSATD) scheme, which offers better accuracy. This method is then employed to study the impact of realistic laser profiles from the Apollon facility on the quality of the accelerated electron beam. Its ability to correctly model the involved physical processes is investigated by determining the optimal number of modes and benchmarking its results against full 3D Cartesian simulations. It is shown that imperfections in the laser pulse lead to differences in the results compared to theoretical profiles, and that they degrade the performance of laser-plasma accelerators, especially in terms of the quantity of injected charge. These simulations, insightful for the LWFA experiments that will soon be held with the Apollon laser, put forward the importance of including measured laser profiles, and in particular the measured phase front, in the simulation to obtain reliable results.
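The azimuthal-mode reduction can be illustrated independently of any PIC machinery: sample a field on an (r, θ) grid, take an FFT along θ, and keep only the first few modes (a toy sketch; the test field and grid sizes are assumptions):

```python
import numpy as np

def azimuthal_modes(field_rtheta):
    """Complex mode amplitudes F_m(r) with F(r, theta) = sum_m F_m(r) e^{i m theta}."""
    return np.fft.fft(field_rtheta, axis=1) / field_rtheta.shape[1]

def reconstruct(modes, n_theta, keep):
    """Rebuild the field keeping only the modes with |m| < keep."""
    trunc = np.zeros_like(modes)
    trunc[:, :keep] = modes[:, :keep]
    if keep > 1:
        trunc[:, -(keep - 1):] = modes[:, -(keep - 1):]   # negative-m partners
    return np.real(np.fft.ifft(trunc * n_theta, axis=1))

n_r, n_theta = 32, 64
r = np.linspace(0.0, 1.0, n_r)[:, None]
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)[None, :]
# Nearly axisymmetric field: an m = 0 Gaussian plus a weak m = 1 perturbation.
field = np.exp(-8.0 * r**2) * (1.0 + 0.05 * np.cos(theta))
modes = azimuthal_modes(field)
approx = reconstruct(modes, n_theta, keep=2)
```

For a field containing only m = 0 and m = ±1 content, keeping two modes reproduces it to machine precision, which is the efficiency argument behind the azimuthal decomposition.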
APA, Harvard, Vancouver, ISO, and other styles
29

Tarhan, Tanil. "Numerical Simulation Of Laminar Reacting Flows." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605307/index.pdf.

Full text
Abstract:
Novel sequential and parallel computational fluid dynamics (CFD) codes based on the method of lines (MOL) approach were developed for the numerical simulation of multi-component reacting flows using detailed transport and thermodynamic models. Both codes were applied to the prediction of a confined axisymmetric laminar co-flowing methane-air diffusion flame for which experimental data were available in the literature. A flame-sheet model for infinite-rate chemistry and one-, two-, five- and ten-step reduced finite-rate reaction mechanisms were employed for the methane-air combustion sub-model. A second-order high-resolution total variation diminishing (TVD) scheme based on Lagrange interpolation polynomials was proposed in order to alleviate spurious oscillations encountered in the time evolution of flame propagation. Steady-state velocity, temperature and species profiles obtained by using the infinite- and finite-rate chemistry models were validated against experimental data and other numerical solutions, and were found to be in reasonably good agreement with measurements and numerical results. The proposed difference scheme produced accurate results without the spurious oscillations and numerical diffusion encountered in classical schemes and hence was found to be a successful scheme applicable to strongly convective flow problems with non-uniform grid resolution. The code was also found to be an efficient tool for the prediction and understanding of transient combustion systems. This study constitutes the initial steps in the development of an efficient numerical scheme for direct numerical simulation (DNS) of unsteady, turbulent, multi-dimensional combustion with complex chemistry.
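The TVD idea can be demonstrated on pure advection with first-order upwinding plus a minmod-limited second-order correction (a generic MUSCL sketch, not the Lagrange-polynomial scheme proposed in the thesis; the grid and CFL number are assumptions):

```python
import numpy as np

def minmod(a, b):
    """Limited slope: the smaller of the two one-sided differences, or zero at extrema."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_tvd(u0, c, n_steps):
    """u_t + a u_x = 0 on a periodic grid, a > 0; c = a*dt/dx must lie in (0, 1]."""
    u = u0.copy()
    for _ in range(n_steps):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        face = u + 0.5 * (1.0 - c) * slope         # reconstructed value at i+1/2
        u = u - c * (face - np.roll(face, 1))      # conservative flux-difference update
    return u

u0 = np.where((np.arange(100) > 20) & (np.arange(100) < 40), 1.0, 0.0)
u1 = advect_tvd(u0, c=0.5, n_steps=200)
```

Advecting a square wave this way keeps the total variation from growing and produces no overshoots, which is exactly the property the abstract invokes against spurious oscillations.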
APA, Harvard, Vancouver, ISO, and other styles
30

Macias, Diaz Jorge. "A Numerical Method for Computing Radially Symmetric Solutions of a Dissipative Nonlinear Modified Klein-Gordon Equation." ScholarWorks@UNO, 2004. http://scholarworks.uno.edu/td/167.

Full text
Abstract:
In this paper we develop a finite-difference scheme to approximate radially symmetric solutions of a dissipative nonlinear modified Klein-Gordon equation in an open sphere around the origin, with constant internal and external damping coefficients and a nonlinear term of the form G'(w) = w^p, with p an odd number greater than 1. We prove that our scheme is consistent of quadratic order, and provide a necessary condition for it to be stable of order n. Part of our study is devoted to the effects of internal and external damping.
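A toy explicit scheme in one space dimension with external damping and the nonlinearity G'(w) = w^p conveys the qualitative behaviour (this is not the radially symmetric scheme of the thesis; the domain, initial pulse and parameters are assumptions):

```python
import numpy as np

def damped_kg(p=3, gamma=0.5, L=20.0, m=200, dt=0.02, t_end=10.0):
    """w_tt + gamma*w_t = w_xx - w - w**p with homogeneous Dirichlet ends and a
    Gaussian initial pulse at rest; leapfrog-type explicit time stepping."""
    x = np.linspace(-L / 2, L / 2, m + 1)
    dx = x[1] - x[0]
    w_prev = 0.5 * np.exp(-x**2)
    w = w_prev.copy()                              # zero initial velocity
    a = 1.0 / dt**2 + gamma / (2.0 * dt)
    b = 1.0 / dt**2
    for _ in range(int(t_end / dt)):
        lap = np.zeros_like(w)
        lap[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
        rhs = lap - w - w**p + 2.0 * b * w - (b - gamma / (2.0 * dt)) * w_prev
        w_prev, w = w, rhs / a
        w[0] = w[-1] = 0.0
    return x, w_prev, w

def energy(x, w_prev, w, dt):
    """Discrete energy for p = 3: kinetic + gradient + potential (w^2/2 + w^4/4)."""
    dx = x[1] - x[0]
    wt = (w - w_prev) / dt
    wx = (w[1:] - w[:-1]) / dx
    return dx * (0.5 * np.sum(wt**2) + 0.5 * np.sum(wx**2)
                 + np.sum(0.5 * w**2 + 0.25 * w**4))
```

With positive damping the discrete energy should end below its initial value, mirroring the dissipative character the paper studies.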
APA, Harvard, Vancouver, ISO, and other styles
31

Zheng, Li. "Power distribution network modeling and microfluidic cooling for high-performance computing systems." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54449.

Full text
Abstract:
A silicon interposer platform with microfluidic cooling is proposed for high-performance computing systems. The key components and technologies for the proposed platform, including electrical and fluidic microbumps, microfluidic vias and heat sinks, and simultaneous flip-chip bonding of the electrical and fluidic microbumps, are developed and demonstrated. Fine-pitch electrical microbumps of 25 µm diameter and 50 µm pitch, fluidic vias of 100 µm diameter, and annular-shaped fluidic microbumps of 150 µm inner diameter and 210 µm outer diameter were fabricated and bonded. Electrical and fluidic tests were conducted to verify the bonding results. Moreover, the thermal and signaling benefits of the proposed platform were evaluated based on thermal measurements and simulations, and signaling simulations. Compared to the conventional air cooling, significant reductions in system temperature and thermal coupling are achieved with the proposed platform. Moreover, the signaling performance is improved due to the reduced temperature, especially for long interconnects on the silicon interposer. A numerical power distribution network (PDN) simulator is developed based on distributed circuit models for on-die power/ground grids, package- and board- level power/ground planes, and the finite difference method. The simulator enables power supply noise simulation, including IR-drop and simultaneous switching noise, for a full chip with multiple blocks of different power, decoupling capacitor, and power/ground pad densities. The distributed circuit model is further extended to include TSVs to enable simulations for 3D PDN. The integration of package- and board- level power/ground planes enables co-simulation of die-package-board PDN and exploration of new PDN configurations.
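For the DC IR-drop part, a distributed PDN model reduces to a resistive nodal equation G v = i. A small numpy sketch on an n x n on-die grid (all conductance and current values are assumptions, not data from the dissertation):

```python
import numpy as np

def ir_drop(n=8, g=1.0, g_pad=10.0, pad=(0, 0), i_load=0.01):
    """DC IR-drop of an n x n power grid: neighbouring nodes joined by conductance
    g, one supply pad tied to Vdd through g_pad, and a uniform load current drawn
    at every node. Solves the nodal system G*d = i for the drop d = Vdd - v."""
    N = n * n
    idx = lambda r, c: r * n + c
    G = np.zeros((N, N))
    for r in range(n):
        for c in range(n):
            k = idx(r, c)
            if r + 1 < n:                          # vertical link
                j = idx(r + 1, c)
                G[k, k] += g; G[j, j] += g; G[k, j] -= g; G[j, k] -= g
            if c + 1 < n:                          # horizontal link
                j = idx(r, c + 1)
                G[k, k] += g; G[j, j] += g; G[k, j] -= g; G[j, k] -= g
    G[idx(*pad), idx(*pad)] += g_pad               # pad conductance to the supply
    rhs = i_load * np.ones(N)                      # every node draws i_load
    return np.linalg.solve(G, rhs).reshape(n, n)

drop = ir_drop()
```

Kirchhoff's law forces all load current through the single pad, so the drop at the pad equals (total load)/g_pad exactly, and the worst drop appears at the corner farthest from the pad.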
APA, Harvard, Vancouver, ISO, and other styles
32

Patel, Meena. "Numerical study of non-linear spectroscopy and four-wave-mixing in two and multi-level atoms." Thesis, Cape Peninsula University of Technology, 2017. http://hdl.handle.net/20.500.11838/2623.

Full text
Abstract:
Thesis (MTech (Electrical Engineering))--Cape Peninsula University of Technology, 2018.<br>In this research, we undertake a numerical study of the interaction between laser beams and two-level as well as multi-level atoms. The main aim of this research is to obtain a deeper understanding of laser-atom interactions and non-linear processes such as optical four-wave mixing. This work will supplement experiments to be conducted by other members of the group, who are involved in generating entangled photons via four-wave mixing in cold rubidium atoms. We begin by performing a basic study of the interaction between laser beams and two-level atoms as an aid to gaining knowledge of numerical techniques, as well as an understanding of the physics behind light-atom interactions. We make use of a semi-classical approach to describe the system, where the atoms are treated quantum mechanically and the laser beams are treated classically. We study the interaction between atoms and laser beams using the density matrix operator and Maxwell's equations, respectively. By solving the optical Bloch equations for two-level atoms, we examine the atomic populations and coherences and present plots of the density matrix elements as a function of time. The effects of various parameters such as laser intensity, detuning and laser modulation have been tested. The behaviour of the laser beam as it propagates through the atomic sample is also studied. This is determined by Maxwell's equation, where the atomic polarization is estimated from the coherence terms of the density matrix elements.<br>French South African Institute of Technology; National Research Foundation
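A compact RK4 integration of the two-level optical Bloch equations in Bloch-vector form illustrates the kind of computation described (resonant case; the Rabi frequency and decay rate below are assumed toy values, not from the thesis):

```python
import numpy as np

def bloch_rhs(state, omega, gamma, delta=0.0):
    """Bloch vector (u, v, w) of a damped, driven two-level atom; w = rho_ee - rho_gg."""
    u, v, w = state
    return np.array([delta * v - 0.5 * gamma * u,
                     -delta * u + omega * w - 0.5 * gamma * v,
                     -omega * v - gamma * (w + 1.0)])

def evolve(omega=2.0, gamma=1.0, dt=0.001, t_end=20.0):
    """RK4 time integration starting from the ground state."""
    state = np.array([0.0, 0.0, -1.0])
    for _ in range(int(t_end / dt)):
        k1 = bloch_rhs(state, omega, gamma)
        k2 = bloch_rhs(state + 0.5 * dt * k1, omega, gamma)
        k3 = bloch_rhs(state + 0.5 * dt * k2, omega, gamma)
        k4 = bloch_rhs(state + dt * k3, omega, gamma)
        state = state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return state

u, v, w = evolve()
rho_ee = 0.5 * (1.0 + w)          # excited-state population
```

For this system the steady-state excited population is omega^2 / (gamma^2 + 2*omega^2), so the damped Rabi oscillations should settle near 4/9 for the values chosen above.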
APA, Harvard, Vancouver, ISO, and other styles
33

Bharthipudi, Saraswati. "Comparison of numerical result checking mechanisms for FFT computations under faults." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5126.

Full text
Abstract:
This thesis studies and compares existing numerical result checking algorithms for FFT computations under faults. In order to simulate faulty conditions, a fault injection tool is implemented. The fault injection tool is designed to be as non-intrusive to the application as possible. Faults are injected into memory in the form of bit flips in the data elements of the application. The performance of the three result checking algorithms under these conditions is studied and compared. Faults are injected at all stages of the FFT computation by flipping each of the 64 bits in the double-precision representation. Experiments also include introducing random bit flips in the data array, emulating a more real-life scenario. Finally, the performance of these algorithms under a set of worst-case scenarios is also studied.
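Both ingredients, a numerical result check and a single-bit memory fault, can be sketched in a few lines (a generic illustration of the mechanism studied, using a Parseval-sum check; not the thesis code):

```python
import struct
import numpy as np

def parseval_check(x, X, tol=1e-8):
    """Result check: time-domain energy must match frequency-domain energy."""
    lhs = np.sum(np.abs(x) ** 2)
    rhs = np.sum(np.abs(X) ** 2) / len(x)
    return abs(lhs - rhs) <= tol * max(lhs, 1.0)

def flip_bit(value, bit):
    """Flip one bit of a float64, emulating a memory fault."""
    packed = struct.unpack("<Q", struct.pack("<d", value))[0]
    return struct.unpack("<d", struct.pack("<Q", packed ^ (1 << bit)))[0]

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
X = np.fft.fft(x)
clean_ok = parseval_check(x, X)

X_faulty = X.copy()
X_faulty[17] = complex(flip_bit(X_faulty[17].real, 62),  # flip an exponent bit
                       X_faulty[17].imag)
faulty_ok = parseval_check(x, X_faulty)
```

Flipping a high-order exponent bit changes the element's magnitude drastically in either direction, so the energy mismatch exceeds any reasonable tolerance and the check flags the fault.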
APA, Harvard, Vancouver, ISO, and other styles
34

Delgado, Javier. "Scheduling Medical Application Workloads on Virtualized Computing Systems." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/633.

Full text
Abstract:
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of “cloud computing” services. In the context of provisioning resources for medical applications, they allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that abstract away infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, freeing them from the burden of having to deploy and manage these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; a performance prediction methodology applicable to the target environment; and a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%.
Our scheduling methodology, which is tested with medical image processing workloads, is compared to that of two baseline scheduling solutions and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20-30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
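Deadline-aware scheduling of the kind evaluated above can be sketched in its simplest form as earliest-deadline-first admission on a single machine (hypothetical job tuples; this is not the dissertation's algorithm, which also uses predicted runtimes and virtualization-level knowledge):

```python
def edf_schedule(jobs):
    """jobs: list of (name, runtime, deadline). Returns the earliest-deadline-first
    order and whether every deadline is met when jobs run back to back."""
    order = sorted(jobs, key=lambda job: job[2])   # sort by deadline
    t, feasible = 0.0, True
    for name, runtime, deadline in order:
        t += runtime                               # completion time of this job
        if t > deadline:
            feasible = False
    return [job[0] for job in order], feasible

# Hypothetical medical-imaging jobs: (name, runtime in minutes, deadline in minutes).
jobs = [("mri_segment", 20.0, 60.0),
        ("dose_calc", 30.0, 40.0),
        ("registration", 15.0, 90.0)]
order, ok = edf_schedule(jobs)
```

EDF is optimal for this single-machine setting: if any ordering meets all deadlines, the deadline-sorted one does, which makes it a convenient feasibility test before admitting a job.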
APA, Harvard, Vancouver, ISO, and other styles
35

El-Fakharany, Mohamed Mostafa Refaat. "Finite Difference Schemes for Option Pricing under Stochastic Volatility and Lévy Processes: Numerical Analysis and Computing." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/53917.

Full text
Abstract:
[EN] In stock markets, the process of estimating a fair price for a stock, option or commodity is considered the cornerstone of the trade. Several attempts have been made to obtain suitable mathematical models in order to enhance the estimation process for valuing options over short or long periods. The Black-Scholes partial differential equation (PDE) and its analytical solution, published in 1973, are considered a breakthrough in mathematical modeling for the stock markets. Because of the idealized assumptions of Black-Scholes, several alternatives have been developed to adapt the models to real markets. Two strategies have been followed to capture these behaviours: the first is to add jumps to the asset price following Lévy processes, leading to a partial integro-differential equation (PIDE); the second is to allow the volatility to evolve stochastically, leading to a PDE with two spatial variables. In this work, we numerically solve PIDEs for a wide class of Lévy processes using finite difference schemes for European options, as well as the associated linear complementarity problem (LCP) for American options. Moreover, models for options under stochastic volatility incorporating jump-diffusion are considered. Numerical analysis of the proposed schemes is carried out, since it is the efficient and practical way to guarantee the convergence and accuracy of numerical solutions; in fact, without numerical analysis, careless computations may waste good mathematical models. This thesis consists of four chapters. The first chapter is an introduction containing a historical review of stochastic processes, the Black-Scholes equation and preliminaries on numerical analysis. Chapter two is devoted to solving the PIDE for European options under the CGMY process.
The PIDE for this model is solved numerically using two distinct discretization approximations: the first guarantees unconditional consistency, while the second provides unconditional positivity and stability. In the first approximation, the differential part is approximated using an explicit scheme and the integral part using the trapezoidal rule. In the second approximation, the differential part is approximated using a Patankar scheme and the integral part using the four-point open-type formula. Chapter three provides a unified treatment for European and American options under a wide class of Lévy processes such as CGMY, Meixner and Generalized Hyperbolic. First, the reaction and convection terms of the differential part of the PIDE are removed using an appropriate mathematical transformation. The differential part for the European case is discretized explicitly, while the integral part is approximated using the Laguerre-Gauss quadrature formula. Numerical properties such as positivity, stability and consistency of this scheme are studied. For the American case, the differential part of the LCP is discretized using a three-time-level approximation with the same integration technique. Next, projected successive over-relaxation and multigrid techniques are implemented to obtain the numerical solution. Several numerical examples are given, including a discussion of the errors and computational cost. Finally, in Chapter four, the PIDE for European options under the Bates model is considered. The Bates model combines the stochastic volatility and jump-diffusion approaches, resulting in a PIDE with a mixed derivative term. Since the presence of cross-derivative terms involves negative coefficient terms in the numerical scheme, deteriorating the quality of the numerical solution, the mixed derivative is eliminated using a suitable mathematical transformation.
The new PIDE is solved numerically and the numerical analysis is provided. Moreover, the LCP for American options under the Bates model is studied.<br>El-Fakharany, MMR. (2015). Finite Difference Schemes for Option Pricing under Stochastic Volatility and Lévy Processes: Numerical Analysis and Computing [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53917<br>TESIS
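The two ingredients of the first scheme described in the abstract above (explicit finite differences for the differential part, a trapezoidal rule for the integral part) can be illustrated on a toy jump-diffusion PIDE. This is a hedged sketch with illustrative parameters, not the CGMY discretization of the thesis:

```python
import numpy as np

# Toy jump-diffusion PIDE  u_t = (sigma^2/2) u_xx + lam*(integral u(x+y) f(y) dy - u),
# with f a normal jump density: explicit finite differences for the differential
# part, trapezoidal weights for the integral part. Parameters are illustrative.
sigma, lam, jump_sd = 0.2, 0.5, 0.3
x = np.linspace(-2.0, 2.0, 201); h = x[1] - x[0]
y = np.linspace(-4 * jump_sd, 4 * jump_sd, 161); dy = y[1] - y[0]
f = np.exp(-0.5 * (y / jump_sd) ** 2) / (jump_sd * np.sqrt(2 * np.pi))
w = np.full_like(y, dy); w[0] = w[-1] = 0.5 * dy     # trapezoidal weights

def step(u, dt):
    u_xx = np.zeros_like(u)
    u_xx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h ** 2
    shifted = np.interp(x[:, None] + y[None, :], x, u)   # u(x_i + y_j), constant extension
    integral = (shifted * f) @ w                          # trapezoidal rule in y
    return u + dt * (0.5 * sigma ** 2 * u_xx + lam * (integral - u))

u1 = step(np.ones_like(x), dt=1e-3)   # a constant is (almost) stationary:
err = np.max(np.abs(u1 - 1.0))        # only jump-density truncation error remains
print(err)
```

Applying one step to a constant checks the scheme's consistency: both the diffusion term and the (normalized) jump term vanish, up to the truncation of the jump density at four standard deviations.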
APA, Harvard, Vancouver, ISO, and other styles
36

Srđan, Milićević. "Algorithms for computing the optimal Geršgorin-type localizations." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2020. https://www.cris.uns.ac.rs/record.jsf?recordId=114425&source=NDLTD&language=en.

Full text
Abstract:
There are numerous ways to localize eigenvalues. One of the best-known results is that the spectrum of a given matrix A ∈ ℂ^(n,n) is a subset of a union of discs centered at the diagonal elements, whose radii equal the sums of the absolute values of the off-diagonal elements of the corresponding rows of the matrix. This result (Geršgorin's theorem, 1931) is one of the most important and elegant ways of eigenvalue localization ([63]). Among all Geršgorin-type sets, the minimal Geršgorin set gives the sharpest and most precise localization of the spectrum ([39]). In this thesis, new algorithms for computing an efficient and accurate approximation of the minimal Geršgorin set are presented.
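Geršgorin's theorem itself, the starting point of the abstract above, is easy to state in code: every eigenvalue lies in at least one disc centered at a diagonal entry with radius equal to the off-diagonal row sum. A minimal check on an arbitrary small matrix (not the thesis's minimal-Geršgorin-set algorithms):

```python
import numpy as np

def gershgorin_discs(A):
    """Return (centers, radii) of the Gershgorin discs of a square matrix."""
    centers = np.diag(A).astype(complex)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # off-diagonal row sums
    return centers, radii

A = np.array([[4.0, 1.0, 0.5],
              [0.2, -3.0, 0.1],
              [0.0, 0.3, 1.0]])
centers, radii = gershgorin_discs(A)
eigs = np.linalg.eigvals(A)
# the theorem guarantees every eigenvalue lies in the union of the discs
covered = [np.any(np.abs(lam - centers) <= radii + 1e-12) for lam in eigs]
print(all(covered))
```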
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, Jingwei. "Numerical Methods for the Chemical Master Equation." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/30018.

Full text
Abstract:
The chemical master equation, formulated on the Markov assumption of underlying chemical kinetics, offers an accurate stochastic description of general chemical reaction systems on the mesoscopic scale. The chemical master equation is especially useful when formulating mathematical models of gene regulatory networks and protein-protein interaction networks, where the numbers of molecules of most species are around tens or hundreds. However, solving the master equation directly suffers from the so-called "curse of dimensionality" issue. This thesis first studies the numerical properties of the master equation using existing numerical methods and parallel machines. Next, approximation algorithms, namely the adaptive aggregation method and the radial basis function collocation method, are proposed as new paths to resolve the "curse of dimensionality". Several numerical results are presented to illustrate the promises and potential problems of these new algorithms. Comparisons with other numerical methods like Monte Carlo methods are also included. Development and analysis of the linear Shepard algorithm and its variants, all of which could be used for high-dimensional scattered data interpolation problems, are also included here, as a candidate to help solve the master equation by building surrogate models in high dimensions.<br>Ph. D.
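For a single species, the master equation described above reduces to a linear ODE system over the copy-number states. A hedged toy (a truncated birth-death process, not the thesis's aggregation or RBF collocation methods) shows the setup; note that because the generator's columns sum to zero, even forward Euler time stepping conserves total probability exactly:

```python
import numpy as np

# Birth-death chemical master equation, truncated at N molecules:
#   dp_n/dt = k p_{n-1} + mu (n+1) p_{n+1} - (k + mu n) p_n,   i.e.  dp/dt = A p.
# Columns of A sum to zero, so forward Euler preserves total probability exactly.
k, mu, N = 2.0, 1.0, 40
A = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        A[n + 1, n] += k          # birth out of state n
        A[n, n] -= k
    if n > 0:
        A[n - 1, n] += mu * n     # death out of state n
        A[n, n] -= mu * n

p = np.zeros(N + 1); p[0] = 1.0   # start with zero molecules
dt = 1e-3
for _ in range(5000):             # integrate to t = 5, near stationarity
    p = p + dt * (A @ p)

mass = p.sum()                          # conserved up to rounding
mean = (np.arange(N + 1) * p).sum()     # relaxes toward the Poisson mean k/mu = 2
print(mass, mean)
```

The "curse of dimensionality" is visible already here: with S species each truncated at N states, the state space has (N+1)^S entries, which is what the thesis's approximation algorithms address.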
APA, Harvard, Vancouver, ISO, and other styles
38

Haden, Lonnie A. "A numerical procedure for computing errors in the measurement of pulse time-of-arrival and pulse-width." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

He, Chuan. "Numerical solutions of differential equations on FPGA-enhanced computers." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Delengov, Vladimir. "Computing Eigenmodes of Elliptic Operators on Manifolds Using Radial Basis Functions." Scholarship @ Claremont, 2018. https://scholarship.claremont.edu/cgu_etd/113.

Full text
Abstract:
In this work, a numerical approach based on meshless methods is proposed to obtain eigenmodes of the Laplace-Beltrami operator on manifolds, and its performance is compared against existing alternative methods. Radial Basis Function (RBF)-based methods allow one to obtain interpolation and differentiation matrices easily by using scattered data points. We derive expressions for such matrices for the Laplace-Beltrami operator via so-called Reilly's formulas and use them to solve the respective eigenvalue problem. Numerical studies of the proposed methods are performed in order to demonstrate convergence on simple examples of one-dimensional curves and two-dimensional surfaces.
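The core RBF construction mentioned in the abstract (an interpolation matrix from scattered nodes, from which a differentiation matrix follows) can be sketched in one dimension. This is a hedged illustration with a Gaussian kernel and an ad-hoc shape parameter, not the Reilly-formula operators of the thesis:

```python
import numpy as np

# 1D RBF differentiation matrix with a Gaussian kernel phi(r) = exp(-(eps*r)^2):
# interpolation matrix A_ij = phi(|x_i - x_j|); B holds the kernel's x-derivative;
# then D u = B (A^{-1} u) approximates u' on the nodes. eps = 12 is ad hoc.
eps = 12.0
x = np.linspace(0.0, 1.0, 31)
r = x[:, None] - x[None, :]
A = np.exp(-(eps * r) ** 2)
B = -2.0 * eps ** 2 * r * np.exp(-(eps * r) ** 2)

u = np.sin(2 * np.pi * x)
du = B @ np.linalg.solve(A, u)        # RBF approximation of u'
exact = 2 * np.pi * np.cos(2 * np.pi * x)
err = np.max(np.abs(du[3:-3] - exact[3:-3]))   # interior error
print(err)
```

The same recipe, with the kernel differentiated by the surface operator instead of d/dx, yields the Laplace-Beltrami discretizations whose eigenvalue problems the thesis studies; in practice the shape parameter must balance accuracy against the conditioning of A.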
APA, Harvard, Vancouver, ISO, and other styles
41

Baladron, Pezoa Javier. "Exploring the neural codes using parallel hardware." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00847333.

Full text
Abstract:
The aim of this thesis is to understand the dynamics of large interconnected populations of neurons. The method we use to reach this objective is a mixture of mesoscopic modeling and high-performance computing. The first allows us to reduce the complexity of the network and the second to perform large-scale simulations. In the first part of this thesis a new mean-field approach for conductance-based neurons is used to study numerically the effects of noise on extremely large ensembles of neurons. Also, the same approach is used to create a model of one hypercolumn from the primary visual cortex where the basic computational units are large populations of neurons instead of simple cells. All of these simulations are done by solving a set of partial differential equations that describe the evolution of the probability density function of the network. In the second part of this thesis a numerical study of two neural field models of the primary visual cortex is presented. The main focus in both cases is to determine how edge selection and continuation can be computed in the primary visual cortex. The difference between the two models is in how they represent the orientation preference of neurons: in one, this is a feature of the equations and the connectivity depends on it, while in the other there is an underlying map which defines an input function. All the simulations are performed on a Graphics Processing Unit cluster. The thesis proposes a set of techniques to simulate the models fast enough on this kind of hardware. The speedup obtained is equivalent to that of a huge standard cluster.
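The density-evolution idea (a PDE for the probability density of a large population rather than simulating each unit) can be illustrated by a conservative finite-difference step for a one-dimensional Fokker-Planck equation. This is a hedged sketch with an Ornstein-Uhlenbeck drift, not the conductance-based mean-field model of the thesis:

```python
import numpy as np

# Conservative finite-difference scheme for a 1D Fokker-Planck equation
#   p_t = -(a(x) p)_x + D p_xx,   flux F = a p - D p_x,  zero-flux boundaries.
# The flux-form update telescopes, so total probability is conserved exactly.
Dc = 0.1
x = np.linspace(-3.0, 3.0, 121); h = x[1] - x[0]
a = -x                                # Ornstein-Uhlenbeck drift toward the origin
p = np.exp(-((x - 1.5) ** 2) / 0.1)
p /= p.sum() * h                      # normalize total probability to 1

dt = 2e-4
for _ in range(2000):                 # integrate to t = 0.4
    am = 0.5 * (a[:-1] + a[1:])                       # drift at cell interfaces
    F = am * 0.5 * (p[:-1] + p[1:]) - Dc * (p[1:] - p[:-1]) / h
    dp = np.empty_like(p)
    dp[1:-1] = -(F[1:] - F[:-1]) / h
    dp[0] = -F[0] / h                                 # zero flux through the ends
    dp[-1] = F[-1] / h
    p = p + dt * dp

mass = p.sum() * h
mean = (x * p).sum() * h              # OU mean decays as 1.5*exp(-t)
print(mass, mean)
```

On a GPU cluster, as in the thesis, the same per-cell updates map naturally onto data-parallel kernels, one thread per grid cell.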
APA, Harvard, Vancouver, ISO, and other styles
42

Armstrong, Shea. "Suitability of Java for Solving Large Sparse Positive Definite Systems of Equations Using Direct Methods." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1175.

Full text
Abstract:
The purpose of the thesis is to determine whether Java, a programming language that evolved out of a research project by Sun Microsystems in 1990, is suitable for solving large sparse linear systems using direct methods. That is, can performance comparable to that of the language traditionally used for sparse matrix computation, Fortran, be achieved by a Java implementation? Performance evaluation criteria include execution speed and memory requirements. A secondary criterion is ease of development. Many attractive features, unique to the Java programming language, make it desirable for use in sparse matrix computation and provide the motivation for the thesis. The 'write once, run anywhere' proposition, coupled with nearly-ubiquitous Java support, alleviates the need to re-write programs in the event of hardware change. Features such as garbage collection (automatic recycling of memory) and array-index bounds checking make Java programs more robust than those written in Fortran. Java has garnered a poor reputation as a high-performance computing platform, largely attributable to poor performance relative to Fortran in its early years. It is now a consensus among researchers that the Java language itself is not the problem, but rather its implementation. As such, improving compiler technology for numerical codes is critical to achieving high performance in numerical Java applications. Preliminary work involved converting SPARSPAK, a collection of Fortran 90 subroutines for solving large sparse systems of linear equations and least squares problems developed by Dr. Alan George, into Java (J-SPARSPAK). It is well known that the majority of the solution process is spent in the numeric factorization phase. Initial benchmarks showed Java performing, on average, 3.6 times slower than Fortran for this critical phase. We detail how we improved Java performance to within a factor of two of Fortran.
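The solution pipeline that SPARSPAK-style codes implement (Cholesky factorization of a symmetric positive definite matrix, then forward and back triangular solves) is language-independent. A hedged, dense-matrix sketch of those steps in Python; the thesis's actual codes are Fortran 90 and Java and exploit sparsity via ordering and envelope/supernodal storage:

```python
import numpy as np

def cholesky(A):
    """Factor a symmetric positive definite A as L @ L.T (dense, no pivoting)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

def solve_spd(A, b):
    """Solve A x = b via Cholesky and two triangular solves."""
    L = cholesky(A)
    y = np.zeros_like(b, dtype=float)
    for i in range(len(b)):                      # forward substitution: L y = b
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros_like(b, dtype=float)
    for i in reversed(range(len(b))):            # back substitution: L^T x = y
        x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
    return x

# SPD test matrix: a 1D Laplacian (tridiagonal; direct sparse solvers exploit
# exactly this kind of structure to keep the factor sparse)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = solve_spd(A, b)
res = np.max(np.abs(A @ x - b))
print(res)
```

The inner `L[i, :j] @ L[j, :j]` dot products are where the numeric factorization spends its time, which is the phase the thesis benchmarks between Java and Fortran.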
APA, Harvard, Vancouver, ISO, and other styles
43

Kazazakis, Nikolaos. "Parallel computing, interval derivative methods, heuristic algorithms, and their implementation in a numerical solver, for deterministic global optimization." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/45359.

Full text
Abstract:
This thesis presents new algorithms for the deterministic global optimization of general non-linear programming problems (NLPs). It is proven that the αBB general underestimator may provide exact lower bounds on a function only if rigorous conditions are satisfied. These conditions are derived and the μ-subenergy methodology is proposed to achieve tighter αBB underestimation when they are violated. An interval lower bounding test is proposed to improve αBB lower bounds and avoid expensive algorithmic steps. Piecewise-linear relaxations (PLR) are proposed for the underestimation of general functions. Calculation of these relaxations is accelerated using parallel computing. Quality bounds tightening (QBT) is proposed to reduce the cost of bounds tightening algorithms by avoiding unnecessary calculations. Violation branching is proposed to improve the performance of branching strategies by considering constraint violation when selecting a branching variable. Furthermore, a novel bounds tightening method, PLR bounds tightening (PLR-BT), is proposed. Variable-based convexity (VBC) is proposed as a general reformulation algorithm, to build tighter relaxations by exploiting underlying convexity. This work also introduces algorithms for parallel branching for the global solution of NLPs. A parallel node subdivision strategy, Multisection Branching, is proposed to achieve tighter bounds, and a parallel node selection strategy, Future Branching, is proposed to accelerate the investigation of the branch-and-bound tree. A parallel solver is presented, where MPI is used to distribute node calculations and an asynchronous bounds tightening algorithm is proposed to reduce bounds tightening wall times. This algorithm is implemented using multithreading for asynchronous feasibility-based bounds tightening (AF-BBT). All algorithms are implemented in a global solver, and its parallel architecture and features are described. 
This solver is used to perform numerical studies on the benefits of using the new algorithms in tandem. The new solver is benchmarked against the BARON 15.9.22 global solver for a set of problems, in which it achieves comparable performance.
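The αBB underestimator that the thesis analyzes has a simple closed form: L(x) = f(x) + α Σᵢ (xᵢᴸ − xᵢ)(xᵢᵁ − xᵢ), which is convex for α ≥ max(0, −½ minₓ λ_min(∇²f(x))). A hedged one-dimensional check of both defining properties (not the μ-subenergy refinement proposed in the thesis):

```python
import numpy as np

# alphaBB underestimator of f(x) = sin(x) on the box [0, 2*pi]:
#   L(x) = f(x) + alpha * (xL - x) * (xU - x)
# f''(x) = -sin(x) >= -1, so alpha = 1/2 gives L''(x) = 1 - sin(x) >= 0 (convex),
# and the added quadratic is <= 0 on the box, so L underestimates f there.
xL, xU, alpha = 0.0, 2 * np.pi, 0.5
x = np.linspace(xL, xU, 401)
f = np.sin(x)
L = f + alpha * (xL - x) * (xU - x)

underestimates = bool(np.all(L <= f + 1e-12))
second_diff = L[2:] - 2 * L[1:-1] + L[:-2]         # discrete convexity check
convex = bool(np.all(second_diff >= -1e-12))
print(underestimates, convex)
```

Minimizing the convex L over the box then gives the rigorous lower bound used inside branch-and-bound; the thesis's conditions concern exactly when such bounds are tight.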
APA, Harvard, Vancouver, ISO, and other styles
44

Tristanto, Indi Himawan. "A mesh transparent numerical method for large-eddy simulation of compressible turbulent flows." Thesis, Loughborough University, 2004. https://dspace.lboro.ac.uk/2134/12128.

Full text
Abstract:
A Large-Eddy Simulation code, based on a mesh-transparent algorithm, for hybrid unstructured meshes is presented to deal with complex geometries that are often found in engineering flow problems. While tetrahedral elements are very effective in dealing with complex geometry, excessive numerical diffusion often affects results. Thus, prismatic or hexahedral elements are preferable in regions where turbulence structures are important. A second-order reconstruction methodology is used, since an investigation of a higher-order method based upon Lele's compact scheme has shown this to be impractical on general unstructured meshes. The convective fluxes are treated with the Roe scheme, which has been modified by introducing a variable scaling to the dissipation matrix to obtain a nearly second-order accurate centred scheme in statistically smooth flow, whilst retaining the high-resolution TVD behaviour across a shock discontinuity. The code has been parallelised using MPI to ensure portability. The base numerical scheme has been validated for steady flow computations over complex geometries using inviscid and RANS forms of the governing equations. The extension of the numerical scheme to unsteady turbulent flows and the complete LES code have been validated for the interaction of a shock with a laminar mixing layer, a Mach 0.9 turbulent round jet and a fully developed turbulent pipe flow. The mixing layer and round jet computations indicate that, for similar mesh resolution of the shear layer, the present code exhibits results comparable to previously published work using a higher-order scheme on a structured mesh. The unstructured meshes have a significantly smaller total number of nodes since tetrahedral elements are used to fill the far-field region. The pipe flow results show that the present code is capable of producing the correct flow features. 
Finally, the code has been applied to the LES computation of the impingement of a highly under-expanded jet that produces plate shock oscillation. Comparison with other workers' experiments indicates good qualitative agreement for the major features of the flow. However, in this preliminary computation the computed frequency is somewhat lower than that of experimental measurements.
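The Roe-type flux with scaled dissipation that the abstract describes can be illustrated in its simplest scalar setting, Burgers' equation: a central average flux plus an upwind dissipation term proportional to the Roe-averaged wave speed. A hedged sketch, not the code's compressible-flow Roe matrix:

```python
import numpy as np

def roe_flux_burgers(uL, uR):
    """Scalar Roe flux for Burgers' equation f(u) = u^2/2:
    central average flux plus dissipation scaled by the Roe speed."""
    f = lambda u: 0.5 * u ** 2
    a = 0.5 * (uL + uR)                   # Roe-averaged wave speed
    return 0.5 * (f(uL) + f(uR)) - 0.5 * np.abs(a) * (uR - uL)

# Right-moving shock (uL=1, uR=0): the upwind state is uL, so the flux is f(1) = 0.5
F_shock = roe_flux_burgers(1.0, 0.0)
# Smooth data: the dissipation vanishes as uR -> uL, recovering the centred flux
F_smooth = roe_flux_burgers(0.3, 0.3)
print(F_shock, F_smooth)
```

Scaling the `0.5 * np.abs(a)` dissipation coefficient down in smooth regions, as the thesis's code does for the matrix-valued analogue, moves the scheme toward the low-dissipation centred flux needed for LES while keeping shock-capturing dissipation where it is required.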
APA, Harvard, Vancouver, ISO, and other styles
45

Creech, Angus C. W. "A three-dimensional numerical model of a horizontal axis, energy extracting turbine : an implementation on a parallel computing system." Thesis, Heriot-Watt University, 2009. http://hdl.handle.net/10399/2243.

Full text
Abstract:
In the last decade, there has been a resurgence of interest in tidal power as a renewable, and environmentally friendly source of electricity. Scotland is well placed in this regard, as the currents in the surrounding seas are primarily tidal; that is to say, driven by lunar and solar tides. Investigations into tidal streams as an energy source, their viability in particular locales, the efficient organisation of marine turbine farms, and most importantly, the effect of such farms on the environment, demand the use of computational fluid dynamics for effective modelling. They also require a turbine model sophisticated enough to generate realistic power output and wakes for a variety of flow conditions, yet simple enough to simulate a number of turbines on modest computing resources. What is presented here then, is the justification for such a model, the development and deployment of it during my PhD, and my validation of the model in a variety of environments.
APA, Harvard, Vancouver, ISO, and other styles
46

Setta, Mario. "Multiscale numerical approximation of morphology formation in ternary mixtures with evaporation : Discrete and continuum models for high-performance computing." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-85036.

Full text
Abstract:
We propose three models to study morphology formation in interacting ternary mixtures with the evaporation of one component. Our models involve three distinct length scales: microscopic, mesoscopic, and macroscopic. The real-world application we have in mind concerns charge transport through the heterogeneous structures arising in the fabrication of organic solar cells. As a first model, we propose a microscopic 3-spin lattice dynamics with short-range interactions between the considered species. This microscopic model is approximated numerically via a Monte Carlo Metropolis-based algorithm. We explore the effect of the model parameters (volatility of the solvent, system temperature, and interaction strengths) on the structure of the formed morphologies. Our second model is built upon the first one by introducing a new mesoscale corresponding to the size of block spins. The link between these two models, as well as the effects of the model parameters on the formed morphologies, is studied in detail. These two models offer insight into cross-sections of the modeling box. Our third model encodes a macroscopic view of the evaporating mixture. We investigate its capability to lead to internal coherent structures. We propose a macroscopic system of nonlinearly coupled Cahn-Hilliard equations to capture numerical results for a top view of the modeling box. Effects of effective evaporation rates, effective interaction energy parameters, and degree of polymerization on the wanted morphology formation are explored via the computational platform FEniCS using a FEM approximation of a suitably linearized system. High-performance computing resources and Python-based parallel implementations have been used to facilitate the numerical approximation of the three models.
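The Metropolis step at the heart of the first model above is generic: propose a local change, compute the energy difference ΔE from the short-range interactions, and accept with probability min(1, e^(−ΔE/T)). A hedged two-state toy on a small periodic lattice (the thesis's model has three spin species and solvent evaporation, which are omitted here):

```python
import numpy as np

# Metropolis dynamics on a 2D periodic lattice with nearest-neighbour coupling J.
# A two-state toy (spins +/-1) stands in for the three-spin ternary mixture;
# flips are accepted with probability min(1, exp(-dE/T)).
rng = np.random.default_rng(0)
n, J, T = 16, 1.0, 0.5
s = rng.choice([-1, 1], size=(n, n))

def energy(s):
    # each nearest-neighbour bond counted once via shifted copies
    return -J * np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

E0 = energy(s)
for _ in range(200 * n * n):                      # ~200 Monte Carlo sweeps
    i, j = rng.integers(n, size=2)
    nb = s[(i+1) % n, j] + s[(i-1) % n, j] + s[i, (j+1) % n] + s[i, (j-1) % n]
    dE = 2 * J * s[i, j] * nb                     # energy change of flipping s[i, j]
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i, j] = -s[i, j]
E1 = energy(s)
print(E0, E1)   # quenching at low temperature lowers the lattice energy
```

Varying T and the coupling strengths in such a simulation is the discrete analogue of the parameter study described in the abstract: it controls whether the species demix into coarse domains or stay mixed.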
APA, Harvard, Vancouver, ISO, and other styles
47

Torberntsson, Kim, and Vidar Stiernström. "A High Order Finite Difference Method for Simulating Earthquake Sequences in a Poroelastic Medium." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-298414.

Full text
Abstract:
Induced seismicity (earthquakes caused by injection or extraction of fluids in Earth's subsurface) is a major new hazard in the United States, the Netherlands, and other countries, with vast economic consequences if not properly managed. Addressing this problem requires development of predictive simulations of how fluid-saturated solids containing frictional faults respond to fluid injection/extraction. Here we present a numerical method for linear poroelasticity with rate-and-state friction faults. A numerical method for approximating the fully coupled linear poroelastic equations is derived using the summation-by-parts simultaneous-approximation-term (SBP-SAT) framework. Well-posedness is shown for a set of physical boundary conditions in 1D and in 2D. The SBP-SAT technique is used to discretize the governing equations and to show semi-discrete stability, and the correctness of the implementation is verified by rigorous convergence tests using the method of manufactured solutions, which show that the expected convergence rates are obtained for a problem with spatially variable material parameters. Mandel's problem and a line source problem are studied, where simulation results and convergence studies show satisfactory numerical properties. Furthermore, two problem setups involving fault dynamics and slip on faults triggered by fluid injection are studied, where the simulation results show that fluid injection can trigger earthquakes, having implications for induced seismicity. In addition, the results show that the scheme used for solving the fully coupled problem captures dynamics that would not be seen in an uncoupled model. Future improvements involve imposing Dirichlet boundary conditions using a different technique, extending the scheme to handle curvilinear coordinates and three spatial dimensions, as well as improving the high-performance code and extending the study of the fault dynamics.
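The defining SBP property, that the difference operator mimics integration by parts (H D = Q with Q + Qᵀ = diag(−1, 0, …, 0, 1)), can be checked directly for the standard second-order first-derivative operator. A hedged sketch of the building block, independent of the poroelastic application:

```python
import numpy as np

# Standard second-order SBP first-derivative operator D = H^{-1} Q on n nodes.
n = 20; h = 1.0 / (n - 1)
H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])   # diagonal norm (quadrature rule)
Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))        # central interior stencil
Q[0, 0], Q[-1, -1] = -0.5, 0.5                      # boundary closures
D = np.linalg.solve(H, Q)

B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
sbp_property = np.allclose(Q + Q.T, B)              # the SBP identity

# Discrete integration by parts: u^T H (D v) + (D u)^T H v = u_n v_n - u_1 v_1
rng = np.random.default_rng(1)
u, v = rng.standard_normal(n), rng.standard_normal(n)
lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
ibp_ok = np.isclose(lhs, u[-1] * v[-1] - u[0] * v[0])

x = np.linspace(0.0, 1.0, n)
exact_on_linear = np.allclose(D @ x, np.ones(n))    # D differentiates linears exactly
print(sbp_property, ibp_ok, exact_on_linear)
```

Because the discrete operator satisfies the same integration-by-parts identity as the continuous one, energy estimates proving well-posedness carry over verbatim to semi-discrete stability proofs, which is the core of the SBP-SAT methodology the thesis uses.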
APA, Harvard, Vancouver, ISO, and other styles
48

Hand, Randall Eugene. "A faster technique for rendering meshes in multiple display systems." Master's thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-04082002-165856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Khan, Irfan. "Direct numerical simulation and analysis of saturated deformable porous media." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34664.

Full text
Abstract:
Existing numerical techniques for modeling saturated deformable porous media are based on homogenization techniques and thus are incapable of performing micro-mechanical investigations, such as the effect of micro-structure on the deformational characteristics of the media. In this research work, a numerical scheme is developed based on the parallelized hybrid lattice-Boltzmann finite-element method that is capable of performing micro-mechanical investigations through direct numerical simulations. The method has been used to simulate compression of model saturated porous media made of spheres and cylinders in regular arrangements. Through these simulations it is found that, in the limit of small Reynolds number, Capillary number and strain, the deformational behaviour of a real porous medium can be recovered through model porous media when the parameters porosity, permeability and bulk compressive modulus are matched between the two media. This finding motivated research into using model porous geometries to represent more complex real porous geometries in order to perform investigations of deformation on the latter. An attempt has been made to apply this technique to the complex geometries of "felt" (a fibrous mat used in the paper industry). These investigations lead to new understanding of the effect of fiber diameter on the bulk properties of a fibrous medium and subsequently on the deformational behaviour of the medium. Further, the method has been used to investigate constitutive relationships in deformable porous media. In particular, the relationship between permeability and porosity during the deformation of the media is investigated. Results show the need for geometry-specific investigations.
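A classical example of the permeability-porosity constitutive relationship investigated above is the Kozeny-Carman relation; it is quoted here as a standard empirical law for packed spheres, not necessarily the relation recovered by the thesis's simulations:

```python
import numpy as np

# Kozeny-Carman relation for a packed bed of spheres of diameter d:
#   k(phi) = d^2 * phi^3 / (180 * (1 - phi)^2)
# Permeability k rises steeply with porosity phi and drops as the medium compacts,
# which is the qualitative behaviour probed during compression of a porous medium.
def kozeny_carman(phi, d=1e-4):          # d = 100 micron grains (illustrative)
    return d ** 2 * phi ** 3 / (180.0 * (1.0 - phi) ** 2)

phi = np.linspace(0.1, 0.6, 11)
k = kozeny_carman(phi)
monotone = bool(np.all(np.diff(k) > 0))   # permeability increases with porosity
print(monotone, k[0], k[-1])
```

Geometry-specific deviations from such generic laws, for example in fibrous media like felt, are precisely what direct numerical simulation can quantify.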
APA, Harvard, Vancouver, ISO, and other styles
50

Engels, Thomas. "Numerical modeling of fluid-structure interaction in bio-inspired propulsion." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4773/document.

Full text
Abstract:
Flying and swimming animals have developed efficient ways to produce the fluid flow that generates the desired forces for their locomotion. These bio-inspired problems couple fluid dynamics and solid mechanics with complex geometries and kinematics. 
The present thesis is placed in this interdisciplinary context and uses numerical simulations to study these fluid-structure interaction problems with applications in insect flight and swimming fish. Based on existing work on rigid moving obstacles, using an efficient Fourier discretization, a numerical method has been developed which allows the simulation of flexible, deforming obstacles as well, and provides enhanced versatility and accuracy in the case of rigid obstacles. The method relies on the volume penalization method and the fluid discretization is still based on a Fourier discretization. We first apply this method to insects with rigid wings, where the body and other details, such as the legs and antennae, can be included. After presenting detailed validation tests, we proceed to studying a bumblebee model in fully developed turbulent flow. Our simulations show that turbulent perturbations affect flapping insects in a different way than human-designed fixed-wing aircraft. While in the latter upstream perturbations can cause transitions in the boundary layer, the former do not present systematic changes in aerodynamic forces. We conclude that insects rather face control problems in a turbulent environment than a deterioration in force production. In the next step, we design a solid model, based on a one-dimensional beam equation, and simulate coupled fluid-solid systems.
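The volume penalization idea mentioned above (add a damping term −χ(x)u/η that drives the field to zero inside the obstacle mask χ, while keeping a simple Fourier discretization of the fluid domain) can be sketched in one dimension. A hedged toy with an operator split for the stiff penalization term, not the thesis's Navier-Stokes solver:

```python
import numpy as np

# 1D heat equation with volume penalization: u_t = nu*u_xx - (chi/eta)*u,
# chi = 1 inside the "obstacle", 0 outside. Operator splitting: exact exponential
# decay for the stiff penalization term, then an exact Fourier diffusion step.
N, L, nu, eta = 128, 2 * np.pi, 0.01, 1e-3
x = np.linspace(0.0, L, N, endpoint=False)
kw = np.fft.fftfreq(N, d=L / N) * 2 * np.pi        # wavenumbers
chi = ((x > 2.0) & (x < 4.0)).astype(float)        # obstacle mask on (2, 4)

u = np.sin(x)
dt = 1e-3
for _ in range(100):
    u = u * np.exp(-dt * chi / eta)                # penalization: exact decay in mask
    u = np.real(np.fft.ifft(np.exp(-nu * kw ** 2 * dt) * np.fft.fft(u)))

inside = np.max(np.abs(u[chi == 1]))
outside = np.max(np.abs(u))
print(inside, outside)   # the field is strongly damped inside the obstacle
```

The appeal of the approach, exploited by the thesis for moving and deforming insects, is that the obstacle enters only through the mask χ, so complex or time-dependent geometry costs nothing extra in the Fourier fluid solver.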
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!