Dissertations / Theses on the topic 'Computational physics'

1

Knebe, Alexander. "Computational cosmology." Thesis, Universität Potsdam, 2008. http://opus.kobv.de/ubp/volltexte/2010/4114/.

Abstract:
“Computational Cosmology” is the modeling of structure formation in the Universe by means of numerical simulations. These simulations can be considered the only “experiment” available to verify theories of the origin and evolution of the Universe. Over the last 30 years great progress has been made in the development of computer codes that model the evolution of dark matter (as well as gas physics) on cosmic scales, and a new research discipline has established itself. After a brief summary of cosmology we introduce the concepts behind such simulations. We further present a novel computer code for numerical simulations of cosmic structure formation that utilizes adaptive grids to distribute the work efficiently and focus the computing power on regions of interest. In that regard we also investigate various (numerical) effects that influence the credibility of these simulations and elaborate on the procedure for setting up their initial conditions. As running a simulation is only the first step in modelling cosmological structure formation, we additionally developed an object finder that maps the density field onto galaxies and galaxy clusters and hence provides the link to observations. Despite the generally accepted success of the cold dark matter cosmology, the model still exhibits a number of deviations from observations. Moreover, none of the putative dark matter particle candidates has yet been detected. Utilizing both the novel simulation code and the halo finder we perform and analyse various simulations of cosmic structure formation investigating alternative cosmologies. These include warm (rather than cold) dark matter, features in the power spectrum of the primordial density perturbations caused by non-standard inflation theories, and even modified Newtonian dynamics. We compare these alternatives to the currently accepted standard model and highlight the limitations on both sides; while those alternatives may cure some of the woes of the standard model, they also exhibit difficulties of their own. During the past decade simulation codes and computer hardware have advanced to such a stage that it became possible to resolve in detail the sub-halo populations of dark matter halos in a cosmological context. These results, coupled with the simultaneous increase in observational data, have opened up a whole new window on the concordance cosmogony in the field that is now known as “Near-Field Cosmology”. We present an in-depth study of the dynamics of subhaloes and the development of debris of tidally disrupted satellite galaxies. Here we postulate a new population of subhaloes that once passed close to the centre of their host and now reside in its outer regions. We further show that interactions between satellites inside the radius of their hosts may not be negligible, and that the recovery of host properties from the distribution and properties of tidally induced debris material is not as straightforward as expected from simulations of individual satellites in (semi-)analytical host potentials.
Cosmology is today one of the most exciting fields of research in astronomy and astrophysics. The prevailing (Big Bang) model, combined with the latest and most precise observational data, indicates that we live in a universe consisting of just under 24% dark matter and 72% dark energy; visible matter accounts for a mere 4%. And even though we currently lack unambiguous or direct evidence for the existence of these two exotic constituents of the universe, we are nevertheless able to model the formation of galaxies, galaxy clusters and the large-scale structure in such a universe. To this end, scientists use computer simulations that reproduce structure formation in an expanding universe on supercomputers; this field of research is called numerical cosmology or “Computational Cosmology” and is the subject of this habilitation thesis. After a short introduction to the field, the techniques for carrying out such numerical simulations are presented. The techniques for solving the relevant (differential) equations for modelling the “universe in the computer” differ in part drastically from one another (particle vs. grid methods), and the methodological differences are worked out. And although different codes are based on different methods, the differences in the final results are (fortunately) negligibly small. We further present a completely new code, based on the grid method, which constitutes a main component of this habilitation. In the further course of the work, various cosmological simulations are presented and analysed. Both the formation and evolution of satellite galaxies, the (small) companions of galaxies such as our Milky Way and the Andromeda galaxy, and alternatives to the “standard model” of cosmology introduced above are investigated. It turns out that none of the alternatives (proposed here) poses a serious challenge to the standard model. Nevertheless, the calculations show that even modifications as extreme as, for example, Modified Newtonian Dynamics (MOND) can lead to a universe that comes very close to the observed one. The results concerning the dynamics of the satellite galaxies show that the study of the debris fields of satellite galaxies ground down by tidal forces allows conclusions to be drawn about the properties of the original satellite. This fact will be of considerable use in unravelling the formation history of our own Milky Way. Nevertheless, the results presented here also indicate that this connection is not as clear-cut as previously predicted with the help of controlled individual simulations of satellite galaxies in analytic host potentials: the interplay between the satellites and the host galaxy, as well as the embedding of the calculations in a cosmological context, are of decisive importance.
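
As an illustration of the kind of machinery such simulations rest on, here is a minimal sketch of a direct-summation N-body integrator with a kick-drift-kick leapfrog scheme, in Python. It is a toy in natural units (G = 1); real cosmological codes add comoving coordinates, periodic boxes, and the adaptive grids described above.

    import numpy as np

    def accelerations(pos, mass, eps=0.05):
        """Direct-sum gravitational accelerations with Plummer softening (G = 1)."""
        acc = np.zeros_like(pos)
        for i in range(len(pos)):
            d = pos - pos[i]                      # separation vectors to particle i
            r2 = (d ** 2).sum(axis=1) + eps ** 2  # softened squared distances
            r2[i] = np.inf                        # exclude self-interaction
            acc[i] = (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
        return acc

    def leapfrog(pos, vel, mass, dt, nsteps):
        """Kick-drift-kick leapfrog, the standard symplectic N-body integrator."""
        acc = accelerations(pos, mass)
        for _ in range(nsteps):
            vel += 0.5 * dt * acc   # half kick
            pos += dt * vel         # drift
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc   # half kick
        return pos, vel

    rng = np.random.default_rng(1)
    pos = rng.standard_normal((64, 3))
    pos, vel = leapfrog(pos, np.zeros((64, 3)), np.ones(64) / 64, dt=0.01, nsteps=100)
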
2

Zagordi, Osvaldo. "Statistical physics methods in computational biology." Doctoral thesis, SISSA, 2007. http://hdl.handle.net/20.500.11767/3971.

Abstract:
The interest of statistical physics in combinatorial optimization is not new; it suffices to think of a famous tool such as simulated annealing. Recently, statistical physics has also resorted to statistical inference to address some "hard" optimization problems, developing a new class of message-passing algorithms. Three applications to computational biology are presented in this thesis, namely: 1) Boolean networks, a model for gene regulatory networks; 2) haplotype inference, to study the genetic information present in a population; 3) clustering, a general machine learning tool.
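
For readers unfamiliar with the first tool the abstract names, a minimal simulated-annealing sketch in Python on a toy one-dimensional energy landscape; the landscape, proposal width and cooling schedule are illustrative, not taken from the thesis.

    import math, random

    def anneal(energy, x0, steps=20000, T0=2.0, Tmin=1e-3):
        """Metropolis simulated annealing with geometric cooling."""
        x, e, T = x0, energy(x0), T0
        cool = (Tmin / T0) ** (1.0 / steps)
        for _ in range(steps):
            xn = x + random.gauss(0.0, 0.5)   # propose a local move
            en = energy(xn)
            # accept downhill moves always, uphill moves with Boltzmann probability
            if en < e or random.random() < math.exp(-(en - e) / T):
                x, e = xn, en
            T *= cool                         # lower the temperature
        return x, e

    # a rugged landscape with many local minima
    print(anneal(lambda x: 0.1 * x ** 2 + math.sin(3 * x), x0=8.0))
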
3

Vakili, Mohammadjavad. "Methods in Computational Cosmology." Thesis, New York University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10260795.

Abstract:

The state of the inhomogeneous universe and its geometry throughout cosmic history can be studied by measuring the clustering of galaxies and the gravitational lensing of distant faint galaxies. Lensing and clustering measurements from large datasets provided by modern galaxy surveys will forever shape our understanding of how the universe expands and how structures grow. Interpretation of these rich datasets requires careful characterization of uncertainties at different stages of data analysis: estimation of the signal, estimation of the signal uncertainties, model predictions, and connecting the model to the signal through probabilistic means. In this thesis, we attempt to address some aspects of these challenges.

The first step in cosmological weak lensing analyses is accurate estimation of the distortion of the light profiles of galaxies by large scale structure. These small distortions, known as the cosmic shear signal, are dominated by extra distortions due to telescope optics and atmosphere (in the case of ground-based imaging). This effect is captured by a kernel known as the Point Spread Function (PSF) that needs to be fully estimated and corrected for. We address two challenges ahead of accurate PSF modeling for weak lensing studies. The first challenge is finding the centers of point sources that are used for empirical estimation of the PSF. We show that the approximate methods for centroiding stars in wide surveys are able to optimally saturate the information content that is retrievable from astronomical images in the presence of noise.

The first step in weak lensing studies is estimating the shear signal by accurately measuring the shapes of galaxies. Galaxy shape measurement involves modeling the light profile of galaxies convolved with the light profile of the PSF. Detectors of many space-based telescopes such as the Hubble Space Telescope (HST) sample the PSF with low resolution. Reliable weak lensing analysis of galaxies observed by the HST camera requires knowledge of the PSF at a resolution higher than the pixel resolution of HST. This PSF is called the super-resolution PSF. In particular, we present a forward model of the point sources imaged through filters of the HST WFC3 IR channel. We show that this forward model can accurately estimate the super-resolution PSF. We also introduce a noise model that permits us to robustly analyze the HST WFC3 IR observations of crowded fields.

Then we try to address one of the theoretical uncertainties in the modeling of galaxy clustering on small scales. Study of small scale clustering requires assuming a halo model. Clustering of halos has been shown to depend on halo properties beyond mass, such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies with the halo occupation distribution (HOD) assume that halo mass alone is sufficient to characterize the connection between galaxies and halos. However, assembly bias could introduce systematic effects into the modeling of galaxy clustering if the expected number of galaxies in halos is correlated with other halo properties. Using high resolution N-body simulations and the clustering measurements of the Sloan Digital Sky Survey (SDSS) DR7 main galaxy sample, we show that the modeling of galaxy clustering improves slightly if we allow the HOD model to depend on halo properties beyond mass.

One of the key ingredients in precise parameter inference using galaxy clustering is accurate estimation of the error covariance matrix of clustering measurements. This requires generation of many independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies over a wide range of physical scales. We present a fast and accurate method based on low-resolution N-body simulations and an empirical bias model for generating mock catalogs. We use fast particle-mesh gravity solvers for generation of the dark matter density field, and we use Markov Chain Monte Carlo (MCMC) to estimate the bias model that connects dark matter to galaxies. We show that this approach enables the fast generation of mock catalogs that recover clustering at percent-level accuracy down to quasi-nonlinear scales.

Cosmological datasets are interpreted by specifying likelihood functions that are often assumed to be multivariate Gaussian. Likelihood-free approaches such as Approximate Bayesian Computation (ABC) can bypass this assumption by introducing a generative forward model of the data and a distance metric for quantifying the closeness of the data and the model. We present the first application of ABC in large scale structure for constraining the connections between galaxies and dark matter halos. We present an implementation of ABC equipped with Population Monte Carlo and a generative forward model of the data that incorporates sample variance and systematic uncertainties. (Abstract shortened by ProQuest.)
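
The logic of ABC is compact enough to sketch. Below is the plain rejection variant (the thesis uses the more efficient Population Monte Carlo version); the toy Gaussian model, summary statistic and tolerance are illustrative.

    import numpy as np

    def abc_rejection(data, simulate, prior_draw, distance, eps, n_keep=200):
        """Keep prior draws whose simulated data land within eps of the observation."""
        accepted = []
        while len(accepted) < n_keep:
            theta = prior_draw()                      # draw parameters from the prior
            if distance(simulate(theta), data) < eps: # no likelihood ever evaluated
                accepted.append(theta)
        return np.array(accepted)

    # toy example: infer the mean of a Gaussian without writing down its likelihood
    rng = np.random.default_rng(0)
    obs = rng.normal(1.5, 1.0, size=100)
    post = abc_rejection(
        data=obs,
        simulate=lambda th: rng.normal(th, 1.0, size=100),
        prior_draw=lambda: rng.uniform(-5, 5),
        distance=lambda a, b: abs(a.mean() - b.mean()),  # summary-statistic distance
        eps=0.05,
    )
    print(post.mean(), post.std())
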

4

Wilson, John Max. "Computational Studies of Geophysical Systems." Thesis, University of California, Davis, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10979293.

Abstract:

Earthquakes and tsunamis represent two of the most devastating natural disasters faced by humankind. Earthquakes can occur in a matter of seconds, with little to no warning. The governing variables of earthquakes, namely the stress profiles of vast regions of the earth's crust, cannot be measured in a comprehensive manner. Similarly, tsunami parameters are often accurately determined only minutes before waves make landfall. We are therefore left only with statistical analyses of past events to produce hazard forecasts for these disasters. Unfortunately, the events that cause the most damage also occur infrequently, and most regions have scientific records of earthquakes going back only a century, with modern instrumentation being widely distributed only in the past few decades. The 2011 M=9 Tohoku earthquake and tsunami, which killed close to sixteen thousand people, is the perfect case study of a country heavily invested in earthquake and tsunami risk reduction, yet unprepared for a once-in-a-millennium event.

Physics-based simulations are some of the most promising tools for learning more about these systems. These tools can be used to study many thousands of years' worth of synthetic seismicity. Additionally, scaling laws present in such complex geophysical systems can provide insights into dynamics otherwise hidden from view. This dissertation represents a collection of studies using these two tools. First, the Virtual Quake earthquake simulator is introduced, along with some of my contributions to its functionality and maintenance. A method based on Omori aftershock scaling is presented for verifying the spatial distribution of synthetic earthquakes produced by long-term simulators. The use of aftershock ground motion records to improve constraints on those same aftershock models is then explored. Finally, progress in constructing a tsunami early warning system based on the coupling of Virtual Quake and the Tsunami Squares wave simulator is presented. Taken together, these studies demonstrate the versatility and strength of complexity science and computational methods in the context of hazard analysis.
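
The Omori scaling mentioned above has a compact form: the aftershock rate decays as n(t) = K / (c + t)^p. A small Python sketch that integrates the modified Omori law for expected event counts; the parameter values are illustrative, not fitted to any catalog.

    import numpy as np

    def omori_rate(t, K=100.0, c=0.1, p=1.1):
        """Modified Omori law: aftershock rate (events/day) t days after a mainshock."""
        return K / (c + t) ** p

    def expected_count(t1, t2, **kw):
        """Expected number of aftershocks between t1 and t2 (trapezoidal integral)."""
        t = np.linspace(t1, t2, 10001)
        r = omori_rate(t, **kw)
        return ((r[1:] + r[:-1]) / 2 * np.diff(t)).sum()

    # counts in the first day versus days 10-30 after the mainshock
    print(expected_count(0.0, 1.0), expected_count(10.0, 30.0))
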

5

Venkataram, Prashanth Sanjeev. "Computational investigations of nanophotonic systems." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92676.

Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Physics, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-106).
In this thesis, I developed code in the MEEP finite-difference time-domain classical electromagnetic solver to simulate the quantum phenomenon of spontaneous emission and its enhancement by a photonic crystal. The results of these simulations were favorably cross-checked with semi-analytical predictions and experimental results. This code was further extended to simulate spontaneous emission from the top half of a sphere, where the top half is a dielectric material and the bottom half is a metal, in order to determine how effective the metal is at reflecting the emission toward the top. Separately, I used the SCUFF-EM boundary element method classical electromagnetic solver to simulate absorption and scattering, together called extinction, of infrared light by nanoparticles, and used those results to optimize the nanoparticle shapes and sizes for extinction at the desired infrared wavelength.
by Prashanth Sanjeev Venkataram.
S.B.
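
FDTD, the method MEEP implements, reduces to a pair of staggered curl updates on a Yee grid. A minimal one-dimensional vacuum sketch in plain Python, not MEEP's API; the grid size, Courant factor and source profile are illustrative.

    import numpy as np

    n, steps, courant = 400, 800, 0.5
    ez = np.zeros(n)        # electric field at integer grid points
    hy = np.zeros(n - 1)    # magnetic field at half-integer grid points

    for t in range(steps):
        hy += courant * np.diff(ez)                    # update H from the curl of E
        ez[1:-1] += courant * np.diff(hy)              # update E from the curl of H
        ez[n // 2] += np.exp(-((t - 60) / 15) ** 2)    # soft Gaussian point source

    print(ez.max())   # two pulses propagate outward from the source
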
6

Thompson, Travis W. "Tuning the Photochemical Reactivity of Electrocyclic Reactions: A Non-adiabatic Molecular Dynamics Study." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10839950.

Abstract:

We use non-adiabatic ab initio molecular dynamics to study the influence of substituent side groups on the photoactive unit (Z)-hexa-1,3,5-triene (HT). The Time-Dependent Density Functional Theory Surface Hopping method (TDDFT-SH) is used to investigate the influence of substituted isopropyl and methyl groups on the excited state dynamics. The 1,4- and 2,5-substituted molecules are simulated: 2,5-dimethylhexa-1,3,5-triene (DMHT), 2-isopropyl-5-methyl-1,3,5-hexatriene (2,5-IMHT), 3,7-dimethylocta-1,3,5-triene (1,4-IMHT), and 2,5-diisopropyl-1,3,5-hexatriene (DIHT). We find that HT and 1,4-IMHT have the lowest ring-closing branching ratios, 5.3% and 1.0%, respectively. For the 2,5-substituted derivatives, the branching ratio increases with increasing size of the substituents, exhibiting yields of 9.78%, 19%, and 24% for DMHT, 2,5-IMHT, and DIHT, respectively. The reaction channels are shown to prefer certain conformations at excitation, with the ring-closing reaction originating almost exclusively from the gauche-Z-gauche (gZg) rotamer. In addition, there is a conformational dependence in absorption: gZg conformers have on average lower S1 ← S0 excitation energies than the other rotamers. Furthermore, we develop a method to calculate a predicted quantum yield that is in agreement with the wavelength dependence observed in experiment for DMHT. The quantum yield method also predicts DIHT to have the highest CHD yield, 0.176 at 254 nm and 0.390 at 290 nm.

Additionally, we study the vitamin D derivative tachysterol (Tachy), which exhibits photochemical properties similar to those of HT and its derivatives. We find that the reaction channels of Tachy also have a conformational dependence: the reactive products toxisterol-D1 (2.3%), previtamin D (1.4%) and cyclobutene toxisterol (0.7%) prefer cEc, cEt, and tEc configurations at excitation, leaving the tEt conformer completely non-reactive. The rotamers likewise differ in absorption, with the cEc configuration having the lowest-energy S1 ← S0 excitation of the rotamers. The wavelength dependence of the rotamers should lead to selective properties of these molecules at excitation. An excitation on the red-shifted side of the maximum absorption peak will on average lead to excitation of the gZg rotamers more exclusively.

7

Darmawan, Andrew. "Quantum computational phases of matter." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/11640.

Abstract:
Universal quantum computation can be realised by measuring individual particles in a specially entangled state of many particles, called a universal resource state. This model of quantum computation, called measurement-based quantum computation (MBQC), provides a framework for studying the intrinsic computational power of physical systems. In this thesis I will investigate how universal resource states may arise naturally as ground states of interacting spin systems. In particular, I will describe new 'phases' of quantum matter, which are characterised by having universal resource states as ground states. This direction of research allows us to draw on techniques from both many-body quantum physics and quantum information theory.
8

Allehabi, Saleh. "Computational Spectroscopy of C-Like Mg VII." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2018. http://digitalcommons.auctr.edu/cauetds/153.

Abstract:
In this thesis, energy levels, lifetimes, oscillator strengths and transition probabilities of Mg VII have been calculated. The Hartree-Fock (HF) and Multiconfiguration Hartree-Fock (MCHF) methods were used in the calculations of these atomic properties. We have included the relativistic operators (mass correction, spin-orbit interaction, the one-body Darwin term and the spin-other-orbit interaction) in the Breit-Pauli Hamiltonian. The configurations (1s²)2s²2p², 2s2p³, 2p⁴, 2s²2p3s, 2s²2p3p, 2s2p²(⁴P)3s and 2s²2p3d, which correspond to 52 fine-structure levels, were included in the atomic model for the Mg VII ion. The present results have been compared with the NIST compilation and other theoretical results, and generally good agreement was found.
9

Flint, Christopher Robert. "Computational Methods of Lattice Boltzmann MHD." W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1530192360.

Abstract:
Lattice Boltzmann (LB) methods are a somewhat novel approach to Computational Fluid Dynamics (CFD) simulations. These methods simulate the Navier-Stokes and magnetohydrodynamics (MHD) equations on the mesoscopic (quasi-kinetic) scale by solving for a statistical distribution of particles rather than attempting to solve the nonlinear macroscopic equations directly. These LB methods allow for highly parallelizable code, since one replaces the difficult nonlinear convective derivatives of MHD by simple linear advection on a lattice. New developments in LB have significantly extended the numerical stability limits of its applicability. These developments include multiple relaxation times (MRT) in the collision operators, maximizing entropy to ensure positive definiteness of the distribution functions, and large eddy simulations of MHD turbulence. Improving the limits of this highly parallelizable simulation method makes it an ideal candidate for simulating various fluid and plasma problems, improving both the speed of the simulation and the spatial grid resolution of the LB algorithms on today's high-performance supercomputers. Some of these LB extensions are discussed and tested against various problems in magnetized plasmas.
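
The stream-and-collide structure that makes LB so parallelizable fits in a few lines. A minimal single-relaxation-time (BGK) D2Q9 sketch in Python for plain hydrodynamics; the MHD variants discussed in the thesis add a vector-valued distribution for the magnetic field, omitted here, and the grid size and relaxation time are illustrative.

    import numpy as np

    # D2Q9 lattice velocities and weights
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

    def equilibrium(rho, ux, uy):
        """Second-order expansion of the Maxwellian equilibrium."""
        cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
        return rho * w[:, None, None] * (1 + cu + 0.5 * cu ** 2
                                         - 1.5 * (ux ** 2 + uy ** 2))

    def lb_step(f, tau):
        """One step: linear streaming on the lattice, then local BGK collision."""
        for i in range(9):   # stream each population along its lattice velocity
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
        rho = f.sum(axis=0)  # macroscopic moments
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        return f - (f - equilibrium(rho, ux, uy)) / tau  # relax toward equilibrium

    nx = ny = 64
    rho0 = 1.0 + 0.01 * np.random.default_rng(0).random((nx, ny))
    f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
    for _ in range(100):
        f = lb_step(f, tau=0.8)
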
10

Shi, Hao. "Computational Studies of Strongly Correlated Quantum Matter." W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1499450059.

Abstract:
The study of strongly correlated quantum many-body systems is an outstanding challenge. Highly accurate results are needed for the understanding of practical and fundamental problems in condensed-matter physics, high energy physics, material science, quantum chemistry and so on. Familiar mean-field or perturbative methods tend to be ineffective. Numerical simulations provide a promising approach for studying such systems. The fundamental difficulty of numerical simulation is that the dimension of the Hilbert space needed to describe interacting systems increases exponentially with the system size. Quantum Monte Carlo (QMC) methods are one of the best approaches to tackle the problem of the enormous Hilbert space. They have been highly successful for boson systems and unfrustrated spin models. For systems with fermions, the exchange symmetry in general causes the infamous sign problem, making the statistical noise in the computed results grow exponentially with the system size. This hinders our understanding of interesting physics such as high-temperature superconductivity and the metal-insulator phase transition. In this thesis, we present a variety of new developments in auxiliary-field quantum Monte Carlo (AFQMC) methods, including the incorporation of symmetry in both the trial wave function and the projector, developing the constraint release method, using the force bias to drastically improve the efficiency in the Metropolis framework, identifying and solving the infinite variance problem, and sampling Hartree-Fock-Bogoliubov wave functions. With these developments, some of the most challenging many-electron problems are now under control. We obtain an exact numerical solution of the two-dimensional strongly interacting atomic Fermi gas, determine the ground state properties of the 2D Fermi gas with Rashba spin-orbit coupling, provide benchmark results for the ground state of the two-dimensional Hubbard model, and establish that the Hubbard model has a stripe order in the underdoped region.
11

Ross, Brian Christopher. "Computational tools for modeling and measuring chromosome structure." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/79262.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 99-112).
DNA conformation within cells has many important biological implications, but there are challenges both in modeling DNA, due to the need for specialized techniques, and in experiment, since tracing out in vivo conformations is currently impossible. This thesis contributes two computational projects to these efforts. The first project is a set of online and offline calculators of conformational statistics using a variety of published and unpublished methods, addressing the current lack of DNA model-building tools intended for general use. The second project is a reconstructive analysis that could enable in vivo mapping of DNA conformation at high resolution with current experimental technology.
by Brian Christopher Ross.
Ph.D.
12

Ranner, Thomas. "Computational surface partial differential equations." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/57647/.

Abstract:
Surface partial differential equations model several natural phenomena, for example in fluid mechanics, cell biology and material science. The domain of the equations can often have complex and changing morphology. This implies analytic techniques are unavailable, hence numerical methods are required. The aim of this thesis is to design and analyse three methods for solving different problems with surface partial differential equations at their core. First, we define a new finite element method for numerically approximating solutions of partial differential equations in a bulk region coupled to surface partial differential equations posed on the boundary of this domain. The key idea is to take a polyhedral approximation of the bulk region consisting of a union of simplices, to use piecewise polynomial boundary faces as an approximation of the surface, and to solve using isoparametric finite element spaces. We study this method in the context of a model elliptic problem. The main result in this chapter is an optimal order error estimate which is confirmed in numerical experiments. Second, we use the evolving surface finite element method to solve a Cahn-Hilliard equation on an evolving surface with prescribed velocity. We start by deriving the equation using a conservation law and appropriate transport formulae and provide the necessary functional analytic setting. The finite element method relies on evolving an initial triangulation by moving the nodes according to the prescribed velocity. We go on to show a rigorous well-posedness result for the continuous equations by showing convergence, along a subsequence, of the finite element scheme. We conclude the chapter by deriving error estimates and present various numerical examples. Finally, we stray from the surface finite element method to consider new unfitted finite element methods for surface partial differential equations. The idea is to use a fixed bulk triangulation and approximate the surface using a discrete approximation of the distance function. We describe and analyse two methods, using a sharp interface and a narrow band approximation of the surface, for a Poisson equation. Error estimates are described and numerical computations indicate very good convergence and stability properties.
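
As a reminder of the machinery all three methods build on, a minimal finite element sketch in Python for the flat one-dimensional Poisson problem; the surface and isoparametric elements analysed in the thesis generalise this assembly loop.

    import numpy as np

    def fem_poisson_1d(n, f, a=0.0, b=1.0):
        """Piecewise-linear FEM for -u'' = f on [a, b] with u(a) = u(b) = 0."""
        x = np.linspace(a, b, n + 1)
        h = np.diff(x)
        A = np.zeros((n + 1, n + 1))
        rhs = np.zeros(n + 1)
        for e in range(n):   # assemble stiffness and load, element by element
            A[e:e + 2, e:e + 2] += np.array([[1, -1], [-1, 1]]) / h[e]
            rhs[e:e + 2] += f(0.5 * (x[e] + x[e + 1])) * h[e] / 2
        A[0, :], A[-1, :] = 0.0, 0.0     # impose homogeneous Dirichlet conditions
        A[0, 0] = A[-1, -1] = 1.0
        rhs[0] = rhs[-1] = 0.0
        return x, np.linalg.solve(A, rhs)

    # -u'' = pi^2 sin(pi x) has the exact solution u = sin(pi x)
    x, u = fem_poisson_1d(50, lambda s: np.pi ** 2 * np.sin(np.pi * s))
    print(np.abs(u - np.sin(np.pi * x)).max())   # small discretization error
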
13

Djambazov, Georgi Stefanov. "Numerical techniques for computational aeroacoustics." Thesis, University of Greenwich, 1998. http://gala.gre.ac.uk/6149/.

Abstract:
The problem of aerodynamic noise is considered following the Computational Aeroacoustics approach, which is based on direct numerical simulation of the sound field. In the region of sound generation, the unsteady airflow is computed separately from the sound using Computational Fluid Dynamics (CFD) codes. Overlapping this region and extending further away is the acoustic domain, where the linearised Euler equations governing sound propagation in a moving medium are solved numerically. After considering a finite volume technique of improved accuracy, preference is given to an optimised higher order finite difference scheme which is validated against analytical solutions of the governing equations. A coupling technique of two different CFD codes with the acoustic solver is demonstrated to capture the mechanism of sound generation by vortices hitting solid objects in the flow. Sub-grid turbulence and its effect on sound generation has not been considered in this thesis. The contribution made to the knowledge of Computational Aeroacoustics can be summarised in the following: 1) Extending the order of accuracy of the staggered leap-frog method for the linearised Euler equations in both finite volume and finite difference formulations; 2) Heuristically determined optimal coefficients for the staggered dispersion relation preserving scheme; 3) A solution procedure for the linearised Euler equations involving mirroring at solid boundaries which combines the flexibility of the finite volume method with the higher accuracy of the finite difference schemes; 4) A method for identifying the sound sources in the CFD solution at solid walls and an expansion technique for sound sources inside the flow; 5) Better understanding of the three-level structure of the motions in air: mean flow, flow perturbations, and acoustic waves. It can be used, together with detailed simulation results, in the search for ways of reducing the aerodynamic noise generated by propellers, jets, wind turbines, tunnel exits, and wind-streamed buildings.
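
The staggered leap-frog idea in item 1) is easiest to see in a second-order one-dimensional toy for the linearised acoustic equations; the thesis works with optimized higher-order schemes in more dimensions, and the units here are illustrative.

    import numpy as np

    # linearised 1D acoustics: dp/dt = -rho c^2 du/dx,  du/dt = -(1/rho) dp/dx
    n, steps = 400, 600
    rho, c0, dx = 1.0, 1.0, 1.0
    dt = 0.5 * dx / c0                                  # CFL-stable time step
    p = np.exp(-0.01 * (np.arange(n) - n / 2) ** 2)     # initial pressure pulse
    u = np.zeros(n - 1)                                 # velocity on the staggered half-grid

    for _ in range(steps):
        u -= dt / (rho * dx) * np.diff(p)                # update u from the pressure gradient
        p[1:-1] -= dt * rho * c0 ** 2 / dx * np.diff(u)  # update p from the velocity divergence

    print(p.max())   # the pulse splits into left- and right-going waves
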
14

Matsuda, Takehisa. "Computational proposal for locating local defects in superconducting tapes." California State University, Long Beach, 2013.

15

Sponseller, Daniel Ray. "Molecular Dynamics Study of Polymers and Atomic Clusters." Thesis, George Mason University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10685723.

Abstract:

This dissertation contains investigations based on Molecular Dynamics (MD) of a variety of systems, from small atomic clusters to polymers in solution and in their condensed phases. The overall research is divided in three parts. First, I tested a thermostat newly proposed in the literature on the thermal equilibration of a small cluster of Lennard-Jones (LJ) atoms. The proposed thermostat is a Hamiltonian thermostat based on a logarithmic oscillator with the outstanding property that the mean value of its kinetic energy is constant, independent of the mass and energy. I inspected several weak-coupling interaction models between the LJ cluster and the logarithmic oscillator in 3D. In all cases I show that this coupling gives rise to a kinetic motion of the cluster center of mass without transferring kinetic energy to the interatomic vibrations. This is a failure of the published thermostat, because in small atomic clusters the temperature is mainly due to vibrations. This logarithmic oscillator cannot be used to thermostat any atomic or molecular system, small or large.
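
The "outstanding property" referred to above follows from the virial theorem; a one-line sketch in LaTeX, with b an arbitrary length scale:

    % For H = p^2/(2m) + T \ln|q/b|, time averages obey
    % <p dH/dp> = <q dH/dq>, and q d/dq (T \ln|q/b|) = T, so
    \left\langle \frac{p^2}{m} \right\rangle = T
    \quad\Longrightarrow\quad
    \left\langle \frac{p^2}{2m} \right\rangle = \frac{T}{2}.

The mean kinetic energy is therefore fixed by T alone, independent of mass and total energy, which is what made the logarithmic oscillator attractive as a proposed thermostat.
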

The second part of the dissertation is the investigation of the inherent structure of the polymer polyethylene glycol (PEG) solvated in three different solvents: water, water with 4% ethanol, and ethyl acetate. PEG with a molecular weight of 2000 Da (PEG2000) is a polymer with many applications, from industrial manufacturing to medicine, that in bulk is a paste. However, its structure in very dilute solutions deserved a thorough study, important for the onset of aggregation with other polymer chains. I introduced a modification to the GROMOS 54A7 force field parameters for modeling PEG2000 and ethyl acetate. Both force fields are new and have now been incorporated into the database of known residues in the molecular dynamics package Gromacs. This research required numerous high-performance computing MD simulations in the ARGO cluster of GMU for systems with about 100,000 solvent molecules. My findings show that PEG2000 in water acquires a ball-like structure without encapsulating solvent molecules. In addition, no hydrogen bonds were formed. In water with 4% ethanol, PEG2000 also acquires a ball-like structure, but the polymer ends fluctuate, folding outward and onward, although the general shape is still a compact ball-like structure.

In contrast, PEG2000 in ethyl acetate is quite elongated, like a very flexible spaghetti that forms kinks that unfold to give rise to folds and kinks in other positions along the polymer length. The behavior resembles an ideal polymer in a θ solvent. A Principal Component Analysis (PCA) of the minima composing the inherent structure evidences the presence of two distinct groups of ball-like structures of PEG2000 in water and water with 4% ethanol. These groups give a definite signature to the solvated structure of PEG2000 in these two solvents. In contrast, PCA reveals several groups of avoided states for PEG2000 in ethyl acetate that disqualify the possibility of its being an ideal polymer in a θ solvent.

The third part of the dissertation is a work in progress, where I investigate the condensed phase of PEG2000 and study the interface between the condensed phase and the three different solvents under study. With a strategy of combining NPT MD simulations at different temperatures and pressures, the PEG2000 condensed phase displays the experimental density within a 1% discrepancy at 300 K and 1 atm. This is a very encouraging result for this ongoing project.

16

Tsai, Carol Leanne. "Heuristic Algorithms for Agnostically Identifying the Globally Stable and Competitive Metastable Morphologies of Block Copolymer Melts." Thesis, University of California, Santa Barbara, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13423067.

Abstract:

Block copolymers are composed of chemically distinct polymer chains that can be covalently linked in a variety of sequences and architectures. They are ubiquitous as ingredients of consumer products and also have applications in advanced plastics, drug delivery, advanced membranes, and next generation nano-lithographic patterning. The wide spectrum of possible block copolymer applications is a consequence of block copolymer self-assembly into periodic, meso-scale morphologies as a function of varying block composition and architecture in both melt and solution states, and the broad spectrum of physical properties that such mesophases afford.

Materials exploration and discovery has traditionally been pursued through an iterative process between experimental and theoretical/computational collaborations. This process is often implemented in a trial-and-error fashion, and from the computational perspective of generating phase diagrams, usually requires some existing knowledge about the competitive phases for a given system. Self-Consistent Field Theory (SCFT) simulations have proven to be both qualitatively and quantitatively accurate in the determination, or forward mapping, of block copolymer phases of a given system. However, it is possible to miss candidates. This is because SCFT simulations are highly dependent on their initial configurations, and the ability to map phase diagrams requires a priori knowledge of what the competing candidate morphologies are. The unguided search for the stable phase of a block copolymer of a given composition and architecture is a problem of global optimization. SCFT by itself is a local optimization method, so we can combine it with population-based heuristic algorithms geared at global optimization to facilitate forward mapping. In this dissertation, we discuss the development of two such methods: Genetic Algorithm + SCFT (GA-SCFT) and Particle Swarm Optimization + SCFT (PSO-SCFT). Both methods allow a population of configurations to explore the space associated with the numerous states accessible to a block copolymer of a given composition and architecture.

GA-SCFT is a real-space method in which a population of SCFT field configurations “evolves” over time. This is achieved by initializing the population randomly, allowing the configurations to relax to local basins of attraction using SCFT simulations, then selecting fit members (lower free energy structures) to recombine their fields and undergo mutations to generate a new “generation” of structures that iterate through this process. We present results from benchmark testing of this GA-SCFT technique on the canonical AB diblock copolymer melt, for which the theoretical phase diagram has long been established. The GA-SCFT algorithm successfully predicts many of the conventional mesophases from random initial conditions in large, 3-dimensional simulation cells, including hexagonally-packed cylinders, BCC-packed spheres, and lamellae, over a broad composition range and weak to moderate segregation strength. However, the GA-SCFT method is currently not effective at discovery of network phases, such as the Double-Gyroid (GYR) structure.

PSO-SCFT is a reciprocal-space approach in which Fourier components of SCFT fields near the principal shell are manipulated. Effectively, PSO-SCFT facilitates the search through a space of reciprocal-space SCFT seeds which yield a variety of morphologies. Using intensive free energy as a fitness metric by which to compare these morphologies, the PSO-SCFT methodology allows us to agnostically identify low-lying competitive and stable morphologies. We present results for applying PSO-SCFT to conformationally symmetric diblock copolymers and a miktoarm star polymer, AB4, which offers a rich variety of competing sphere structures. Unlike the GA-SCFT method we previously presented, PSO-SCFT successfully predicts the double gyroid morphology in the AB diblock. Furthermore, PSO-SCFT successfully recovers the A15 morphology at a composition where it is expected to be stable in the miktoarm system, as well as several competitive metastable candidates and a new sphere morphology belonging to hexagonal space group 191, which has not been seen before in polymer systems. Thus, we believe the PSO-SCFT method provides a promising platform for screening for competitive structures in a given block copolymer system.
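
The swarm layer of PSO-SCFT is the canonical particle swarm update; a minimal Python sketch with a toy quadratic standing in for the SCFT free-energy evaluation, and illustrative hyperparameters.

    import numpy as np

    def pso(fitness, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Canonical PSO: inertia plus pulls toward personal and global bests."""
        rng = np.random.default_rng(0)
        x = rng.uniform(-5, 5, (n, dim))                 # particle positions
        v = np.zeros((n, dim))                           # particle velocities
        pbest, pval = x.copy(), np.array([fitness(p) for p in x])
        g = pbest[pval.argmin()].copy()                  # global best position
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            val = np.array([fitness(p) for p in x])
            better = val < pval                          # update personal bests
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    print(pso(lambda p: ((p - 1.0) ** 2).sum(), dim=4))  # converges near [1, 1, 1, 1]
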

17

Srisupattarawanit, Tarin. "Simulation of offshore wind turbines by computational multi-physics." kostenfrei, 2007. http://www.digibib.tu-bs.de/?docid=00020645.

18

Miller, David J., and Avijit Ghosh. "New methods in computational systems biology." Philadelphia, Pa.: Drexel University, 2008. http://hdl.handle.net/1860/2810.

19

Giddy, Andrew Peter. "Computational studies of structural phase transitions." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239582.

20

Han, Song. "Computational Methods for Multi-dimensional Neutron Diffusion Problems." Licentiate thesis, KTH, Physics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11298.

21

Steneteg, Peter, and Lars Erik Rosengren. "Design and Implementation of a General Molecular Dynamics Package." Thesis, Linköping University, The Department of Physics, Chemistry and Biology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6053.

Abstract:

There are many different codes available for molecular dynamics simulation. Most of these focus mainly on high performance. We have moved that focus towards modularity, flexibility and user friendliness. Our goal has been to design software that is easy to use, can handle many different kinds of simulations and is easily extendable to meet new requirements.

In the report we present the theory that is needed to understand the principles of a molecular dynamics simulation. The four different potentials we have used in the software are presented. Further, we give a detailed description of the design and the different design choices we have made while constructing the software.

We show some examples of how the software can be used and discuss some aspects of the performance of the implementation. Finally, we give our thoughts on the future of the software.
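
The core loop that any such package wraps is a force evaluation plus a time-reversible integrator. A minimal Lennard-Jones velocity-Verlet sketch in Python, in reduced units; the direct double loop is for clarity, where production codes use neighbor lists.

    import numpy as np

    def lj_forces(pos, eps=1.0, sig=1.0):
        """Pairwise Lennard-Jones forces via a direct double loop."""
        f = np.zeros_like(pos)
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                d = pos[i] - pos[j]
                r2 = (d * d).sum()
                s6 = (sig * sig / r2) ** 3
                fij = 24 * eps * (2 * s6 * s6 - s6) / r2 * d   # -dU/dr along d
                f[i] += fij
                f[j] -= fij
        return f

    def velocity_verlet(pos, vel, dt, nsteps, mass=1.0):
        """Standard time-reversible MD integrator."""
        f = lj_forces(pos)
        for _ in range(nsteps):
            vel += 0.5 * dt * f / mass
            pos += dt * vel
            f = lj_forces(pos)
            vel += 0.5 * dt * f / mass
        return pos, vel

    # a small LJ cluster started from a slightly perturbed cubic lattice
    rng = np.random.default_rng(2)
    pos = 1.1 * np.array([[i, j, k] for i in range(2) for j in range(2)
                          for k in range(2)], dtype=float)
    pos += 0.01 * rng.standard_normal(pos.shape)
    pos, vel = velocity_verlet(pos, np.zeros_like(pos), dt=0.002, nsteps=500)
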

22

Farr, Graham E. "Topics in computational complexity." Thesis, University of Oxford, 1986. http://ora.ox.ac.uk/objects/uuid:ad3ed1a4-fea4-4b46-8e7a-a0c6a3451325.

23

Alvarado, Walter. "Investigating Butyrylcholinesterase Inhibition via Molecular Mechanics." Thesis, California State University, Long Beach, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10639439.

Abstract:

We show that a combination of different theoretical methods is a viable approach to calculate and explain the relative binding affinities of inhibitors of the human butyrylcholinesterase enzyme. We probe structural properties of the enzyme-inhibitor complex in the presence of dialkyl phenyl phosphates and derivatives that include changes to the aromatic group and alkane-to-cholinyl substitutions that help these inhibitors mimic physiological substrates. Monte Carlo docking allowed for the identification of three regions within the active site of the enzyme where substituents of the phosphate group could be structurally stabilized. Computational clustering was used to identify distinct binding modes and their relative stabilities. Molecular dynamics simulations suggest that an essential asparagine residue, not previously characterized as strongly influencing inhibitor strength, may serve as a crucial component in catalytic and inhibitory activity. This study provides a framework for suggesting future inhibitors that we expect will be effective at sub-micromolar concentrations.

24

Stirewalt, Heather R. "Computation as a Model Building Tool in a High School Physics Classroom." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10785706.

Abstract:

The Next Generation Science Standards (NGSS) have established computational thinking as one of the science and engineering practices that should be developed in high school classrooms. Much of the work done by scientists is accomplished through the use of computation, but many students leave high school with little to no exposure to coding of any kind. This study outlines an attempt to integrate computational physics lessons into a high school algebra-based physics course that utilizes Modeling Instruction. Specifically, it aims to determine whether students who complete computational physics assignments demonstrate any difference in understanding of force concepts, as measured by the Force Concept Inventory (FCI), versus students who do not. Additionally, it investigates students' attitudes about learning computation alongside physics. Students were introduced to VPython programs during the course of a semester. The FCI was administered pre- and post-instruction, and the gains were measured against a control group. The Computational Modeling in Physics Attitudinal Student Survey (COMPASS) was administered post-instruction and the responses were analyzed. While the FCI gains were slightly larger on average than those of the control group, the difference was not statistically significant. This at least suggests that incorporating computational physics assignments does not adversely affect students' conceptual learning.

25

Iacchetta, Alexander S. "Spatio-Spectral Interferometric Imaging and the Wide-Field Imaging Interferometry Testbed." Thesis, University of Rochester, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10936092.

Abstract:

The light-collecting apertures of space telescopes are currently limited in part by the size and weight restrictions of launch vehicles, ultimately limiting the spatial resolution that can be achieved by the observatory. A technique that can overcome these limitations and provide superior spatial resolution is interferometric imaging, whereby multiple small telescopes can be combined to produce a spatial resolution comparable to that of a much larger monolithic telescope. In astronomy, the spectrum of the sources in the scene is crucial to understanding their material composition. So the ultimate goal is to have high-spatial-resolution imagery with sufficient spectral resolution for all points in the scene. This goal can be accomplished through spatio-spectral interferometric imaging, which combines the aperture-synthesis aspects of a Michelson stellar interferometer with the spectral capabilities of Fourier transform spectroscopy.

Spatio-spectral interferometric imaging can be extended to a wide-field imaging modality, which increases the collecting efficiency of the technique. This is the basis for NASA’s Wide-field Imaging Interferometry Testbed (WIIT). For such an interferometer, there are two light collecting apertures separated by a variable distance known as the baseline length. The optical path in one of the arms of the interferometer is variable, while the other path delay is fixed. The beams from both apertures are subsequently combined and imaged onto a detector. For a fixed baseline length, the result is many low-spatial-resolution images at a slew of optical path differences, and the process is repeated for many different baseline lengths and orientations. Image processing and synthesis techniques are required to reduce the large dataset into a single high-spatial-resolution hyperspectral image.

Our contributions to spatio-spectral interferometry include various aspects of theory, simulation, image synthesis, and processing of experimental data, with the end goal of better understanding the nature of the technique. We present the theory behind the measurement model for spatio-spectral interferometry, as well as the direct approach to image synthesis. We have developed a pipeline to preprocess experimental data, removing unwanted signatures in the data and registering all image measurements to a single orientation, which leverages information about the optical system's point spread function. In an experimental setup such as WIIT, the reference frame for the path difference measured for each baseline is unknown and must be accounted for. To overcome this obstacle, we created a phase-referencing technique that leverages point sources of known separation within the scene in order to recover unknown information regarding the measurements in a laboratory setting. We also provide a method that allows for the measurement of spatially and spectrally complicated scenes with WIIT by decomposing them prior to scene projection.

26

Sun, Baoqing. "Three dimensional computational imaging with single-pixel detectors." Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6127/.

Abstract:
Computational imaging with single-pixel detectors utilises the spatial correlation of light to obtain images. A series of structured illumination patterns is generated using a spatial light modulator to encode the spatial information of an object. The encoded object images are recorded as total intensities, with no spatial information, by a single-pixel detector. These intensities are then correlated with their corresponding illumination structures to derive an image. This correlation imaging method was first recognised as a quantum imaging technique called ghost imaging (GI) in 1995. Quantum GI uses the spatial correlation of entangled photon pairs to form images and was later demonstrated also using classically correlated light beams. In 2008, an adaptive classical GI system called computational GI, which employed a spatial light modulator and a single-pixel detector, was proposed. Since its proposal, this computational imaging technique has received intensive interest for its potential applications. The aim of the work in this thesis was to advance this new imaging technique to a more practical stage. Our contribution mainly includes three aspects. First, an advanced reconstruction algorithm called normalised ghost imaging was developed to improve the correlation efficiency. By normalising the object intensity with a reference beam, the reconstruction signal-to-noise ratio can be increased, especially for a more transmissive object. In the second work, a computational imaging scheme adapted from computational GI was designed by using a digital light projector for structured illumination. Compared to a conventional computational GI system, the adaptive system improved the reconstruction efficiency significantly, and for the first time correlation imaging using structured illumination and single-pixel detection was able to image a 3-dimensional reflective object with reasonable detail. By using several single-pixel detectors, the system was able to retrieve the 3-dimensional profile of the object. In the last work, effort was devoted to increasing the reconstruction speed of the single-pixel imaging technique, and a fast computational imaging system was built to generate real-time single-pixel videos.
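
The correlation at the heart of all of these schemes is short to write down: the image is the covariance between the single-pixel signal and each pattern pixel. A minimal Python sketch; the test object, pattern count and noise level are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    obj = np.zeros((32, 32))
    obj[8:24, 12:20] = 1.0                        # hypothetical binary test object

    m = 4000                                      # number of structured illuminations
    patterns = rng.random((m, 32, 32))            # random illumination patterns
    signal = (patterns * obj).sum(axis=(1, 2))    # single-pixel (bucket) intensities
    signal += 0.05 * signal.std() * rng.standard_normal(m)   # detector noise

    # ghost image: <S P> - <S><P>, the covariance of the signal with each pixel
    image = (signal[:, None, None] * patterns).mean(axis=0) \
            - signal.mean() * patterns.mean(axis=0)
    print(np.corrcoef(image.ravel(), obj.ravel())[0, 1])  # correlates with the object
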
27

Varner, Samuel John. "Experimental and computational techniques in carbon-13 NMR." W&M ScholarWorks, 1999. https://scholarworks.wm.edu/etd/1539623952.

Abstract:
An efficient method for calculating NMR lineshapes from anisotropic second-rank tensor interactions is presented. The algorithm produces lineshapes from asymmetric tensors by summing those from symmetric tensors. This approach significantly reduces the calculation time, greatly facilitating iterative nonlinear least squares fitting of experimental spectra. This algorithm has been modified to produce partially relaxed lineshapes and spectra of partially ordered samples.

Calculations for rapidly spinning samples show that spin-lattice relaxation time (T1Z) anisotropy varies with the angle between the spinning axis and the external field. When the rate of molecular motion is in the extreme narrowing limit, measurement of T1Z anisotropies for two different values of the spinning angle allows the determination of two linear combinations of the three static spectral densities, J0(0), J1(0) and J2(0). Experimental results for ferrocene demonstrate the utility of these linear combinations in the investigation of molecular dynamics with natural abundance 13C NMR. For ferrocene-d10, deuteron T1Z and quadrupolar order relaxation time (T1Q) anisotropies, along with the relaxation time of the 13C magic angle spinning (MAS) peak, provide sufficient information to determine the orientation dependence of all three individual spectral densities. The experimental results include the first determination of J0(0) in a solid sample.

A variety of experimental techniques were used in an investigation of the polyimides LaRC-IA, LaRC-TPI and LaRC-SI and related model compounds. Magic angle spinning was used to acquire 13C isotropic chemical shift spectra of these materials. The spectra were assigned as completely as possible. In addition, the principal components of some shielding tensors were measured using variable angle correlation spectroscopy. Of those studied, LaRC-SI is the only polymer that is soluble. However, after it is heated past its glass transition temperature, LaRC-SI becomes insoluble. Experiments were performed in an attempt to identify causes of this behavior. 1H and 13C NMR spectra of soluble and insoluble LaRC-SI are significantly different when magnetization from nuclei in rigid regions of the polymer is suppressed. Hydration studies of LaRC-SI and LaRC-IA show that absorbed water plasticizes these polymers.
28

Lefebvre, Antoine. "Computational acoustic methods for the design of woodwind instruments." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97000.

Abstract:
This thesis presents a number of methods for the computational analysis of woodwind instruments. The Transmission-Matrix Method (TMM) for the calculation of the input impedance of an instrument is described. An approach based on the Finite Element Method (FEM) is applied to the determination of the transmission-matrix parameters of woodwind instrument toneholes, from which new formulas are developed that extend the range of validity of current theories. The effect of a hanging keypad is investigated, and discrepancies with current theories are found for short toneholes. This approach was applied as well to toneholes on a conical bore, and we conclude that the tonehole transmission matrix parameters developed on a cylindrical bore are equally valid for use on a conical bore.

A boundary condition for the approximation of the boundary layer losses for use with the FEM was developed, and it enables the simulation of complete woodwind instruments. The comparison of the simulations of instruments with many open or closed toneholes with calculations using the TMM reveals discrepancies that are most likely attributable to internal or external tonehole interactions. This is not taken into account in the TMM and poses a limit to its accuracy. The maximal error is found to be smaller than 10 cents. The effect of the curvature of the main bore is investigated using the FEM. The radiation impedance of a wind instrument bell is calculated using the FEM and compared to TMM calculations; we conclude that the TMM is not appropriate for the simulation of flaring bells.

Finally, a method is presented for the calculation of the tonehole positions and dimensions under various constraints using an optimization algorithm, which is based on the estimation of the playing frequencies using the Transmission-Matrix Method. A number of simple woodwind instruments are designed using this algorithm and prototypes evaluated.
This thesis presents methods for the design of wind instruments with the help of scientific computation. The transmission-matrix method for the calculation of the input impedance is described. A method based on Finite Element calculations is applied to the determination of the transmission-matrix parameters of the toneholes of wind instruments, from which new equations are developed to extend the validity of the equations in the literature. Finite element simulations of the effect of a key suspended above the toneholes give results that differ from theory for short toneholes. The method is also applied to holes on a conical body, and we conclude that the transmission-matrix parameters developed for cylindrical pipes are equally valid for conical pipes.

A boundary condition for the approximation of viscothermal losses in Finite Element calculations is developed and allows the simulation of complete instruments. The comparison of the simulation results for instruments with several open or closed toneholes shows that the transmission-matrix method exhibits errors probably attributable to internal and external interactions between the holes. This effect is not taken into account in the transmission-matrix method and places a limit on the precision of that method. The maximal error is of the order of 10 cents. The effect of the curvature of the instrument body is studied with the Finite Element method. The radiation impedance of an instrument's bell is calculated with the transmission-matrix method and compared with the results of the Finite Element method; we conclude that the transmission-matrix method is not appropriate for the simulation of bells.

Finally, an optimization method for the calculation of the position and dimensions of the toneholes under several constraints is presented, based on the estimation of the playing frequencies with the transmission-matrix method. Several simple instruments are designed, and prototypes built and evaluated.
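
A minimal sketch of the Transmission-Matrix Method itself, in Python: each cylindrical segment contributes a 2x2 matrix, the cascade is closed by a load impedance, and the input impedance follows. Lossless segments and an ideal open end (Z_L = 0) are simplifying assumptions here; the thesis's calculations include viscothermal losses and tonehole branches.

    import numpy as np

    def cylinder_matrix(length, radius, f, c=343.0, rho=1.2):
        """Lossless plane-wave transmission matrix of a cylindrical segment."""
        k = 2 * np.pi * f / c
        zc = rho * c / (np.pi * radius ** 2)     # characteristic impedance
        return np.array([[np.cos(k * length), 1j * zc * np.sin(k * length)],
                         [1j * np.sin(k * length) / zc, np.cos(k * length)]])

    def input_impedance(segments, f, z_load=0.0):
        """Cascade the segment matrices, then Zin = (A Zl + B) / (C Zl + D)."""
        m = np.eye(2, dtype=complex)
        for length, radius in segments:
            m = m @ cylinder_matrix(length, radius, f)
        return (m[0, 0] * z_load + m[0, 1]) / (m[1, 0] * z_load + m[1, 1])

    # |Zin| of a 0.6 m open tube built from two segments
    freqs = np.linspace(50.0, 2000.0, 2000)
    z = [abs(input_impedance([(0.3, 0.0075), (0.3, 0.0075)], f)) for f in freqs]
    print(freqs[int(np.argmax(z))])   # lands near an odd quarter-wave resonance
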
APA, Harvard, Vancouver, ISO, and other styles
29

Andrews, Brian. "Computational Solutions for Medical Issues in Ophthalmology." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case15275972120621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Edelmaier, Christopher. "Computational Modeling of Mitosis in Fission Yeast." Thesis, University of Colorado at Boulder, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10837613.

Full text
Abstract:

Mitosis ensures the proper segregation of chromosomes into daughter cells, which is accomplished by the mitotic spindle. During fission yeast mitosis, chromosomes establish bi-orientation as the bipolar spindle assembles, meaning that sister kinetochores become attached to microtubules whose growth was initiated by the two sister poles. This process includes mechanisms that correct erroneous kinetochore attachments as they arise. This thesis presents a 3D physical model of spindle assembly in a Brownian dynamics-kinetic Monte Carlo simulation framework, with a realistic description of the physics of microtubule, kinetochore, and chromosome dynamics, in order to interrogate the dynamics and mechanisms of chromosome bi-orientation and error correction. We have added chromosomes to our previous physical model of spindle assembly, which included microtubules, a spherical nuclear envelope, motor proteins, crosslinking proteins, and spindle pole bodies (centrosomes). In this work, we have explored the mechanical properties of kinetochores and their interactions with microtubules that achieve amphitelic spindle attachments at high frequency. A minimal physical model yields simulations that generate chromosome attachment errors but resolve them, much as normal chromosomes do.
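As an illustration of the Brownian dynamics-kinetic Monte Carlo framework named in this abstract, the sketch below couples an overdamped Langevin update for a particle position to stochastic binding and unbinding events. The one-dimensional reduction, the spring attachment model, and all rates and constants are illustrative assumptions, not the thesis's spindle model.

```python
import numpy as np

rng = np.random.default_rng(0)

KT = 4.1e-3      # thermal energy, pN*um (room-temperature value)
GAMMA = 1.0      # drag coefficient, pN*s/um (illustrative)
DT = 1e-4        # time step, s
K_ON, K_OFF = 5.0, 1.0   # attachment/detachment rates, 1/s (illustrative)

def bd_kmc_step(pos, attached, anchor, k_spring=10.0):
    """One hybrid step: Brownian dynamics for the position, kinetic
    Monte Carlo for the kinetochore-microtubule bond state."""
    # Deterministic force: spring toward the microtubule tip while attached.
    force = -k_spring * (pos - anchor) if attached else 0.0
    # Overdamped Langevin update with thermal noise.
    noise = np.sqrt(2.0 * KT * DT / GAMMA) * rng.standard_normal()
    pos = pos + force * DT / GAMMA + noise
    # kMC: Poisson probability of a binding-state change during DT.
    rate = K_OFF if attached else K_ON
    if rng.random() < 1.0 - np.exp(-rate * DT):
        attached = not attached
    return pos, attached

pos, attached = 0.5, False
for _ in range(100000):
    pos, attached = bd_kmc_step(pos, attached, anchor=0.0)
print(pos, attached)
```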

APA, Harvard, Vancouver, ISO, and other styles
31

Garcia, Alberto J. "Parameter Dependence of Pair Correlations in Clean Superconducting-Magnetic Proximity Systems." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10841350.

Full text
Abstract:

Cooper pairs are known to tunnel through a barrier between superconductors in a Josephson junction. The spin states of the pairs can be a mixture of singlet and triplet states when the barrier is an inhomogeneous magnetic material. The purpose of this thesis is to better understand the behavior of pair correlations in the ballistic regime for different magnetic configurations and varying physical parameters. We use a tight-binding Hamiltonian to describe the system and consider singlet-pair conventional superconductors. Using the Bogoliubov-Valatin transformation, we derive the Bogoliubov-de Gennes equations and numerically solve the associated eigenvalue problem. Pair correlations in the magnetic Josephson junction are obtained from the Green's function formalism for a superconductor. This formalism is applied to Josephson junctions composed of discrete and continuous magnetic materials. The differences between representing pair correlations in the time and frequency domains are discussed, as well as the advantages of describing the Gor'kov functions on a log scale rather than the commonly used linear scale, and in a rotating basis as opposed to a static basis. Furthermore, the effects of parameters such as ferromagnetic width, magnetization strength, and band filling are investigated. Lastly, we compare results in the clean limit with known results in the diffusive regime.
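A minimal sketch of the numerical core described here, assembling and diagonalizing a tight-binding Bogoliubov-de Gennes matrix for a one-dimensional S/F/S junction, follows. The (electron-up, hole-down) basis, the sign convention for the exchange field, and all parameter values are illustrative assumptions; the thesis's treatment (self-consistency, inhomogeneous magnetic textures, Green's functions) is considerably richer.

```python
import numpy as np

def bdg_junction(n_s=20, n_f=10, t=1.0, mu=0.5, delta=0.1, h_ex=0.3):
    """Assemble a 1D S/F/S tight-binding BdG Hamiltonian in the
    (electron-up, hole-down) basis and return its eigenpairs."""
    n = 2 * n_s + n_f
    # Normal-state hopping Hamiltonian with chemical potential mu.
    h0 = -t * (np.eye(n, k=1) + np.eye(n, k=-1)) - mu * np.eye(n)
    # Exchange field, nonzero only in the ferromagnetic middle layer.
    h_field = np.zeros(n)
    h_field[n_s:n_s + n_f] = h_ex
    # s-wave pair potential, nonzero only in the superconducting leads.
    pair = np.zeros(n)
    pair[:n_s] = delta
    pair[n_s + n_f:] = delta
    # BdG matrix: [[H0 - h, Delta], [Delta, -(H0 + h)]] (one sign convention).
    hbdg = np.block([
        [h0 - np.diag(h_field), np.diag(pair)],
        [np.diag(pair), -(h0 + np.diag(h_field))],
    ])
    return np.linalg.eigh(hbdg)

energies, states = bdg_junction()
print(energies[np.abs(energies).argsort()[:4]])  # states nearest the Fermi level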

APA, Harvard, Vancouver, ISO, and other styles
32

Arias, Tomas A. "New analytic and computational techniques for finite temperature condensed matter systems." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/13158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Clarke, David A. "Scale Setting and Topological Observables in Pure SU(2) LGT." Thesis, The Florida State University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10935396.

Full text
Abstract:

In this dissertation, we investigate the approach of pure SU(2) lattice gauge theory to its continuum limit using the deconfinement temperature, six gradient scales, and six cooling scales. We find that cooling scales exhibit scaling behavior as good as that of gradient scales, while being computationally more efficient. In addition, we estimate the systematic error in continuum limit extrapolations of scale ratios by comparing standard scaling to asymptotic scaling. Finally, we study topological observables in pure SU(2) using cooling to smooth the gauge fields, and investigate the sensitivity of cooling scales to the topological charge. We find that large numbers of cooling sweeps lead to metastable charge sectors, without destroying physical instantons, provided the lattice spacing is fine enough and the volume is large enough. Continuum limit estimates of the topological susceptibility are obtained, of which we favor χ^(1/4)/T_c = 0.643(12). Differences between cooling scales in different topological sectors turn out to be too small to be detectable within our statistical error.
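The continuum-limit extrapolation of scale ratios mentioned here is, at leading order, a linear fit in the squared lattice spacing. A sketch with invented data points follows; the numbers are purely illustrative and are not results from this dissertation.

```python
import numpy as np

# Invented scale-ratio data at four lattice spacings (illustrative only).
a2 = np.array([0.040, 0.020, 0.010, 0.005])      # (a * Tc)^2
ratio = np.array([0.705, 0.672, 0.657, 0.650])   # e.g. a cooling/gradient scale ratio

# Leading-order ansatz: ratio(a) = r_cont + c * a^2.
c, r_cont = np.polyfit(a2, ratio, 1)
print(r_cont)  # intercept at a^2 = 0 is the continuum-limit estimate
```

Comparing such fits across different scale pairs, or against an asymptotic-scaling ansatz, is one way to expose the systematic error the abstract refers to.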

APA, Harvard, Vancouver, ISO, and other styles
34

Swoger, Maxx Ryan. "Computational Investigation of Material and Dynamic Properties of Microtubules." University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron1532108320185937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Thomas, Adrian Hugh. "Experimental and computational studies of amorphous magnetic systems." Thesis, Keele University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ruiz, Gutierrez Elfego. "Theoretical and computational modelling of wetting phenomena in smooth geometries." Thesis, Northumbria University, 2017. http://nrl.northumbria.ac.uk/34536/.

Full text
Abstract:
Capillarity and wetting concern the interfaces that separate immiscible fluids and their interaction with solid surfaces. Interest in understanding capillary and wetting phenomena in complex geometries has grown in recent years. This is partly motivated by applications, such as the micro-fabrication of surfaces that achieve a controlled wettability, but also by the fundamental role that the geometry of a solid surface can play in the statics and dynamics of liquids that come into contact with it. In this work, the statics and dynamics of liquids in contact with smooth but non-planar geometries are studied. The approach is theoretical and includes mathematical modelling and numerical simulations using a new lattice-Boltzmann simulation method. The latter can account for solid boundaries of arbitrary geometry and a variety of boundary conditions relevant to experimental situations. The focus is directed to two model systems. First, an analysis of the statics and dynamics of a droplet inside a wedge is performed by proposing a new shape for the droplet, referred to in this work as a "liquid barrel". Using this assumption, the static position and shape of the droplet in response to an external body force are predicted. The analysis is then extended to dynamical situations in the absence of external forces, describing the translational motion of the liquid barrel towards equilibrium. The proposed analytical model was validated by comparison with full 3D lattice-Boltzmann simulations and with recent experimental results. These ideas are applied to achieve energy-invariant manipulation of a liquid barrel in a reconfigurable wedge. As a second model system, the evaporation of a sessile droplet in contact with a wavy solid surface was studied. Due to the non-planar solid topography, the equilibrium droplet position is restricted to a discrete set of locations. It is shown that when the amplitude of the surface is sufficiently high, the droplet can suddenly readjust its shape and location to a new equilibrium configuration. These events, termed "snaps", occur on a time-scale much shorter than the evaporation time-scale. Through numerical simulations and theoretical analysis, the study reveals the causes of the snap transitions, which lie in bifurcations of the droplet shape. The analysis and results are compared against recent experiments of droplets evaporating on smooth sinusoidal surfaces. With the advent of low-friction surfaces, on which static friction is practically absent, the mobility of droplets is close to ideal, and predicting and controlling their position and shape becomes a challenge. The analysis and results presented in this work can be used to manipulate the position and define the shape of droplets via the geometry of their confinement.
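The workhorse here is a lattice-Boltzmann method. As a point of reference, a minimal single-phase D2Q9 BGK collide-and-stream step on a periodic box is sketched below; the thesis's method additionally handles multiphase free energies, wetting boundary conditions, and curved solid boundaries, none of which appear in this toy.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann expansion in lattice units (cs^2 = 1/3)."""
    cu = np.einsum('qd,xyd->xyq', C, u)
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return rho[..., None] * W * (1.0 + 3.0 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbm_step(f, tau=0.8):
    """One BGK collide-and-stream update on a fully periodic box."""
    rho = f.sum(-1)
    u = np.einsum('xyq,qd->xyd', f, C) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau          # collision
    for q, (cx, cy) in enumerate(C):                 # streaming
        f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
    return f

nx = ny = 32
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
f[nx // 2, ny // 2] *= 1.1                           # small density bump
for _ in range(100):
    f = lbm_step(f)
print(f.sum())  # total mass is conserved by construction
```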
APA, Harvard, Vancouver, ISO, and other styles
37

Fiorini, Francesca. "Experimental and computational dosimetry of laser-driven radiation beams." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3371/.

Full text
Abstract:
Laser-driven particle acceleration is an area of increasing research interest given the recent development of short-pulse, high-intensity lasers. A significant difficulty in this field is posed by the exceptionally large instantaneous dose rates which such particle beams can produce. This represents a challenge for standard dosimetry techniques, and more sophisticated procedures need to be explored. In this thesis I present novel detection and characterisation methods using a combination of GafChromic films, TLD chips, nuclear activation and Monte Carlo simulations, applicable to laser-driven beams. Part of the work is focused on the detection of laser-driven protons used to irradiate V79 cells in order to determine the feasibility of laser-driven proton therapy. A dosimetry method involving GafChromic films and numerical simulations has been developed for this purpose and used to obtain cell survival results, which are in agreement with those obtained with conventionally accelerated proton beams. Another part is dedicated to the detection and characterisation of laser-driven electron and X-ray beams. An innovative simulation method to obtain the temperature of the electrons accelerated by the laser, and to predict the subsequently generated X-ray beam, has been developed and compared with the acquired experimental data.
APA, Harvard, Vancouver, ISO, and other styles
38

Bruma, Alina. "A combined experimental and computational study of AuPd nanoparticles." Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/4372/.

Full text
Abstract:
The thesis is focused on the investigation of the structural properties of AuPd nanoparticles via theoretical and experimental studies. For the first system, 98-atom AuPd nanoclusters, a theoretical analysis has been employed to study the energetics and segregation effects and to assess how typical the Leary Tetrahedron (LT) motif is. Although this motif is the most stable at the empirical level, it loses stability at the DFT level against FCC and Marks decahedron structures. The second system is the Au24Pd1 nanocluster. Theoretically, by performing a search at the DFT level using Basin Hopping Monte Carlo, we identified pyramidal cage structures as putative global minima, in which Pd sits in the core and Au occupies surface positions. Löwdin analysis revealed charge transfer between Pd and Au, explaining the enhanced catalytic activity with respect to Au25 clusters. Experimentally, STEM has been employed for the structural characterization of Au24Pd1 clusters supported on multiwall carbon nanotubes. Whenever possible, we have tried to link the experimental analysis to the theoretical findings. The third system comprised evaporated AuPd nanoparticles, where we observed that the annealing process led to the formation of L12 ordered phases as well as layered and core-shell structures. This study aimed to provide insight into the segregation and energetics of AuPd nanoparticles, with potential applications in nanocatalysis.
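Basin Hopping Monte Carlo, used above for the global-minimum search, alternates random perturbation, local relaxation, and a Metropolis test on the relaxed energies. The sketch below runs it on a Lennard-Jones cluster as a cheap stand-in for the DFT energy surface; the potential, step size, temperature, and step count are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def lj_energy(flat_xyz):
    """Lennard-Jones energy as a cheap stand-in for the DFT total energy."""
    xyz = flat_xyz.reshape(-1, 3)
    iu = np.triu_indices(len(xyz), k=1)
    r = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)[iu]
    return float(np.sum(4.0 * (r**-12 - r**-6)))

def basin_hopping(n_atoms=13, steps=200, kt=0.8, step_size=0.4):
    """Monte Carlo over local minima: perturb, locally relax, Metropolis-accept."""
    x = rng.uniform(-1.5, 1.5, size=3 * n_atoms)
    x = minimize(lj_energy, x, method="L-BFGS-B").x
    e = lj_energy(x)
    best_x, best_e = x.copy(), e
    for _ in range(steps):
        trial = minimize(lj_energy, x + step_size * rng.standard_normal(x.shape),
                         method="L-BFGS-B").x
        e_trial = lj_energy(trial)
        # Metropolis criterion on the *relaxed* energies.
        if e_trial < e or rng.random() < np.exp((e - e_trial) / kt):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x.reshape(-1, 3), best_e

_, e_min = basin_hopping()
print(e_min)  # LJ13 global minimum is about -44.33 in reduced units
```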
APA, Harvard, Vancouver, ISO, and other styles
39

Igram, Dale J. "Computational Modeling and Characterization of Amorphous Materials." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1564347980986716.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Smith, Katherine Margaret. "Effects of Submesoscale Turbulence on Reactive Tracers in the Upper Ocean." Thesis, University of Colorado at Boulder, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10623667.

Full text
Abstract:

In this dissertation, Large Eddy Simulations (LES) are used to model the coupled turbulence-reactive tracer dynamics within the upper mixed layer of the ocean. Prior work has shown that LES works well over the spatial and time scales relevant to both turbulence and reactive biogeochemistry. Additionally, the code used is able to carry an arbitrary number of tracer equations, allowing for easy expansion of the species reactions. Research in this dissertation includes a study of 15 idealized non-reactive tracers within an evolving large-scale temperature front, in order to determine and understand the fundamental dynamics underlying turbulence-tracer interaction in the absence of reactions. The focus of this study, in particular, was on understanding the evolution of biogeochemically relevant, non-reactive tracers in the presence of both large (~5 km) submesoscale eddies and small-scale (~100 m) wave-driven Langmuir turbulence. The 15 tracers studied have different initial, boundary, and source conditions, and significant differences are seen in their distributions depending on these conditions. Differences are also seen between regions where submesoscale eddies and small-scale Langmuir turbulence are both present, and regions with only Langmuir turbulence. A second study focuses on the examination of Langmuir turbulence effects on upper ocean carbonate chemistry. Langmuir mixing time scales are similar to those of the chemical reactions, resulting in potentially strong tracer-flow coupling effects. The strength of the Langmuir turbulence is varied, from no wave-driven turbulence (i.e., only shear-driven turbulence) to Langmuir turbulence that is much stronger than that found in typical upper ocean conditions. Three different carbonate chemistry models are also used in this study: time-dependent chemistry, equilibrium chemistry, and no chemistry (i.e., non-reactive tracers). The third and final study described in this dissertation details the development of a reduced-order biogeochemical model with 17 state equations that can accurately reproduce the Bermuda Atlantic Time-series Study (BATS) ecosystem behavior, but that can also be integrated within high-resolution LES.
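The second study's premise, that Langmuir mixing and carbonate chemistry act on comparable time scales, is essentially a Damköhler-number competition. The two-box toy below, with made-up time scales and concentrations, shows how the surface value depends on the ratio of mixing to reaction rates; it is an editorial illustration, not the dissertation's LES model.

```python
import numpy as np

def mixed_layer_tracer(t_end=3600.0, dt=1.0, tau_mix=600.0, tau_chem=300.0):
    """Toy competition between mixing and relaxation chemistry: a surface box
    and a deep box exchange fluid on tau_mix while the surface concentration
    relaxes toward chemical equilibrium on tau_chem."""
    c_surf, c_deep, c_eq = 0.0, 1.0, 0.2
    for _ in range(int(t_end / dt)):
        exchange = (c_deep - c_surf) / tau_mix
        reaction = (c_eq - c_surf) / tau_chem
        c_surf += dt * (exchange + reaction)
        c_deep += dt * (-exchange)
    return c_surf, c_deep

# When tau_chem << tau_mix the surface tracks equilibrium; when the two are
# comparable, as for Langmuir turbulence, neither limit applies.
print(mixed_layer_tracer())
```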

APA, Harvard, Vancouver, ISO, and other styles
41

Chu, Feng. "Validation of a Lagrangian model for laser-induced fluorescence." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6077.

Full text
Abstract:
Extensive information can be obtained on wave-particle interactions and wave fields by direct measurement of perturbed ion distribution functions using laser-induced fluorescence (LIF). For practical purposes, LIF is frequently performed on metastable states that are produced from neutral gas particles and ions in other electronic states. If the laser intensity is increased to obtain a better LIF signal, then optical pumping can produce systematic effects depending on the collision rates which control the metastable population and lifetime. We numerically simulate the ion velocity distribution measurement and wave-detection process using a Lagrangian model for the LIF signal. The simulations show that optical pumping broadening affects the ion velocity distribution function (IVDF) f0(v) and its first-order perturbation f1(v,t) when the laser intensity is increased above a certain level. The results also suggest that ion temperature measurements are only accurate when the metastable ions live longer than the ion-ion collision mean free time. For the purposes of wave detection, the wave period has to be significantly shorter than the lifetime of the metastable ions for a direct interpretation. Experiments are carried out to study the optical pumping broadening and metastable lifetime effects, and the results are compared with the simulation in order to validate the Lagrangian model for LIF. More generally, metastable ions may be viewed as test particles; as long as an appropriate model is available, LIF can be extended to a range of environments.
APA, Harvard, Vancouver, ISO, and other styles
42

Huang, Xun. "Adaptive mesh refinement for computational aeroacoustics." Thesis, University of Southampton, 2006. https://eprints.soton.ac.uk/47087/.

Full text
Abstract:
This thesis describes a parallel block-structured adaptive mesh refinement (AMR) method that is employed to solve computational aeroacoustic problems with the aim of improving computational efficiency. AMR adaptively refines and coarsens the computational mesh as sound propagates, increasing grid resolution only in the areas of interest. While it shares many features with established AMR approaches, the current approach differs markedly: rather than the low-order schemes generally used previously, a high-order spatial difference scheme is employed to improve numerical dispersion and dissipation qualities. To use a high-order scheme with AMR, a number of numerical issues associated with fine-coarse block interfaces on an adaptively refined mesh, such as interpolation, filtering and artificial selective damping techniques, and accuracy, are addressed. In addition, the asymptotic stability and the transient behaviour of a high-order spatial scheme on an adaptively refined mesh are studied with eigenvalue analysis and pseudospectra analysis respectively. Furthermore, the fundamental AMR algorithm is simplified in order to make the implementation more manageable. Particular emphasis has been placed on solving sound radiation from a generic aero-engine bypass geometry with mean flow. The AMR approach is extended to support a body-fitted multi-block mesh. The radiation from an intake duct is modelled by the linearised Euler equations, while the radiation from an exhaust duct is modelled by the extended acoustic perturbation equations to suppress hydrodynamic instabilities generated in a sheared mean flow. After solving for the near-field sound, the associated far-field sound directivity is estimated by solving the Ffowcs Williams-Hawkings equation. The overall results demonstrate the accuracy and efficiency of the presented AMR method, but also reveal some limitations; possible ways to avoid these limitations are given at the end of the thesis.
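The refinement half of an AMR cycle reduces, at its simplest, to flagging cells where a solution indicator is large and subdividing them. The 1D pointwise sketch below uses an undivided-gradient indicator with midpoint insertion; block-structured AMR as described in this thesis instead groups flagged cells into rectangular patches, and its actual refinement criterion may differ.

```python
import numpy as np

def flag_cells(u, tol=0.05):
    """Mark cells whose undivided gradient exceeds tol -- a common
    AMR refinement indicator."""
    return np.abs(np.diff(u, append=u[-1])) > tol

def refine(x, u, flags):
    """Insert midpoints in flagged cells, with linear interpolation
    of the solution at the new points."""
    new_x, new_u = [], []
    for i in range(len(x) - 1):
        new_x.append(x[i]); new_u.append(u[i])
        if flags[i]:
            new_x.append(0.5 * (x[i] + x[i + 1]))
            new_u.append(0.5 * (u[i] + u[i + 1]))
    new_x.append(x[-1]); new_u.append(u[-1])
    return np.array(new_x), np.array(new_u)

x = np.linspace(0.0, 1.0, 65)
u = np.exp(-((x - 0.5) / 0.02) ** 2)      # a sharp acoustic pulse
fx, fu = refine(x, u, flag_cells(u))
print(len(x), "->", len(fx), "points after one refinement pass")
```

Coarsening works in reverse: cells whose indicator drops below the threshold are merged back, so resolution follows the pulse as it propagates.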
APA, Harvard, Vancouver, ISO, and other styles
43

Nathaniel, James Edward II. "A computational study of electronic structures of graphene allotropes with electrical bias." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2011. http://digitalcommons.auctr.edu/dissertations/582.

Full text
Abstract:
Graphene is a two-dimensional system consisting of a single planar layer of carbon atoms with hexagonal arrangement. Various approaches have been proposed to control its physical and electronic properties. When appropriately cut, rolled, and bonded, graphene generates single-walled carbon nanotubes of varying diameters. Graphite intercalation compounds are materials formed by inserting molecular layers of compounds between stacked sheets of graphene. We have studied the physical and electronic responses of two graphene layers intercalated with FeCl3 and of metallic, semi-metallic and semiconducting nanotubes when normally biased using electric fields of various magnitudes. By means of first-principles density functional calculations, our results indicate that the band structures of the aforementioned graphene structures are modified upon application of a bias voltage. In the case of nanotubes, electric biasing allows tuning of the band gap leading to a transition from semiconducting to metallic state, or vice versa. In the case of the FeCl3 intercalant compounds, electric biasing results in shifting of the Dirac point.
APA, Harvard, Vancouver, ISO, and other styles
44

Biafore, Michael. "A computational role for the one-dimensional Lieb-Schultz-Mattis spin model." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Povall, Timothy Mark. "Dense granular flow in rotating drums: a computational investigation of constitutive equations." Doctoral thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29694.

Full text
Abstract:
The constitutive laws of dense granular flow are investigated. Simulations of a drum with periodic boundary conditions, rotating at varying speeds, are performed. From the resulting data, kinematic and kinetic fields are extracted and used to investigate the validity of constitutive relations proposed in the literature. Two key constitutive assumptions are (a) isotropy and (b) incompressibility. The rotating drum system is found to be largely isotropic for high rotational speeds. For low rotational speeds, anisotropy is observed in the bottom part of the system, where the particles are flowing upwards. A small degree of compressibility is observed in the downward-flowing layer. The friction coefficient for the granular constitutive relations is also investigated; an empirically derived friction law fits the data better than other friction laws proposed in the literature. Lastly, two scaling laws are investigated: the scaling between the scaled flow rate (flux) and the thickness of the downward-flowing layer, and the scaling between the dynamic angle of repose of the bed and the flux through the downward-flowing layer. The thickness-flux scaling is measured by interpolating the flux over a number of slices through the flowing layer, in several different ways: the size of the measured section through the flowing layer is varied, the orientation of the slices is varied, and it is investigated whether the total velocity and the tangential velocity produce the same scaling. The size of the section of the flowing layer significantly changes the scaling, showing that the scaling is not constant throughout the flowing layer. The dynamic angle of repose is determined using two methods: one defined unambiguously as the repose angle of the ellipse fitted to the equilibrium surface, and the other as the changing angle of the tangent to the equilibrium or free surface. The first repose angle is found to be highly dependent on the flux even in the limit of infinite drum length, which is modelled using axial periodic boundary conditions. The second definition results in two sets of repose angles with complex behaviour that may be due to inertial effects. An instability in the system is observed and conjectured to be due to a frictional threshold that is breached as the rotational speed of the drum increases. Algorithms for calculating field variables and features of the charge are presented.
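For context on the friction laws being tested, one widely used candidate is the μ(I) rheology of Jop, Forterre and Pouliquen (2006), in which the effective friction depends on a dimensionless inertial number. The sketch below evaluates it with their glass-bead coefficients; those coefficients and the chosen pressure and grain size are illustrative, and the thesis's empirically derived law is a different fit.

```python
import numpy as np

def inertial_number(shear_rate, d, pressure, rho_grain):
    """Dimensionless inertial number I = gamma_dot * d / sqrt(P / rho)."""
    return shear_rate * d / np.sqrt(pressure / rho_grain)

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.28):
    """mu(I) friction law of Jop, Forterre & Pouliquen (2006); coefficients
    here are their glass-bead values, not fitted rotating-drum data."""
    return mu_s + (mu_2 - mu_s) / (1.0 + I0 / I)

# Effective friction across a sweep of shear rates at fixed pressure.
gdot = np.logspace(-2, 2, 5)           # shear rate, 1/s
I = inertial_number(gdot, d=1e-3, pressure=100.0, rho_grain=2500.0)
print(np.column_stack([I, mu_of_I(I)]))
```

The law interpolates between a quasi-static friction coefficient mu_s at vanishing I and a limiting dynamic value mu_2 at large I, which is the kind of behaviour the drum data can confirm or contradict.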
APA, Harvard, Vancouver, ISO, and other styles
46

Heinrich, Lukas. "Searches for Supersymmetry, RECAST, and Contributions to Computational High Energy Physics." Thesis, New York University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13421570.

Full text
Abstract:

The search for phenomena Beyond the Standard Model (BSM) is the primary motivation for the experiments at the Large Hadron Collider (LHC). This dissertation assesses the experimental status of supersymmetric theories based on analyses of data collected by the ATLAS experiment during the first and second runs of the LHC. Both R-parity-preserving theories defined within the framework of the Minimally Supersymmetric Standard Model (MSSM) and R-parity-violating models are studied. Further, a framework for systematic reinterpretation, RECAST, is presented, which enables a streamlined, community-wide approach to the search for BSM physics through the preservation of data analyses as parametrized computational workflows. A language and execution engine for such workflows of heterogeneous workloads on distributed computing systems is presented. Additionally, a new implementation of the HistFactory class of binned likelihoods, based on auto-differentiable computational graphs, is developed for accelerated and distributed inference. Finally, to enable efficient reinterpretation, a method of estimating excursion sets of one or more resource-intensive, multivariate, black-box functions, such as p-value functions, through an information-based Bayesian Optimization procedure is introduced.
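The core idea of a binned likelihood on an auto-differentiable graph can be shown in a few lines: a per-bin Poisson model whose negative log-likelihood is differentiated exactly by the framework. The sketch below uses JAX with a single signal-strength parameter and invented bin counts, and omits all nuisance parameters; it is a toy illustration of the approach, not the dissertation's implementation or its API.

```python
import jax
import jax.numpy as jnp

signal = jnp.array([5.0, 10.0, 3.0])       # expected signal counts per bin (invented)
background = jnp.array([50.0, 52.0, 40.0])  # expected background counts (invented)
observed = jnp.array([53.0, 65.0, 42.0])    # observed counts (invented)

def nll(mu):
    """Negative log-likelihood of a single-channel counting experiment,
    Poisson(n | mu*s + b) per bin, up to a mu-independent log(n!) constant.
    Systematics and nuisance parameters are omitted for brevity."""
    lam = mu * signal + background
    return jnp.sum(lam - observed * jnp.log(lam))

grad_nll = jax.grad(nll)  # exact gradient from the computational graph

# Crude Newton iteration for the best-fit signal strength mu-hat.
mu = 1.0
for _ in range(20):
    mu = mu - grad_nll(mu) / jax.grad(grad_nll)(mu)
print(mu)
```

Because gradients and Hessians come from the graph rather than finite differences, fits like this vectorize and accelerate naturally, which is the point of the reimplementation described above.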

APA, Harvard, Vancouver, ISO, and other styles
47

Nguyen, Tung Le. "Computational Modeling of Slow Neurofilament Transport along Axons." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1547036394834075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Harris, Chelsea E. "One Shell, Two Shell, Red Shell, Blue Shell: Numerical Modeling to Characterize the Circumstellar Environments of Type I Supernovae." Thesis, University of California, Berkeley, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10837128.

Full text
Abstract:

Though fundamental to our understanding of stellar, galactic, and cosmic evolution, the stellar explosions known as supernovae (SNe) remain mysterious. We know that mass loss and mass transfer are central processes in the evolution of a star to the supernova event, particularly for thermonuclear Type Ia supernovae (SNe Ia), which are in a close binary system. The circumstellar environment (CSE) contains a record of the mass lost from the progenitor system in the centuries prior to explosion and is therefore a key diagnostic of SN progenitors. Unfortunately, tools for studying the CSE are specialized to stellar winds rather than the more complicated and violent mass-loss processes hypothesized for SN Ia progenitors.

This thesis presents models for constraining the properties of a CSE detached from the stellar surface. In such cases, the circumstellar material (CSM) may not be observed until interaction occurs and dominates the SN light weeks or even months after maximum light. I suggest we call SNe with delayed interaction SNe X;n (i.e. SNe Ia;n, SNe Ib;n). I performed numerical hydrodynamic simulations and radiation transport calculations to study the evolution of shocks in these systems. I distilled these results into simple equations that translate radio luminosity into a physical description of the CSE. I applied my straightforward procedure to derive upper limits on the CSM for three SNe Ia: SN 2011fe, SN 2014J, and SN 2015cp. I modeled interaction to late times for the SN Ia;n PTF11kx; this led to my participation in the program that discovered interaction in SN 2015cp. Finally, I expanded my simulations to study the Type Ib;n SN 2014C, the first optically confirmed SN X;n with a radio detection. My SN 2014C models represent the first time an SN X;n has been simultaneously modeled at X-ray and radio wavelengths.

APA, Harvard, Vancouver, ISO, and other styles
49

Dadachanji, Zareer. "Computational many-body theory of electron correlation in solids." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ahmed, Israr. "Mathematical and computational modelling of soft and active matter." Thesis, University of Central Lancashire, 2016. http://clok.uclan.ac.uk/18641/.

Full text
Abstract:
The collective motion of organisms, such as flights of birds, the swimming of schools of fish, the migration of bacteria and the movement of herds across long distances, is a fascinating phenomenon that has intrigued man for centuries. Long and detailed observations have resulted in numerous hypotheses and theories regarding the collective motion of animals and organisms. In recent years, developments in supercomputers and general computational power, along with highly refined mathematical theories, have enabled the collective motion of particles to be investigated in a logical and systematic manner. Hence, in this study mathematical principles are harnessed along with computational programmes in order to obtain a better understanding of the collective behaviour of particles. Two types of systems have been considered, namely homogeneous and heterogeneous systems, which represent collective motion without and with obstacles respectively. The Vicsek model has been used to investigate the collective behaviour of the particles in 2D and 3D systems. Based on this, a new model was developed: the obstacle avoidance model, which describes the interaction of self-propelled particles with fixed and moving obstacles in a heterogeneous medium. It was established using this model that the collective motion of the particles was very low when higher noise was involved in the system, and higher when the noise and interaction radius were low. Very little is known about the collective motion of self-propelled particles in heterogeneous media, especially when noise is added to the system and when the interaction radius between particles and obstacles is changed. In the presence of moving obstacles, particles exhibited greater collective motion than with fixed obstacles; collective motion showed non-monotonic behaviour, and the existence of an optimal noise maximised the collective motion. In the presence of moving obstacles there were also fluctuations in the value of the order parameter. Studies of collective systems are highly useful for producing artificial swarms of autonomous vehicles, developing effective fishing strategies, and understanding human interactions in crowds in order to devise and implement efficient and safe crowd-control policies. These will help to avoid fatalities in highly crowded situations such as music concerts, sports and entertainment events with large audiences, and crowded shopping centres. In future work the obstacle avoidance model can be extended to include combinations of motionless and moving obstacles, bringing the modelling closer to reality.
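For reference, the homogeneous (obstacle-free) 2D Vicsek update used as the starting point here fits in a short script: each particle adopts the average heading of its neighbours within a radius, plus angular noise, and a polar order parameter quantifies the alignment. Parameter values below are illustrative, and the obstacle-avoidance extension is not included.

```python
import numpy as np

rng = np.random.default_rng(2)

def vicsek_step(pos, theta, L=10.0, r=1.0, v=0.3, eta=0.3):
    """One 2D Vicsek update with periodic boundaries: adopt the mean heading
    of all neighbours within radius r, then add uniform angular noise."""
    delta = pos[:, None] - pos[None, :]
    delta -= L * np.round(delta / L)                 # minimum-image convention
    neigh = (np.linalg.norm(delta, axis=-1) < r).astype(float)  # includes self
    mean_theta = np.arctan2(neigh @ np.sin(theta), neigh @ np.cos(theta))
    theta = mean_theta + rng.uniform(-eta / 2, eta / 2, size=theta.shape)
    pos = (pos + v * np.column_stack([np.cos(theta), np.sin(theta)])) % L
    return pos, theta

def order_parameter(theta):
    """Magnitude of the mean heading vector: 1 = fully aligned, ~0 = disordered."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

n, L = 300, 10.0
pos = rng.uniform(0.0, L, size=(n, 2))
theta = rng.uniform(-np.pi, np.pi, size=n)
for _ in range(500):
    pos, theta = vicsek_step(pos, theta, L=L)
print(order_parameter(theta))   # approaches 1 at low noise, 0 at high noise
```

Sweeping the noise amplitude eta and recording the order parameter reproduces the order-disorder transition that the thesis's noise studies build on.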
APA, Harvard, Vancouver, ISO, and other styles
