Dissertations / Theses on the topic 'Computational physics'
Consult the top 50 dissertations / theses for your research on the topic 'Computational physics.'
Knebe, Alexander. "Computational cosmology." Thesis, Universität Potsdam, 2008. http://opus.kobv.de/ubp/volltexte/2010/4114/.
Cosmology is today one of the most exciting fields in astronomy and astrophysics. The prevailing (Big Bang) model, combined with the latest and most precise observational data, indicates that we live in a universe made up of roughly 24% dark matter and 72% dark energy; visible matter accounts for a mere 4%. Even though we currently lack unambiguous or direct evidence for the existence of these two exotic constituents of the universe, we are nevertheless able to model the formation of galaxies, galaxy clusters, and large-scale structure in such a universe. To do so, scientists use computer simulations that reproduce structure formation in an expanding universe on supercomputers; this field is called numerical cosmology, or "computational cosmology," and is the subject of this habilitation thesis. After a brief introduction to the field, the techniques for carrying out such numerical simulations are presented. The techniques for solving the relevant (differential) equations that model the "universe in the computer" differ from one another, in part drastically (particle vs. grid methods), and the methodological differences are worked out. Although different codes rest on different methods, the differences in the final results are (fortunately) negligibly small. We also present a completely new code, based on the grid approach, which constitutes a main component of this habilitation. In the remainder of the work, various cosmological simulations are presented and analyzed. They address the formation and evolution of satellite galaxies, the (small) companions of galaxies such as our Milky Way and the Andromeda galaxy, as well as alternatives to the "standard model" of cosmology introduced above. It turns out that none of the alternatives proposed here poses a serious challenge to the standard model. Nevertheless, the calculations show that even changes as extreme as Modified Newtonian Dynamics (MOND) can lead to a universe that comes very close to the observed one. The results on the dynamics of satellite galaxies show that studying the debris fields of satellites shredded by tidal forces allows conclusions to be drawn about the properties of the original satellite, a fact that will be of considerable use in unravelling the formation history of our own Milky Way. Still, the results presented here also indicate that this connection is not as clear-cut as previously predicted by controlled single-satellite simulations in analytic host potentials: the interplay between the satellites and the host galaxy, and the embedding of the calculations in a cosmological setting, are of decisive importance.
Zagordi, Osvaldo. "Statistical physics methods in computational biology." Doctoral thesis, SISSA, 2007. http://hdl.handle.net/20.500.11767/3971.
Vakili, Mohammadjavad. "Methods in Computational Cosmology." Thesis, New York University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10260795.
The state of the inhomogeneous universe and its geometry throughout cosmic history can be studied by measuring the clustering of galaxies and the gravitational lensing of distant faint galaxies. Lensing and clustering measurements from large datasets provided by modern galaxy surveys will forever shape our understanding of how the universe expands and how structures grow. Interpretation of these rich datasets requires careful characterization of uncertainties at different stages of data analysis: estimation of the signal, estimation of the signal uncertainties, model predictions, and connecting the model to the signal through probabilistic means. In this thesis, we attempt to address some aspects of these challenges.
The first step in cosmological weak lensing analyses is accurate estimation of the distortion of the light profiles of galaxies by large scale structure. These small distortions, known as the cosmic shear signal, are dominated by extra distortions due to telescope optics and atmosphere (in the case of ground-based imaging). This effect is captured by a kernel known as the Point Spread Function (PSF) that needs to be fully estimated and corrected for. We address two challenges ahead of accurate PSF modeling for weak lensing studies. The first challenge is finding the centers of point sources that are used for empirical estimation of the PSF. We show that the approximate methods for centroiding stars in wide surveys are able to optimally saturate the information content that is retrievable from astronomical images in the presence of noise.
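For orientation, centroiding a star in a noisy cutout often reduces to a windowed center-of-light estimate; a minimal NumPy sketch, illustrative only and not the specific estimators analyzed in the thesis:

```python
import numpy as np

def center_of_light(cutout):
    """Estimate a star's centroid as the intensity-weighted mean pixel position."""
    img = np.clip(cutout - np.median(cutout), 0, None)  # crude background subtraction
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Toy usage: a noisy Gaussian star centered near (12.3, 9.7) in a 21x25 cutout
yy, xx = np.indices((21, 25))
star = np.exp(-0.5 * (((xx - 12.3) / 1.5) ** 2 + ((yy - 9.7) / 1.5) ** 2))
noisy = star + 0.01 * np.random.default_rng(0).standard_normal(star.shape)
print(center_of_light(noisy))  # approximately (12.3, 9.7)
```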
The first step in weak lensing studies is estimating the shear signal by accurately measuring the shapes of galaxies. Galaxy shape measurement involves modeling the light profile of galaxies convolved with the light profile of the PSF. Detectors of many space-based telescopes such as the Hubble Space Telescope (HST) sample the PSF with low resolution. Reliable weak lensing analysis of galaxies observed by the HST camera requires knowledge of the PSF at a resolution higher than the pixel resolution of HST. This PSF is called the super-resolution PSF. In particular, we present a forward model of the point sources imaged through filters of the HST WFC3 IR channel. We show that this forward model can accurately estimate the super-resolution PSF. We also introduce a noise model that permits us to robustly analyze the HST WFC3 IR observations of the crowded fields.
Then we turn to one of the theoretical uncertainties in the modeling of galaxy clustering on small scales. Study of small scale clustering requires assuming a halo model. Clustering of halos has been shown to depend on halo properties beyond mass, such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies with halo occupation distribution (HOD) assume that halo mass alone is sufficient to characterize the connection between galaxies and halos. However, assembly bias could introduce systematic effects into the modeling of galaxy clustering if the expected number of galaxies in halos is correlated with other halo properties. Using high resolution N-body simulations and the clustering measurements of the Sloan Digital Sky Survey (SDSS) DR7 main galaxy sample, we show that modeling of galaxy clustering can slightly improve if we allow the HOD model to depend on halo properties beyond mass.
One of the key ingredients in precise parameter inference using galaxy clustering is accurate estimation of the error covariance matrix of clustering measurements. This requires generation of many independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies over a wide range of physical scales. We present a fast and accurate method based on low-resolution N-body simulations and an empirical bias model for generating mock catalogs. We use fast particle mesh gravity solvers for generation of the dark matter density field and we use Markov Chain Monte Carlo (MCMC) to estimate the bias model that connects dark matter to galaxies. We show that this approach enables the fast generation of mock catalogs that recover clustering at percent-level accuracy down to quasi-nonlinear scales.
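A hypothetical, stripped-down Metropolis-Hastings loop for a single linear bias parameter b; the thesis uses a richer bias model and clustering statistics, so everything here (the toy likelihood included) is illustrative:

```python
import numpy as np

def log_posterior(b, delta_dm, n_gal_obs):
    """Toy model: expected galaxy counts are a linear bias times the dark-matter density."""
    if b <= 0:
        return -np.inf
    model = b * delta_dm
    return -0.5 * np.sum((n_gal_obs - model) ** 2)  # Gaussian likelihood, flat prior

def metropolis(delta_dm, n_gal_obs, n_steps=5000, step=0.05, b0=1.0, seed=1):
    """Sample the bias posterior with a random-walk Metropolis chain."""
    rng = np.random.default_rng(seed)
    b = b0
    logp = log_posterior(b, delta_dm, n_gal_obs)
    chain = []
    for _ in range(n_steps):
        b_new = b + step * rng.standard_normal()
        logp_new = log_posterior(b_new, delta_dm, n_gal_obs)
        if np.log(rng.uniform()) < logp_new - logp:  # accept/reject
            b, logp = b_new, logp_new
        chain.append(b)
    return np.array(chain)
```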
Cosmological datasets are interpreted by specifying likelihood functions that are often assumed to be multivariate Gaussian. Likelihood free approaches such as Approximate Bayesian Computation (ABC) can bypass this assumption by introducing a generative forward model of the data and a distance metric for quantifying the closeness of the data and the model. We present the first application of ABC in large scale structure for constraining the connections between galaxies and dark matter halos. We present an implementation of ABC equipped with Population Monte Carlo and a generative forward model of the data that incorporates sample variance and systematic uncertainties. (Abstract shortened by ProQuest.)
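Schematically, ABC replaces likelihood evaluation with simulation plus a distance threshold; a minimal rejection-ABC sketch (the thesis uses the more efficient Population Monte Carlo variant and a forward model of galaxy clustering, neither of which is reproduced here):

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, epsilon, n_draws=10000, seed=2):
    """Keep parameter draws whose simulated data lie within epsilon of the observation."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)        # draw from the prior
        x_sim = simulate(theta, rng)     # forward-model the data
        if distance(x_sim, observed) < epsilon:
            accepted.append(theta)
    return np.array(accepted)

# Toy usage: infer the mean of a Gaussian from its sample mean
posterior = abc_rejection(
    observed=0.8,
    simulate=lambda th, rng: rng.normal(th, 1.0, 100).mean(),
    prior_sample=lambda rng: rng.uniform(-5, 5),
    distance=lambda a, b: abs(a - b),
    epsilon=0.05,
)
```

In practice a summary statistic of the survey data stands in for the raw observation inside the distance metric.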
Wilson, John Max. "Computational Studies of Geophysical Systems." Thesis, University of California, Davis, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10979293.
Earthquakes and tsunamis represent two of the most devastating natural disasters faced by humankind. Earthquakes can occur in a matter of seconds, with little to no warning. The governing variables of earthquakes, namely the stress profiles of vast regions of the earth's crust, cannot be measured in a comprehensive manner. Similarly, tsunami parameters are often accurately determined only minutes before waves make landfall. We are therefore left only with statistical analyses of past events to produce hazard forecasts for these disasters. Unfortunately, the events that cause the most damage also occur infrequently, and most regions have scientific records of earthquakes going back only a century, with modern instrumentation being widely distributed only in the past few decades. The 2011 M=9 Tohoku earthquake and tsunami, which killed close to sixteen thousand people, is the perfect case study of a country heavily invested in earthquake and tsunami risk reduction, yet unprepared for a once-in-a-millennium event.
Physics-based simulations are some of the most promising tools for learning more about these systems. These tools can be used to study many thousands of years worth of synthetic seismicity. Additionally, scaling laws present in such complex geophysical systems can provide insights into dynamics otherwise hidden from view. This dissertation represents a collection of studies using these two tools. First, the Virtual Quake earthquake simulator is introduced, along with some of my contributions to its functionality and maintenance. A method based on Omori aftershock scaling is presented for verifying the spatial distribution of synthetic earthquakes produced by long-term simulators. The use of aftershock ground motion records to improve constraints on those same aftershock models is then explored. Finally, progress in constructing a tsunami early warning system based on the coupling of Virtual Quake and the Tsunami Squares wave simulator is presented. Taken together, these studies demonstrate the versatility and strength of complexity science and computational methods in the context of hazard analysis.
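For reference, the modified Omori law underlying such aftershock-rate comparisons gives the rate of aftershocks at time t after a mainshock as

```latex
n(t) = \frac{K}{(c + t)^{p}}, \qquad p \approx 1,
```

with K a productivity constant and c a small time offset; the dissertation's verification method builds on this standard scaling rather than redefining it.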
Venkataram, Prashanth Sanjeev. "Computational investigations of nanophotonic systems." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92676.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-106).
In this thesis, I developed code in the MEEP finite-difference time domain classical electromagnetic solver to simulate the quantum phenomenon of spontaneous emission and its enhancement by a photonic crystal. The results of these simulations were favorably cross-checked with semi-analytical predictions and experimental results. This code was further extended to simulate spontaneous emission from the top half of a sphere, where the top half is a dielectric material and the bottom half is a metal, in order to determine how effective the metal is at reflecting the emission toward the top. Separately, I used the SCUFF-EM boundary element method classical electromagnetic solver to simulate absorption and scattering, together called extinction, of infrared light from nanoparticles, and used those results to optimize the nanoparticle shapes and sizes for extinction at the desired infrared wavelength.
by Prashanth Sanjeev Venkataram.
S.B.
Thompson, Travis W. "Tuning the Photochemical Reactivity of Electrocyclic Reactions: A Non-adiabatic Molecular Dynamics Study." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10839950.
We use non-adiabatic ab initio molecular dynamics to study the influence of substituent side groups on the photoactive unit (Z)-hexa-1,3,5-triene (HT). The Time-Dependent Density Functional Theory Surface Hopping method (TDDFT-SH) is used to investigate the influence of substituted isopropyl and methyl groups on the excited state dynamics. The 1,4- and 2,5-substituted molecules are simulated: 2,5-dimethylhexa-1,3,5-triene (DMHT), 2-isopropyl-5-methyl-1,3,5-hexatriene (2,5-IMHT), 3,7-dimethylocta-1,3,5-triene (1,4-IMHT), and 2,5-diisopropyl-1,3,5-hexatriene (DIHT). We find that HT and 1,4-IMHT have the lowest ring-closing branching ratios of 5.3% and 1.0%, respectively. For the 2,5-substituted derivatives, the branching ratio increases with increasing size of the substituents, exhibiting yields of 9.78%, 19%, and 24% for DMHT, 2,5-IMHT, and DIHT, respectively. The reaction channels are shown to prefer certain conformational configurations at excitation, where the ring-closing reaction tends to originate almost exclusively from the gauche-Z-gauche (gZg) rotamer. In addition, there is a conformational dependence on absorption: gZg conformers have on average lower S1 ← S0 excitation energies than the other rotamers. Furthermore, we develop a method to calculate a predicted quantum yield that is in agreement with the wavelength dependence observed in experiment for DMHT. In addition, the quantum yield method also predicts DIHT to have the highest CHD yield of 0.176 at 254 nm and 0.390 at 290 nm.
Additionally, we study the vitamin D derivative Tachysterol (Tachy), which exhibits photochemical properties similar to those of HT and its derivatives. We find that the reaction channels of Tachy also have a conformational dependence, where the reactive products toxisterol-D1 (2.3%), previtamin D (1.4%), and cyclobutene toxisterol (0.7%) prefer cEc, cEt, and tEc configurations at excitation, leaving the tEt rotamer completely non-reactive. The rotamers similarly show a dependence on absorption, where the cEc configuration has the lowest-energy S1 ← S0 excitation of the rotamers. The wavelength dependence of the rotamers should lead to selective properties of these molecules at excitation. An excitation to the red-shifted side of the maximum absorption peak will on average lead to excitations of the gZg rotamers more exclusively.
Darmawan, Andrew. "Quantum computational phases of matter." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/11640.
Allehabi, Saleh. "Computational Spectroscopy of C-Like Mg VII." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2018. http://digitalcommons.auctr.edu/cauetds/153.
Flint, Christopher Robert. "Computational Methods of Lattice Boltzmann Mhd." W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1530192360.
Shi, Hao. "Computational Studies of Strongly Correlated Quantum Matter." W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1499450059.
Ross, Brian Christopher. "Computational tools for modeling and measuring chromosome structure." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/79262.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 99-112).
DNA conformation within cells has many important biological implications, but there are challenges both in modeling DNA due to the need for specialized techniques, and experimentally since tracing out in vivo conformations is currently impossible. This thesis contributes two computational projects to these efforts. The first project is a set of online and offline calculators of conformational statistics using a variety of published and unpublished methods, addressing the current lack of DNA model-building tools intended for general use. The second project is a reconstructive analysis that could enable in vivo mapping of DNA conformation at high resolution with current experimental technology.
by Brian Christopher Ross.
Ph.D.
Ranner, Thomas. "Computational surface partial differential equations." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/57647/.
Djambazov, Georgi Stefanov. "Numerical techniques for computational aeroacoustics." Thesis, University of Greenwich, 1998. http://gala.gre.ac.uk/6149/.
Matsuda, Takehisa. "Computational proposal for locating local defects in superconducting tapes." California State University, Long Beach, 2013.
Sponseller, Daniel Ray. "Molecular Dynamics Study of Polymers and Atomic Clusters." Thesis, George Mason University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10685723.
This dissertation contains investigations based on Molecular Dynamics (MD) of a variety of systems, from small atomic clusters to polymers in solution and in their condensed phases. The overall research is divided into three parts. First, I tested a thermostat recently proposed in the literature on the thermal equilibration of a small cluster of Lennard-Jones (LJ) atoms. The proposed thermostat is a Hamiltonian thermostat based on a logarithmic oscillator with the outstanding property that the mean value of its kinetic energy is constant, independent of the mass and energy. I inspected several weak-coupling interaction models between the LJ cluster and the logarithmic oscillator in 3D. In all cases I show that this coupling gives rise to a kinetic motion of the cluster center of mass without transferring kinetic energy to the interatomic vibrations. This is a failure of the published thermostat, because the temperature of the cluster is mainly due to vibrations in small atomic clusters. This logarithmic oscillator cannot be used to thermostat any atomic or molecular system, small or large.
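The "outstanding property" mentioned above is the standard virial-theorem argument for a one-dimensional logarithmic oscillator, sketched here for context (the thesis couples such an oscillator to a 3D cluster):

```latex
H = \frac{p^{2}}{2m} + T \ln\frac{|x|}{b},
\qquad
2\left\langle \frac{p^{2}}{2m} \right\rangle
  = \left\langle x\,\frac{\partial V}{\partial x} \right\rangle = T
\;\Longrightarrow\;
\left\langle \frac{p^{2}}{2m} \right\rangle = \frac{T}{2},
```

independent of the mass and of the total energy, which is what makes the oscillator attractive as a thermostat on paper; the simulations above show that weak coupling nonetheless fails to heat the cluster's internal vibrations.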
The second part of the dissertation is the investigation of the inherent structure of the polymer polyethylene glycol (PEG) solvated in three different solvents: water, water with 4% ethanol, and ethyl acetate. PEG with a molecular weight of 2000 Da (PEG2000) is a polymer with many applications from industrial manufacturing to medicine that in bulk is a paste. However, its structure in very dilute solutions deserved a thorough study, important for the onset of aggregation with other polymer chains. I introduced a modification to the GROMOS 54A7 force field parameters for modeling PEG2000 and ethyl acetate. Both force fields are new and have now been incorporated into the database of known residues in the molecular dynamics package Gromacs. This research required numerous high-performance computing MD simulations on the ARGO cluster at GMU for systems with about 100,000 solvent molecules. My findings show that PEG2000 in water acquires a ball-like structure without encapsulating solvent molecules. In addition, no hydrogen bonds were formed. In water with 4% ethanol, PEG2000 also acquires a ball-like structure, but the polymer ends fluctuate, folding outward and onward, although the general shape is still a compact ball-like structure.
In contrast, PEG2000 in ethyl acetate is quite elongated, as a very flexible spaghetti that forms kinks that unfold to give rise to folds and kinks in other positions along the polymer length. The behavior resembles an ideal polymer in a θ solvent. A Principal Component Analysis (PCA) of the minima composing the inherent structure evidences the presence of two distinct groups of ball-like structures of PEG2000 in water and water with 4% ethanol. These groups give a definite signature to the solvated structure of PEG2000 in these two solvents. In contrast, PCA reveals several groups of avoided states for PEG2000 in ethyl acetate that disqualify the possibility of being an ideal polymer in a θ solvent.
The third part of the dissertation is a work in progress, where I investigate the condensed phase of PEG2000 and study the interface between the condensed phase and the three different solvents under study. With a strategy of combining NPT MD simulations at different temperatures and pressures, the PEG2000 condensed phase reproduces the experimental density to within a 1% discrepancy at 300 K and 1 atm. This is a very encouraging result for this ongoing project.
Tsai, Carol Leanne. "Heuristic Algorithms for Agnostically Identifying the Globally Stable and Competitive Metastable Morphologies of Block Copolymer Melts." Thesis, University of California, Santa Barbara, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13423067.
Block copolymers are composed of chemically distinct polymer chains that can be covalently linked in a variety of sequences and architectures. They are ubiquitous as ingredients of consumer products and also have applications in advanced plastics, drug delivery, advanced membranes, and next generation nano-lithographic patterning. The wide spectrum of possible block copolymer applications is a consequence of block copolymer self-assembly into periodic, meso-scale morphologies as a function of varying block composition and architecture in both melt and solution states, and the broad spectrum of physical properties that such mesophases afford.
Materials exploration and discovery has traditionally been pursued through an iterative process between experimental and theoretical/computational collaborations. This process is often implemented in a trial-and-error fashion, and from the computational perspective of generating phase diagrams, usually requires some existing knowledge about the competitive phases for a given system. Self-Consistent Field Theory (SCFT) simulations have proven to be both qualitatively and quantitatively accurate in the determination, or forward mapping, of block copolymer phases of a given system. However, it is possible to miss candidates. This is because SCFT simulations are highly dependent on their initial configurations, and the ability to map phase diagrams requires a priori knowledge of what the competing candidate morphologies are. The unguided search for the stable phase of a block copolymer of a given composition and architecture is a problem of global optimization. SCFT by itself is a local optimization method, so we can combine it with population-based heuristic algorithms geared at global optimization to facilitate forward mapping. In this dissertation, we discuss the development of two such methods: Genetic Algorithm + SCFT (GA-SCFT) and Particle Swarm Optimization + SCFT (PSO-SCFT). Both methods allow a population of configurations to explore the space associated with the numerous states accessible to a block copolymer of a given composition and architecture.
GA-SCFT is a real-space method in which a population of SCFT field configurations “evolves” over time. This is achieved by initializing the population randomly, allowing the configurations to relax to local basins of attraction using SCFT simulations, then selecting fit members (lower free energy structures) to recombine their fields and undergo mutations to generate a new “generation” of structures that iterate through this process. We present results from benchmark testing of this GA-SCFT technique on the canonical AB diblock copolymer melt, for which the theoretical phase diagram has long been established. The GA-SCFT algorithm successfully predicts many of the conventional mesophases from random initial conditions in large, 3-dimensional simulation cells, including hexagonally-packed cylinders, BCC-packed spheres, and lamellae, over a broad composition range and weak to moderate segregation strength. However, the GA-SCFT method is currently not effective at discovery of network phases, such as the Double-Gyroid (GYR) structure.
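The GA-SCFT loop described above can be summarized in a schematic Python outline; `relax`, `free_energy`, `crossover`, and `mutate` are caller-supplied placeholders for the field-theoretic operations and are not the authors' implementation:

```python
import random

def ga_scft(initial_fields, relax, free_energy, crossover, mutate,
            n_generations=50, n_parents=10, mutation_rate=0.1):
    """Evolve a population of SCFT field configurations toward low free energy.

    `relax` runs an SCFT relaxation to the nearest basin of attraction,
    `free_energy` is the fitness, and `crossover`/`mutate` act on field
    configurations; all four callables are supplied by the caller.
    """
    population = [relax(w) for w in initial_fields]       # random start, local relaxation
    for _ in range(n_generations):
        population.sort(key=free_energy)                  # lower free energy = fitter
        parents = population[:n_parents]                  # keep the fittest members
        children = []
        while len(parents) + len(children) < len(population):
            child = crossover(*random.sample(parents, 2)) # recombine two parents' fields
            if random.random() < mutation_rate:
                child = mutate(child)
            children.append(relax(child))                 # let SCFT pull it into a basin
        population = parents + children
    return min(population, key=free_energy)
```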
PSO-SCFT is a reciprocal space approach in which Fourier components of SCFT fields near the principal shell are manipulated. Effectively, PSO-SCFT facilitates the search through a space of reciprocal-space SCFT seeds which yield a variety of morphologies. Using intensive free energy as a fitness metric by which to compare these morphologies, the PSO-SCFT methodology allows us to agnostically identify low-lying competitive and stable morphologies. We present results for applying PSO-SCFT to conformationally symmetric diblock copolymers and a miktoarm star polymer, AB4, which offers a rich variety of competing sphere structures. Unlike the GA-SCFT method we previously presented, PSO-SCFT successfully predicts the double gyroid morphology in the AB-diblock. Furthermore, PSO-SCFT successfully recovers the A15 morphology at a composition where it is expected to be stable in the miktoarm system, as well as several competitive metastable candidates, and a new sphere morphology belonging to the hexagonal space group 191, which has not been seen before in polymer systems. Thus, we believe the PSO-SCFT method provides a promising platform for screening for competitive structures in a given block copolymer system.
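For reference, the canonical particle-swarm update that such a PSO-SCFT search applies to each particle's vector x_i of retained Fourier coefficients is

```latex
v_i \leftarrow \omega\, v_i + c_1 r_1 \left( p_i - x_i \right) + c_2 r_2 \left( g - x_i \right),
\qquad
x_i \leftarrow x_i + v_i ,
```

where p_i is particle i's best seed so far, g the swarm's best, ω, c_1, c_2 are the usual PSO hyperparameters, r_1, r_2 are uniform random numbers, and fitness is the intensive free energy of the SCFT solution seeded by x_i; the specific parameter choices of the thesis are not reproduced here.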
Srisupattarawanit, Tarin. "Simulation of offshore wind turbines by computational multi-physics." kostenfrei, 2007. http://www.digibib.tu-bs.de/?docid=00020645.
Miller, David J. Ghosh Avijit. "New methods in computational systems biology /." Philadelphia, Pa. : Drexel University, 2008. http://hdl.handle.net/1860/2810.
Giddy, Andrew Peter. "Computational studies of structural phase transitions." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239582.
Han, Song. "Computational Methods for Multi-dimensional Neutron Diffusion Problems." Licentiate thesis, KTH, Physics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11298.
Steneteg, Peter, and Lars Erik Rosengren. "Design and Implementation of a General Molecular Dynamics Package." Thesis, Linköping University, The Department of Physics, Chemistry and Biology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6053.
There are many different codes available for performing molecular dynamics simulations. Most of these focus mainly on high performance. We have moved that focus towards modularity, flexibility and user friendliness. Our goal has been to design software that is easy to use, can handle many different kinds of simulations, and is easily extendable to meet new requirements.
In the report we present the theory needed to understand the principles of a molecular dynamics simulation. The four different potentials we have used in the software are presented. Further, we give a detailed description of the design and the different design choices we have made while constructing the software.
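For context, the core loop that any such MD package wraps in a modular design looks roughly like the following Lennard-Jones velocity-Verlet sketch (plain NumPy, illustrative only, not the authors' code):

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for a small cluster (no cutoff, no periodic box)."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            r2 = np.dot(r, r)
            sr6 = (sigma ** 2 / r2) ** 3
            fmag = 24 * eps * (2 * sr6 ** 2 - sr6) / r2   # |F| / r, repulsive when positive
            forces[i] += fmag * r
            forces[j] -= fmag * r
    return forces

def velocity_verlet(pos, vel, mass=1.0, dt=1e-3, n_steps=1000):
    """Integrate Newton's equations in place with the velocity-Verlet scheme."""
    f = lj_forces(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

# Toy usage: 8 atoms on a small cubic lattice, started at rest
grid = np.arange(2) * 1.2
pos = np.array([[x, y, z] for x in grid for y in grid for z in grid], dtype=float)
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, n_steps=200)
```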
We show some examples of how the software can be used and discuss some aspects of the performance of the implementation. Finally we give our thoughts on the future of the software.
Farr, Graham E. "Topics in computational complexity." Thesis, University of Oxford, 1986. http://ora.ox.ac.uk/objects/uuid:ad3ed1a4-fea4-4b46-8e7a-a0c6a3451325.
Alvarado, Walter. "Investigating Butyrylcholinesterase Inhibition via Molecular Mechanics." Thesis, California State University, Long Beach, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10639439.
We show that a combination of different theoretical methods is a viable approach to calculate and explain the relative binding affinities of inhibitors of the human butyrylcholinesterase enzyme. We probe structural properties of the enzyme-inhibitor complex in the presence of dialkyl phenyl phosphates and derivatives that include changes to the aromatic group and alkane-to-cholinyl substitutions that help these inhibitors mimic physiological substrates. Monte Carlo docking allowed for the identification of three regions within the active site of the enzyme where substituents of the phosphate group could be structurally stabilized. Computational clustering was used to identify distinct binding modes and their relative stabilities. Molecular dynamics suggests an essential asparagine residue, not previously characterized as strongly influencing inhibitor strength, which may serve as a crucial component in catalytic and inhibitory activity. This study provides a framework for suggesting future inhibitors that we expect will be effective at sub-micromolar concentrations.
Stirewalt, Heather R. "Computation as a Model Building Tool in a High School Physics Classroom." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10785706.
The Next Generation Science Standards (NGSS) have established computational thinking as one of the science and engineering practices that should be developed in high school classrooms. Much of the work done by scientists is accomplished through the use of computation, but many students leave high school with little to no exposure to coding of any kind. This study outlines an attempt to integrate computational physics lessons into a high school algebra-based physics course which utilizes Modeling Instruction. Specifically, it aims to determine whether students who complete computational physics assignments demonstrate any difference in understanding force concepts, as measured by the Force Concept Inventory (FCI), versus students who do not. Additionally, it investigates students' attitudes about learning computation alongside physics. Students were introduced to VPython programs during the course of a semester. The FCI was administered pre- and post-instruction, and the gains were measured against a control group. The Computational Modeling in Physics Attitudinal Student Survey (COMPASS) was administered post-instruction and the responses were analyzed. While the FCI gains were slightly larger on average than those of the control group, the difference was not statistically significant. This at least suggests that incorporating computational physics assignments does not adversely affect students' conceptual learning.
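A minimal example of the kind of program at stake (plain Python here; the classroom assignments used VPython so students could visualize the motion in 3D, and the values below are purely illustrative):

```python
# Euler-Cromer update for a cart under a constant net force -- the computational
# analogue of the constant-acceleration particle model students build on paper.
m, F_net = 0.5, 1.2          # kg, N (illustrative values)
x, v, t, dt = 0.0, 0.0, 0.0, 0.01

while t < 2.0:
    a = F_net / m            # Newton's second law
    v = v + a * dt           # update velocity first ...
    x = x + v * dt           # ... then position (Euler-Cromer)
    t = t + dt

print(f"x(2 s) = {x:.2f} m, v(2 s) = {v:.2f} m/s")  # compare with x = F t^2 / (2 m)
```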
Iacchetta, Alexander S. "Spatio-Spectral Interferometric Imaging and the Wide-Field Imaging Interferometry Testbed." Thesis, University of Rochester, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10936092.
The light collecting apertures of space telescopes are currently limited in part by the size and weight restrictions of launch vehicles, ultimately limiting the spatial resolution that can be achieved by the observatory. A technique that can overcome these limitations and provide superior spatial resolution is interferometric imaging, whereby multiple small telescopes can be combined to produce a spatial resolution comparable to a much larger monolithic telescope. In astronomy, the spectrum of the sources in the scene is crucial to understanding the material composition of the sources. So, the ultimate goal is to have high-spatial-resolution imagery and obtain sufficient spectral resolution for all points in the scene. This goal can be accomplished through spatio-spectral interferometric imaging, which combines the aperture synthesis aspects of a Michelson stellar interferometer with the spectral capabilities of Fourier transform spectroscopy.
Spatio-spectral interferometric imaging can be extended to a wide-field imaging modality, which increases the collecting efficiency of the technique. This is the basis for NASA’s Wide-field Imaging Interferometry Testbed (WIIT). For such an interferometer, there are two light collecting apertures separated by a variable distance known as the baseline length. The optical path in one of the arms of the interferometer is variable, while the other path delay is fixed. The beams from both apertures are subsequently combined and imaged onto a detector. For a fixed baseline length, the result is many low-spatial-resolution images at a slew of optical path differences, and the process is repeated for many different baseline lengths and orientations. Image processing and synthesis techniques are required to reduce the large dataset into a single high-spatial-resolution hyperspectral image.
Our contributions to spatio-spectral interferometry include various aspects of theory, simulation, image synthesis, and processing of experimental data, with the end goal of better understanding the nature of the technique. We present the theory behind the measurement model for spatio-spectral interferometry, as well as the direct approach to image synthesis. We have developed a pipeline to preprocess experimental data to remove unwanted signatures in the data and register all image measurements to a single orientation, which leverages information about the optical system’s point spread function. In an experimental setup, such as WIIT, the reference frame for the path difference measured for each baseline is unknown and must be accounted for. To overcome this obstacle, we created a phase referencing technique that leverages point sources within the scene of known separation in order to recover unknown information regarding the measurements in a laboratory setting. We also provide a method that allows for the measurement of spatially and spectrally complicated scenes with WIIT by decomposing them prior to scene projection.
Sun, Baoqing. "Three dimensional computational imaging with single-pixel detectors." Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6127/.
Varner, Samuel John. "Experimental and computational techniques in carbon-13 NMR." W&M ScholarWorks, 1999. https://scholarworks.wm.edu/etd/1539623952.
Lefebvre, Antoine. "Computational acoustic methods for the design of woodwind instruments." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97000.
This thesis presents methods for the design of woodwind musical instruments using scientific computation. The transfer-matrix method for calculating the input impedance is described. A method based on finite-element calculations is applied to determine the transfer-matrix parameters of the toneholes of woodwind instruments, from which new equations are developed to extend the validity of the equations in the literature. Finite-element simulations of the effect of a key suspended above a tonehole give results that differ from theory for short holes. The method is also applied to holes in a conical body, and we conclude that the transfer-matrix parameters developed for cylindrical pipes are also valid for conical pipes. A boundary condition approximating viscothermal losses in finite-element calculations is developed, which makes it possible to simulate complete instruments. Comparison of simulation results for instruments with several open or closed holes shows that the transfer-matrix method exhibits errors, probably attributable to internal and external interactions between the holes. This effect is not taken into account in the transfer-matrix method and limits its accuracy. The maximum error is of the order of 10 cents. The effect of the curvature of the instrument body is studied with the finite-element method. The radiation impedance of an instrument's bell is calculated with the transfer-matrix method and compared with finite-element results; we conclude that the transfer-matrix method is not appropriate for simulating bells. Finally, an optimization method is presented for calculating the position and dimensions of the toneholes under several constraints, based on estimating the playing frequencies with the transfer-matrix method. Several simple instruments are designed, and prototypes are built and evaluated.
Andrews, Brian. "Computational Solutions for Medical Issues in Ophthalmology." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case15275972120621.
Edelmaier, Christopher. "Computational Modeling of Mitosis in Fission Yeast." Thesis, University of Colorado at Boulder, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10837613.
Mitosis ensures the proper segregation of chromosomes into daughter cells, which is accomplished by the mitotic spindle. During fission yeast mitosis, chromosomes establish bi-orientation as the bipolar spindle assembles, meaning that sister kinetochores become attached to microtubules whose growth was initiated by the two sister poles. This process includes mechanisms that correct erroneous attachments made by the kinetochores during the attachment process. This thesis presents a 3D physical model of spindle assembly in a Brownian dynamics-kinetic Monte Carlo simulation framework and a realistic description of the physics of microtubule, kinetochore, and chromosome dynamics, in order to interrogate the dynamics and mechanisms of chromosome bi-orientation and error correction. We have added chromosomes to our previous physical model of spindle assembly, which included microtubules, a spherical nuclear envelope, motor proteins, crosslinking proteins, and spindle pole bodies (centrosomes). In this work, we have explored the mechanical properties of kinetochores and their interactions with microtubules that achieve amphitelic spindle attachments at high frequency. A minimal physical model yields simulations that generate chromosome attachment errors, but resolves them, much as normal chromosomes do.
Garcia, Alberto J. "Parameter Dependence of Pair Correlations in Clean Superconducting-Magnetic Proximity Systems." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10841350.
Cooper pairs are known to tunnel through a barrier between superconductors in a Josephson junction. The spin states of the pairs can be a mixture of singlet and triplet states when the barrier is an inhomogeneous magnetic material. The purpose of this thesis is to better understand the behavior of pair correlations in the ballistic regime for different magnetic configurations and varying physical parameters. We use a tight-binding Hamiltonian to describe the system and consider singlet-pair conventional superconductors. Using the Bogoliubov-Valatin transformation, we derive the Bogoliubov-de Gennes equations and numerically solve the associated eigenvalue problem. Pair correlations in the magnetic Josephson junction are obtained from the Green's function formalism for a superconductor. This formalism is applied to Josephson junctions composed of discrete and continuous magnetic materials. The differences between representing pair correlations in the time and frequency domain are discussed, as well as the advantages of describing the Gor'kov functions on a log scale rather than the commonly used linear scale, and in a rotating basis as opposed to a static basis. Furthermore, the effects of parameters such as ferromagnetic width, magnetization strength, and band filling will be investigated. Lastly, we compare results in the clean limit with known results in the diffusive regime.
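Concretely, the numerical step described here reduces to diagonalizing a Bogoliubov-de Gennes matrix; a minimal sketch for a one-dimensional superconductor-ferromagnet-superconductor chain (illustrative parameters and simplifications, not those of the thesis):

```python
import numpy as np

def bdg_spectrum(n_sites=60, t=1.0, mu=0.5, delta=0.3, h=0.2, n_f=20):
    """Eigenvalues of a 1D tight-binding Bogoliubov-de Gennes Hamiltonian."""
    h0 = -t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1)) - mu * np.eye(n_sites)
    start = (n_sites - n_f) // 2
    exch = np.zeros(n_sites)
    exch[start:start + n_f] = h                        # exchange field only in the F layer
    pair = np.full(n_sites, delta)
    pair[start:start + n_f] = 0.0                      # pair potential vanishes in the magnet
    H = np.block([
        [h0 + np.diag(exch),  np.diag(pair)],          # spin-up electron block + pairing
        [np.diag(pair), -(h0 - np.diag(exch))],        # spin-down hole block
    ])
    return np.linalg.eigvalsh(H)                       # sorted quasiparticle energies

print(bdg_spectrum()[:5])  # lowest quasiparticle energies of the junction
```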
Arias, Tomas A. "New analytic and computational techniques for finite temperature condensed matter systems." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/13158.
Clarke, David A. "Scale Setting and Topological Observables in Pure SU(2) LGT." Thesis, The Florida State University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10935396.
In this dissertation, we investigate the approach of pure SU(2) lattice gauge theory to its continuum limit using the deconfinement temperature, six gradient scales, and six cooling scales. We find that cooling scales exhibit similarly good scaling behavior as gradient scales, while being computationally more efficient. In addition, we estimate systematic error in continuum limit extrapolations of scale ratios by comparing standard scaling to asymptotic scaling. Finally we study topological observables in pure SU(2) using cooling to smooth the gauge fields, and investigate the sensitivity of cooling scales to topological charge. We find that large numbers of cooling sweeps lead to metastable charge sectors, without destroying physical instantons, provided the lattice spacing is fine enough and the volume is large enough. Continuum limit estimates of the topological susceptibility are obtained, of which we favor χ^(1/4)/T_c = 0.643(12). Differences between cooling scales in different topological sectors turn out to be too small to be detectable within our statistical error.
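For reference, the continuum definitions behind this ratio are

```latex
Q = \frac{1}{32\pi^{2}} \int d^{4}x\;
    \epsilon_{\mu\nu\rho\sigma}\,
    \operatorname{tr}\!\left[ F_{\mu\nu}(x)\, F_{\rho\sigma}(x) \right],
\qquad
\chi = \frac{\langle Q^{2} \rangle}{V},
```

with V the spacetime volume, so that χ has dimensions of (energy)^4 and χ^(1/4)/T_c is dimensionless; on the lattice, Q is evaluated on the cooled configurations.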
Swoger, Maxx Ryan. "Computational Investigation of Material and Dynamic Properties of Microtubules." University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron1532108320185937.
Thomas, Adrian Hugh. "Experimental and computational studies of amorphous magnetic systems." Thesis, Keele University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334633.
Ruiz Gutierrez, Elfego. "Theoretical and computational modelling of wetting phenomena in smooth geometries." Thesis, Northumbria University, 2017. http://nrl.northumbria.ac.uk/34536/.
Fiorini, Francesca. "Experimental and computational dosimetry of laser-driven radiation beams." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3371/.
Bruma, Alina. "A combined experimental and computational study of AuPd nanoparticles." Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/4372/.
Igram, Dale J. "Computational Modeling and Characterization of Amorphous Materials." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1564347980986716.
Smith, Katherine Margaret. "Effects of Submesoscale Turbulence on Reactive Tracers in the Upper Ocean." Thesis, University of Colorado at Boulder, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10623667.
In this dissertation, Large Eddy Simulations (LES) are used to model the coupled turbulence-reactive tracer dynamics within the upper mixed layer of the ocean. Prior work has shown that LES works well over the spatial and time scales relevant to both turbulence and reactive biogeochemistry. Additionally, the code intended for use is able to carry an arbitrary number of tracer equations, allowing for easy expansion of the species reactions. Research in this dissertation includes a study of 15 idealized non-reactive tracers within an evolving large-scale temperature front in order to determine and understand the fundamental dynamics underlying turbulence-tracer interaction in the absence of reactions. The focus of this study, in particular, was on understanding the evolution of biogeochemically-relevant, non-reactive tracers in the presence of both large (~5 km) submesoscale eddies and small-scale (~100 m) wave-driven Langmuir turbulence. The 15 tracers studied have different initial, boundary, and source conditions and significant differences are seen in their distributions depending on these conditions. Differences are also seen between regions where submesoscale eddies and small-scale Langmuir turbulence are both present, and in regions with only Langmuir turbulence. A second study focuses on the examination of Langmuir turbulence effects on upper ocean carbonate chemistry. Langmuir mixing time scales are similar to those of chemical reactions, resulting in potentially strong tracer-flow coupling effects. The strength of the Langmuir turbulence is varied, from no wave-driven turbulence (i.e., only shear-driven turbulence), to Langmuir turbulence that is much stronger than that found in typical upper ocean conditions. Three different carbonate chemistry models are also used in this study: time-dependent chemistry, equilibrium chemistry, and no-chemistry (i.e., non-reactive tracers). The third and final study described in this dissertation details the development of a reduced-order biogeochemical model with 17 state equations that can accurately reproduce the Bermuda Atlantic Time-series Study (BATS) ecosystem behavior, but that can also be integrated within high-resolution LES.
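Schematically, each reactive tracer concentration c_i carried by such a wave-averaged LES obeys a transport equation of the form

```latex
\frac{\partial c_i}{\partial t}
  + \left( \mathbf{u} + \mathbf{u}_S \right) \cdot \nabla c_i
  = \nabla \cdot \left( \kappa\, \nabla c_i \right)
  + R_i\!\left( c_1, \dots, c_N \right),
```

where u is the resolved velocity, u_S the Stokes drift that drives Langmuir turbulence, κ an eddy diffusivity, and R_i the reaction terms (zero for the non-reactive tracers of the first study); this is the generic form, not the specific closure used in the dissertation.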
Chu, Feng. "Validation of a Lagrangian model for laser-induced fluorescence." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6077.
Huang, Xun. "Adaptive mesh refinement for computational aeroacoustics." Thesis, University of Southampton, 2006. https://eprints.soton.ac.uk/47087/.
Nathaniel, James Edward II. "A computational study of electronic structures of graphene allotropes with electrical bias." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2011. http://digitalcommons.auctr.edu/dissertations/582.
Biafore, Michael. "A computational role for the one-dimensional Lieb-Schultz-Mattis spin model." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12227.
Povall, Timothy Mark. "Dense granular flow in rotating drums: a computational investigation of constitutive equations." Doctoral thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29694.
Heinrich, Lukas. "Searches for Supersymmetry, RECAST, and Contributions to Computational High Energy Physics." Thesis, New York University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13421570.
The search for phenomena Beyond the Standard Model (BSM) is the primary motivation for the experiments at the Large Hadron Collider (LHC). This dissertation assesses the experimental status of supersymmetric theories based on analyses of data collected by the ATLAS experiment during the first and second run of the LHC. Both R-parity preserving theories defined within the framework of the Minimal Supersymmetric Standard Model (MSSM) as well as R-parity violating models are studied. Further, a framework for systematic reinterpretation, RECAST, is presented which enables a streamlined, community-wide approach to the search for BSM physics through the preservation of data analyses as parametrized computational workflows. A language and execution engine for such workflows of heterogeneous workloads on distributed computing systems is presented. Additionally, a new implementation of the HistFactory class of binned likelihoods based on auto-differentiable computational graphs is developed for accelerated and distributed inference computation. Finally, to enable efficient reinterpretation, a method of estimating excursion sets of one or more resource-intensive, multivariate, black-box functions, such as p-value functions, through an information-based Bayesian Optimization procedure is introduced.
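For background, the simplest HistFactory-style likelihood is a product of Poisson terms over bins with a signal-strength parameter μ; a minimal NumPy sketch of its negative logarithm (gradients of such an expression are what an auto-differentiable implementation supplies; the thesis' implementation itself is not reproduced here, and the numbers below are illustrative):

```python
import numpy as np

def nll(mu, observed, signal, background):
    """Negative log-likelihood for bin counts n_i ~ Poisson(mu * s_i + b_i)."""
    expected = mu * signal + background
    # The mu-independent log(n!) term is dropped; it does not affect inference on mu.
    return np.sum(expected - observed * np.log(expected))

observed = np.array([12.0, 11.0, 7.0])
signal = np.array([3.0, 2.0, 1.0])
background = np.array([9.0, 8.0, 7.0])

mus = np.linspace(0, 3, 301)
best = mus[np.argmin([nll(m, observed, signal, background) for m in mus])]
print(f"best-fit signal strength: {best:.2f}")
```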
Nguyen, Tung Le. "Computational Modeling of Slow Neurofilament Transport along Axons." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1547036394834075.
Harris, Chelsea E. "One Shell, Two Shell, Red Shell, Blue Shell: Numerical Modeling to Characterize the Circumstellar Environments of Type I Supernovae." Thesis, University of California, Berkeley, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10837128.
Though fundamental to our understanding of stellar, galactic, and cosmic evolution, the stellar explosions known as supernovae (SNe) remain mysterious. We know that mass loss and mass transfer are central processes in the evolution of a star to the supernova event, particularly for thermonuclear Type Ia supernovae (SNe Ia), which are in a close binary system. The circumstellar environment (CSE) contains a record of the mass lost from the progenitor system in the centuries prior to explosion and is therefore a key diagnostic of SN progenitors. Unfortunately, tools for studying the CSE are specialized to stellar winds rather than the more complicated and violent mass-loss processes hypothesized for SN Ia progenitors.
This thesis presents models for constraining the properties of a CSE detached from the stellar surface. In such cases, the circumstellar material (CSM) may not be observed until interaction occurs and dominates the SN light weeks or even months after maximum light. I suggest we call SNe with delayed interaction SNe X;n (i.e. SNe Ia;n, SNe Ib;n). I performed numerical hydrodynamic simulations and radiation transport calculations to study the evolution of shocks in these systems. I distilled these results into simple equations that translate radio luminosity into a physical description of the CSE. I applied my straightforward procedure to derive upper limits on the CSM for three SNe Ia: SN 2011fe, SN 2014J, and SN 2015cp. I modeled interaction to late times for the SN Ia;n PTF11kx; this led to my participation in the program that discovered interaction in SN 2015cp. Finally, I expanded my simulations to study the Type Ib;n SN 2014C, the first optically-confirmed SN X;n with a radio detection. My SN 2014C models represent the first time an SN X;n has been simultaneously modeled at x-ray and radio wavelengths.
Dadachanji, Zareer. "Computational many-body theory of electron correlation in solids." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387593.
Ahmed, Israr. "Mathematical and computational modelling of soft and active matter." Thesis, University of Central Lancashire, 2016. http://clok.uclan.ac.uk/18641/.