
Dissertations / Theses on the topic 'Computer calculations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Computer calculations.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Saira, Mian I. "Computer graphics and enzyme-substrate calculations." Thesis, University of Oxford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233540.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Plankis, Tomas. "Computer calculations for some sequences and polynomials." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2009~D_20091008_155608-75536.

Full text
Abstract:
In this thesis we consider divisibility properties of some recurrent sequences, Newman polynomials, and computer calculations in these and related questions of number theory.
APA, Harvard, Vancouver, ISO, and other styles
3

Riddell, A. G. "Computer algorithms for Euclidean lattice gauge theory calculations." Thesis, University of Canterbury. Physics, 1988. http://hdl.handle.net/10092/8220.

Full text
Abstract:
The computer algorithm devised by K. Decker [25] for the calculation of strong coupling expansions in Euclidean lattice gauge theory is reviewed. Various shortcomings of this algorithm are pointed out and an improved algorithm is developed. The new algorithm does away entirely with the need to store large amounts of information, and is designed in such a way that memory usage is essentially independent of the order to which the expansion is being calculated. A good deal of the redundancy and double handling present in the algorithm of ref. [25] is also eliminated. The algorithm has been used to generate a 14th order expansion for the energy of a glueball with non-zero momentum in Z₂ lattice gauge theory in 2+1 dimensions. The resulting expression is analysed in order to study the restoration of Lorentz invariance as the theory approaches the continuum. A description is presented of the alterations required to extend the algorithm to calculations in 3+1 dimensions. An eighth order expansion of the Z₂ mass gap in 3+1 dimensions has been calculated. The eighth order term differs from a previously published result.
APA, Harvard, Vancouver, ISO, and other styles
4

Wallin, Erik. "Alumina Thin Films : From Computer Calculations to Cutting Tools." Doctoral thesis, Linköpings universitet, Plasma och beläggningsfysik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15360.

Full text
Abstract:
The work presented in this thesis deals with experimental and theoretical studies related to alumina thin films. Alumina, Al2O3, is a polymorphic material utilized in a variety of applications, e.g., in the form of thin films. However, controlling thin film growth of this material, in particular at low substrate temperatures, is not straightforward. The aim of this work is to increase the understanding of the basic mechanisms governing alumina growth and to investigate novel ways of synthesizing alumina coatings. The thesis can be divided into two main parts, where the first part deals with fundamental studies of mechanisms affecting alumina growth and the second part with more application-oriented studies of high power impulse magnetron sputter (HiPIMS) deposition of the material. In the first part, it was shown that the thermodynamically stable α phase, which normally is synthesized at substrate temperatures of around 1000 °C, can be grown using reactive sputtering at a substrate temperature of merely 500 °C by controlling the nucleation surface. This was done by predepositing a Cr2O3 nucleation layer. Moreover, it was found that an additional requirement for the formation of the α phase is that the depositions are carried out at low enough total pressure and high enough oxygen partial pressure. Based on these observations, it was concluded that energetic bombardment, plausibly originating from energetic oxygen, is necessary for the formation of α-alumina (in addition to the effect of the chromia nucleation layer). Moreover, the effects of residual water on the growth of crystalline films were investigated by varying the partial pressure of water in the ultra high vacuum (UHV) chamber. Films deposited onto chromia nucleation layers exhibited a columnar structure and consisted of crystalline α-alumina if deposited under UHV conditions. However, as water to a partial pressure of 1*10-5 Torr was introduced, the columnar α-alumina growth was disrupted. Instead, a microstructure consisting of small, equiaxed grains was formed, and the γ-alumina content was found to increase with increasing film thickness. To gain a better understanding of the atomistic processes occurring on the surface, density functional theory based computational studies of adsorption and diffusion of Al, O, AlO, and O2 on different α-alumina (0001) surfaces were also performed. The results give possible reasons for the difficulties in growing the α phase at low temperatures through the identification of several metastable adsorption sites and also show how adsorbed hydrogen might inhibit further growth of α-alumina crystallites. In addition, it was shown that the Al surface diffusion activation energies are unexpectedly low, suggesting that limited surface diffusivity is not the main obstacle for low-temperature α-alumina growth. Instead, it is suggested to be more important to find ways of reducing the amount of impurities, especially hydrogen, in the process and to facilitate α-alumina nucleation when designing new processes for low-temperature deposition of α-alumina. In the second part of the thesis, reactive HiPIMS deposition of alumina was studied. In HiPIMS, a high-density plasma is created by applying very high power to the sputtering magnetron at a low duty cycle. 
It was found, both from experiments and modeling, that the use of HiPIMS drastically influences the characteristics of the reactive sputtering process, causing reduced target poisoning and thereby reduced or eliminated hysteresis effects and relatively high deposition rates of stoichiometric alumina films. This is not only of importance for alumina growth, but for reactive sputter deposition in general, where hysteresis effects and loss of deposition rate pose a substantial problem. Moreover, it was found that the energetic and ionized deposition flux in the HiPIMS discharge can be used to lower the deposition temperature of α-alumina. Coatings predominantly consisting of the α phase were grown at temperatures as low as 650 °C directly onto cemented carbide substrates without the use of nucleation layers. Such coatings were also deposited onto cutting inserts and were tested in a steel turning application. The coatings were found to increase the crater wear resistance compared to a benchmark TiAlN coating, and the process consequently shows great potential for further development towards industrial applications.
APA, Harvard, Vancouver, ISO, and other styles
5

Woodward, Luke. "Zeta functions of groups : computer calculations and functional equations." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Palmer, Jonathan. "Computer assisted loop calculations in the dualized standard model." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425890.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Joseph E. "An interactive computer tool for imprecise calculations in engineering systems." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/35018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hayward, Peter. "Parallel likelihood calculations for phylogenetic trees." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/17919.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2011.
Phylogenetic analysis is the study of evolutionary relationships among organisms. To this end, phylogenetic trees, or evolutionary trees, are used to depict the evolutionary relationships between organisms as reconstructed from DNA sequence data. The likelihood of a given tree is commonly calculated for many purposes, including inferring phylogenies, sampling from the space of likely trees and inferring other parameters governing the evolutionary process. This is done using Felsenstein's algorithm, a widely implemented dynamic programming approach that reduces the computational complexity from exponential to linear in the number of taxa. However, with the advent of efficient modern sequencing techniques, the size of data sets is rapidly increasing beyond current computational capability. Parallel computing has been used successfully to address many similar problems and is currently receiving attention in the realm of phylogenetic analysis. Work has been done using data decomposition, where the likelihood calculation is parallelised over DNA sequence sites. We propose an alternative way of parallelising the likelihood calculation, which we call segmentation, where the tree is broken down into subtrees and the likelihood of each subtree is calculated concurrently over multiple processes. We introduce our proposed system, which aims to drastically increase the size of trees that can be practically used in phylogenetic analysis. We then evaluate the system on large phylogenies constructed from both real and synthetic data, and show that a larger decrease in run times is obtained when the system is used.
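For readers unfamiliar with the likelihood computation being parallelised above, the following is a minimal illustrative sketch of Felsenstein's pruning algorithm for a single DNA site under the Jukes-Cantor model. It is not the author's implementation; the tree, branch lengths and base frequencies are invented for the example. The segmentation idea described in the abstract amounts to evaluating the partial likelihoods of disjoint subtrees in separate processes before combining them at the root.

```python
import numpy as np

# Illustrative sketch of Felsenstein's pruning algorithm (single DNA site,
# Jukes-Cantor model). Not the thesis implementation; the tree and branch
# lengths below are made-up examples.

STATES = 4  # A, C, G, T

def jc_transition(t):
    """Jukes-Cantor transition probability matrix for branch length t."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.full((STATES, STATES), diff) + np.eye(STATES) * (same - diff)

def partial_likelihood(node):
    """Post-order computation of per-state partial likelihoods at a node.

    A leaf is (state_index, None); an internal node is
    (None, [(child, branch_length), ...]).
    """
    state, children = node
    if children is None:                      # leaf: observed state
        L = np.zeros(STATES)
        L[state] = 1.0
        return L
    L = np.ones(STATES)
    for child, t in children:                 # combine children independently
        P = jc_transition(t)
        L *= P @ partial_likelihood(child)
    return L

# Tiny example tree: ((A:0.1, C:0.2):0.05, G:0.3)
leaf_a, leaf_c, leaf_g = (0, None), (1, None), (2, None)
internal = (None, [(leaf_a, 0.1), (leaf_c, 0.2)])
root = (None, [(internal, 0.05), (leaf_g, 0.3)])

site_likelihood = np.dot(np.full(STATES, 0.25), partial_likelihood(root))
print(site_likelihood)
```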
APA, Harvard, Vancouver, ISO, and other styles
9

Argyn, Aidar. "Material And Heat Balance Calculations Of Eti-bakir Plant By Computer." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12609734/index.pdf.

Full text
Abstract:
In this study, data taken from the Outokumpu-type flash smelter of the Eti-Bakir Plant (Samsun, Turkey) were used to write a computer program in Visual Basic with an interface to Excel. Flash smelting is the pyrometallurgical process for smelting metal sulfide concentrates used in the Eti-Bakir plant. In this plant, copper flash smelting consists of blowing fine, dried copper sulfide concentrate mixtures, silica flux and lignite with air into the furnace, with natural gas as the main fuel. The molten matte is the principal product of the furnace, and the slag contains 0.5-2% Cu; it is sent to a slag treatment (flotation) process for Cu recovery. The flash furnace off-gas contains 8-12 volume % SO2, which is fixed as H2SO4. The written program was used to optimize the consumption of oxygen-enriched air, fuel and lignite in this flash smelter by making a material and heat balance of the plant.
APA, Harvard, Vancouver, ISO, and other styles
10

Caccioppo, Donna L. "The Measurement of Environmental Factors during Function Point Calculations." NSUWorks, 2003. http://nsuworks.nova.edu/gscis_etd/437.

Full text
Abstract:
The forecast of pertinent project information, such as project costs, resource allocation, test case estimation and true software cost, is often needed early in the software development life cycle. Numerous sizing estimation models exist to assist with obtaining this type of data, and the current de facto standard is the Function Point Analysis (FPA) model. This paper proposes an enhancement to FPA, referred to as the Function Point Environmental Analysis (FPEA), to address a specific FPA weakness identified by many experts within the field of software metrics. The component of the FPA model being questioned is the current estimation procedure for general system characteristics (GSC). It has been reported that the use of general system characteristics (GSC) within the FPA model does not add prediction accuracy to the unadjusted function point (UFP) count. By integrating environmental factors with GSCs, the UFP count could yield a more precise estimation procedure. If integrated successfully, this improvement will better equip information technology (IT) professionals to evaluate, plan, manage and control software production. The validation method of this proposal consists of a comparison of output from both the FPA and FPEA models obtained early within the software development life cycle.
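For context, the adjustment step that the proposed FPEA targets is the standard FPA value adjustment factor, in which the fourteen GSCs (each rated 0-5) scale the unadjusted function point count. The sketch below shows only that standard step; the ratings are hypothetical, and the FPEA extension itself is not reproduced here.

```python
# Minimal sketch of the standard Function Point Analysis adjustment step that
# the proposed FPEA model targets. The 14 general system characteristics (GSC)
# are each rated 0-5; their sum scales the unadjusted function point (UFP)
# count. The ratings below are hypothetical.

def adjusted_function_points(ufp, gsc_ratings):
    """Apply the FPA value adjustment factor: AFP = UFP * (0.65 + 0.01 * sum(GSC))."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    vaf = 0.65 + 0.01 * sum(gsc_ratings)   # ranges from 0.65 to 1.35
    return ufp * vaf

example_gsc = [3, 2, 4, 1, 0, 3, 5, 2, 2, 1, 3, 4, 2, 1]  # hypothetical ratings
print(adjusted_function_points(ufp=420, gsc_ratings=example_gsc))
```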
APA, Harvard, Vancouver, ISO, and other styles
11

MUELLER, MARCIO R. "Cálculo independente das unidades monitoras e tempos de tratamento em radioterapia." reponame:Repositório Institucional do IPEN, 2005. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11257.

Full text
Master's dissertation — Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP).
APA, Harvard, Vancouver, ISO, and other styles
12

Fiedler, Ulrich. "Computer implementation and comparison of two methods for optical waveguide dispersion calculations." Thesis, Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/15805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Xiao, Chuye, and Xi Jiang. "Implementation of utility calculations based on utility difference proportions." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-12281.

Full text
Abstract:
This pilot study presents an implementation of utility calculation based on utility difference proportions. To conduct a utility calculation for a consequence, one needs to evaluate utility difference proportions and to calculate utility values. The application can help decision makers compare consequences pairwise and make decisions more easily based on the final output utility value for each consequence. On the other hand, the application is limited to a specific case with a specified preference ordering. It can be further developed into a more realistic, interactive decision-making support tool in the future.
APA, Harvard, Vancouver, ISO, and other styles
14

Möller, Joakim. "Aspects of the recursive projection method applied to flow calculations." Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101.

Full text
Abstract:

In this thesis, we have investigated the Recursive Projection Method, RPM, as an accelerator for computations of both steady and unsteady flows, and as a stabilizer in a bifurcation analysis.

The criterion of basis extraction is discussed. It can be interpreted as a tolerance for the accuracy of the eigenspace spanned by the identified basis; alternatively, it can be viewed as a criterion for when the approximate Krylov sequence becomes numerically rank deficient.

Steady-state calculations were performed on two different turbulent test cases: a 2D supersonic nozzle flow with the Spalart-Allmaras 1-equation model and a 2D subsonic airfoil simulation using the κ–ε model. RPM accelerated the test cases by a factor of between 2 and 5.

In multi-scale problems, it is often of interest to model the macro-scale behavior while still retaining the essential features of the full system. The "coarse time stepper" is a heuristic approach for circumventing the analytical derivation of models. The system studied here is a linear lattice of non-linear reaction sites coupled by diffusion. After reformulation of the time-evolution equation as a fixed-point scheme, RPM coupled with arc-length continuation is used to calculate the bifurcation diagrams of the effective (but analytically unavailable) equation.

Within the framework of dual time-stepping, a common approach in unsteady CFD simulation, RPM is used to accelerate the convergence. Two test cases were investigated: the von Karman vortex street behind a cylinder at Re = 100, and the periodic shock oscillation of a symmetric airfoil at M∞ = 0.76 with a Reynolds number Re = 11 × 10⁶.

It was believed that once a basis had been identified, it could be retained for several steps. The simulations usually showed that the basis could only be retained for one step.

The need for updating the basis motivates the use of Krylov methods. The most common method is the (Block-)Arnoldi algorithm. As the iteration proceeds, Krylov methods become increasingly expensive and restart is required. Two different restart algorithms were tested. The first is that of Lehoucq and Maschhoff, which uses a shifted QR iteration; the second is a block extension of the single-vector Arnoldi method due to Stewart. A flexible hybrid algorithm combining the best features of the two is derived.

APA, Harvard, Vancouver, ISO, and other styles
15

Pogén, Tobias. "Asynchronous Particle Calculations on Secondary GPU for Real Time Applications." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18239.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

af, Sandeberg Jonas. "Speeding Up Value at Risk Calculations Using Accelerators." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177384.

Full text
Abstract:
Calculating Value at Risk (VaR) can be a time-consuming task. Therefore it is of interest to find a way to parallelize this calculation to increase performance. Based on a system built in Java, which hardware is best suited for these calculations? This thesis aims to find which kind of processing unit gives optimal performance when calculating scenario-based VaR. First, the differences between the CPU, the GPU and a coprocessor are examined in a theoretical study. Then multiple versions of a parallel VaR algorithm are implemented for a CPU, a GPU and a coprocessor, making use of the findings from the theoretical study. The performance and ease of programming of each version are evaluated and analyzed. The performance tests show that the CPU offered the best performance for the chosen VaR algorithm and problem sizes.
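As background, a scenario-based VaR figure is simply a lower quantile of the portfolio profit-and-loss distribution over a set of market scenarios. The sketch below illustrates that calculation in Python (the thesis system itself is written in Java); the positions, prices and scenario returns are invented.

```python
import numpy as np

# Minimal sketch of scenario-based Value at Risk (VaR): apply a set of market
# scenarios to a portfolio, compute the profit-and-loss (P&L) distribution,
# and read off a lower quantile. The positions and scenario matrix below are
# invented for illustration; the thesis system itself is Java-based.

def scenario_var(positions, prices, scenario_returns, confidence=0.99):
    """VaR as the loss at the given confidence level over all scenarios."""
    base_value = positions @ prices
    scenario_values = (prices * (1.0 + scenario_returns)) @ positions
    pnl = scenario_values - base_value          # one P&L per scenario
    return -np.quantile(pnl, 1.0 - confidence)  # loss is negative P&L

rng = np.random.default_rng(0)
positions = np.array([100.0, 250.0, 50.0])             # units held per asset
prices = np.array([10.0, 4.0, 20.0])                   # current prices
scenario_returns = rng.normal(0.0, 0.02, (10_000, 3))  # simulated returns

print(f"99% VaR: {scenario_var(positions, prices, scenario_returns):.2f}")
```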
APA, Harvard, Vancouver, ISO, and other styles
17

Kurtcephe, Murat. "PEDIGREE QUERY, VISUALIZATION, AND GENETIC CALCULATIONS TOOL." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1339099818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Nyholm, Tufve. "Verification of dose calculations in radiotherapy." Doctoral thesis, Umeå : Umeå University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1931.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Lisowski, Michael F. "Pseudopotentials for electronic structure calculations of small CdSe colloidal quantum dots." Virtual Press, 2006. http://liblink.bsu.edu/uhtbin/catkey/1339456.

Full text
Abstract:
A method of generating and testing pseudopotentials is presented. This required the development of PPTester, a custom software program to analyze and quantify various parameters. These methods were first used to study bulk Si and verify the installation and performance of SIESTA. Plots for the band gap and charge density, which agreed with published results, were generated. Next, pseudopotentials for Cd and Se were constructed and tested. Two separate Cd potentials were evaluated. Electronic structure calculations for two-, four- and six-atom small cadmium selenide (CdSe) colloidal quantum dots were performed. The changes in geometry between initial and relaxed atomic positions of these systems were evaluated. Output values of the electronic structure calculation, for example the Fermi energy, were analyzed.
Department of Physics and Astronomy
APA, Harvard, Vancouver, ISO, and other styles
20

Abdulaziz, Imtithal Mohammed. "Mathematical modelling and computer simulations of induced voltage calculations in AC electric traction." Thesis, Edinburgh Napier University, 2003. http://researchrepository.napier.ac.uk/Output/3845.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Stankus, Andrea. "Implementing intersection calculations of the ray tracing algorithm with systolic arrays /." Online version of thesis, 1987. http://hdl.handle.net/1850/8857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Shafer, Lawrence E. "Data Driven Calculations Histories to Minimize IEEE-754 Floating-point Computational Error." NSUWorks, 2004. http://nsuworks.nova.edu/gscis_etd/830.

Full text
Abstract:
The widely implemented and used IEEE-754 floating-point specification defines a method by which floating-point values may be represented in fixed-width storage. This fixed-width storage does not allow the exact value of all rational numbers to be stored. While this is an accepted limitation of using the IEEE-754 specification, the problem is compounded when non-exact values are used to compute other values. Attempts to manage this problem have been limited to software implementations that require special programming at the source code level. While this approach works, the programmer must be aware of the software and explicitly write high-level code specifically referencing it. The entirety of a calculation is not available to the special software, so optimum results cannot always be obtained when the range of operand values is large. This dissertation proposes and implements an architecture that uses integer algorithms to minimize precision loss in complex floating-point calculations. This is done using runtime calculation operand values at a simulated hardware level. These calculations are coded in a high-level language such that the coder does not need to know the details of how the calculation is performed.
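To illustrate the underlying rounding problem, the sketch below shows how naive accumulation of IEEE-754 doubles drifts, and contrasts it with classical Kahan compensated summation. This is background only; it is not the integer-based calculation-history architecture proposed in the dissertation.

```python
# Illustration of the rounding problem the dissertation addresses: fixed-width
# IEEE-754 doubles cannot represent all rationals, and naive accumulation
# compounds the error. Kahan compensated summation is shown only as a classical
# point of comparison; it is not the integer-history approach described above.

def naive_sum(values):
    total = 0.0
    for v in values:
        total += v
    return total

def kahan_sum(values):
    """Compensated summation: carry the low-order bits lost at each step."""
    total, compensation = 0.0, 0.0
    for v in values:
        y = v - compensation
        t = total + y
        compensation = (t - total) - y   # what was rounded away
        total = t
    return total

values = [0.1] * 1_000_000               # 0.1 is not exactly representable
print(naive_sum(values) - 100_000.0)     # visible accumulated error
print(kahan_sum(values) - 100_000.0)     # error greatly reduced
```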
APA, Harvard, Vancouver, ISO, and other styles
23

Péchaud, Mickaël. "Shortest paths calculations, and applications to medical imaging." Phd thesis, Ecole Normale Supérieure de Paris - ENS Paris, 2009. http://tel.archives-ouvertes.fr/tel-00843997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Hu, Lihong. "Application of neural networks in the first principles calculations and computer-aided drug design /." View the Table of Contents & Abstract, 2004. http://sunzi.lib.hku.hk/hkuto/record/B30575503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Hu, Lihong, and 胡麗紅. "Application of neural networks in the first principles calculations and computer-aided drug design." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B45014796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Master, Cyrus Phiroze 1975. "Band structure and gain calculations for gallium nitride quantum-well structures." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9873.

Full text
Thesis (M.Eng.) — Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 113-115).
APA, Harvard, Vancouver, ISO, and other styles
27

Fornander, Hannes. "Denoising Monte Carlo Dose Calculations Using a Deep Neural Network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263096.

Full text
Abstract:
This thesis explores the possibility of using a deep neural network (DNN) to denoise Monte Carlo dose calculations for external beam radiotherapy. The dose distributions considered here are for inhomogeneous materials such as those of the human body. The purpose of the project is to explore whether a DNN is able to preserve important features of the dose distributions as well as to evaluate if there is a potential performance gain of using a DNN compared to the traditional approach of running a full Monte Carlo simulation. The network architecture considered in this thesis is a 3D version of the U-net. The results of using the 3D U-net for denoising suggest that it preserves the features of the dose distributions rather well while having a low propagation time. Thus, this indicates that the proposed approach could be a feasible alternative to quickly predict the final dose distribution.
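The thesis's exact network definition is not given here, so the following is only a minimal two-level 3D U-net-style sketch in PyTorch, assuming single-channel noisy dose volumes; channel counts and depth are illustrative rather than the architecture actually evaluated.

```python
import torch
import torch.nn as nn

# Minimal two-level 3D U-net-style denoiser, as a sketch only: channel counts
# and depth are illustrative, not the thesis architecture. Input is assumed to
# be a single-channel noisy dose volume.

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.down = nn.MaxPool3d(2)
        self.enc2 = conv_block(16, 32)
        self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)           # 16 skip + 16 upsampled
        self.out = nn.Conv3d(16, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                        # encoder, full resolution
        e2 = self.enc2(self.down(e1))            # encoder, half resolution
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.out(d1)                      # denoised dose volume

noisy = torch.randn(1, 1, 32, 32, 32)            # batch, channel, D, H, W
print(TinyUNet3D()(noisy).shape)                 # torch.Size([1, 1, 32, 32, 32])
```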
APA, Harvard, Vancouver, ISO, and other styles
28

Jämbeck, Joakim P. M. "Computer Simulations of Heterogenous Biomembranes." Doctoral thesis, Stockholms universitet, Institutionen för material- och miljökemi (MMK), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-101297.

Full text
Abstract:
Molecular modeling has come a long way during the past decades, and in the current thesis the modeling of biological membranes is the focus. The main method of choice has been classical Molecular Dynamics simulations, and for this technique a model Hamiltonian, or force field (FF), has been developed for lipids to be used for biological membranes. Further, ways of more accurately simulating the interactions between solutes and membranes have been investigated. An FF coined Slipids was developed and validated against a range of experimental data (Papers I-III). Several structural properties such as area per lipid, scattering form factors and NMR order parameters obtained from the simulations are in good agreement with available experimental data. Further, the compatibility of Slipids with amino acid FFs was proven. This, together with the wide range of lipids that can be studied, makes Slipids an ideal candidate for large-scale studies of biologically relevant systems. A solute's electron distribution changes as it is transferred from water to a bilayer, a phenomenon that cannot be fully captured with fixed-charge FFs. In Paper IV we propose a scheme of implicitly including these effects with fixed-charge FFs in order to model water-membrane partitioning more realistically. The results are in good agreement with experiments in terms of free energies, and further the differences between using this scheme and the more traditional approach were highlighted. The free energy landscape (FEL) of solutes embedded in a model membrane is explored in Paper V. This was done using biased sampling methods with a reaction coordinate that included intramolecular degrees of freedom (DoF). These DoFs were identified in different bulk liquids and then used in studies with bilayers. The FELs describe the conformational changes necessary for the system to follow the lowest free energy path. Besides this, the pitfalls of using a one-dimensional reaction coordinate are highlighted.
APA, Harvard, Vancouver, ISO, and other styles
29

Çubuk, Ekin Doğuş. "Investigating Non-Periodic Solids Using First Principles Calculations and Machine Learning Algorithms." Thesis, Harvard University, 2016. http://nrs.harvard.edu/urn-3:HUL.InstRepos:33493370.

Full text
Abstract:
Computational methods are expected to play an increasingly important role in materials design. In order to live up to these expectations, simulations need to have predictive power. To achieve this, there are two hurdles, both relating to the complexity of physical interactions. The first is the quantum mechanical interactions of ions and electrons at short timescales, which have proven difficult to simulate using classical computation. While it is now possible to model some properties and materials using first principles methods (e.g. density functional theory), accuracy, consistency and computational efficiency need to be improved to meet the demands of high-throughput materials design. The second hurdle is the difficulty of predicting the outcomes of interactions between ions at longer timescales. These interactions are central to some of the biggest mysteries in condensed matter physics, such as the glass transition. Meanwhile, the field of machine learning and artificial intelligence has seen rapid progress in the last decade. Due to improvements in hardware, software, and methodology, machine learning algorithms are now able to learn complex tasks by mastering fundamental concepts from data. Thus, this thesis explores the applicability of machine learning to the main challenges facing computational materials design. First, as a case study, we investigate the lithiation of amorphous silicon. We show that large unit cells need to be simulated to model lithium-silicon alloys accurately. By analyzing the geometric structures of local neighborhoods of silicon atoms, it is possible to explain the macroscopic behavior from microscopic signatures. In response to the first hurdle as discussed above, we train neural networks to reproduce energies of silicon structures and silicon-lithium alloys, which allows us to study much larger unit cells. We then explore silicon neural networks in detail, in order to explain how this specific machine learning architecture can model quantum mechanical interactions. The following two chapters focus on the second hurdle which arises from complex ionic configurations. By studying Lennard-Jones supercooled liquids, we try to resolve two mysteries related to supercooled liquids: 1) why the dynamics are spatially heterogeneous, and 2) why the relaxation time increases super-exponentially as the temperature is lowered. Through machine learning, we can resolve the first mystery quantitatively. Furthermore, we show that the second can also be resolved in our framework, by using empirical measurements of the machine learned representation, which we call ``softness''. Finally, we discuss the physical meaning of softness, by comparing it to other measures and applying unsupervised learning and reduced curve-fitting models.
Engineering and Applied Sciences - Applied Physics
APA, Harvard, Vancouver, ISO, and other styles
30

Filiz, Anil Yigit. "A New Approach For Better Load Balancing Of Visibility Detection And Target Acquisition Calculations." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612255/index.pdf.

Full text
Abstract:
Calculating visual perception of entities in simulations requires complex intersection tests between the line of sight and the virtual world. In this study, we focus on outdoor environments which consist of a terrain and various objects located on terrain. Using hardware capabilities of graphics cards, such as occlusion queries, provides a fast method for implementing these tests. In this thesis, we introduce an approach for better load balancing of visibility detection and target acquisition calculations by the use of occlusion queries. Our results show that, the proposed approach is 1.5 to 2 times more efficient than the existing algorithms on the average.
APA, Harvard, Vancouver, ISO, and other styles
31

CORDEIRO, THIAGO da S. "Estudo da propagacao de pulsos laser atraves de um sistema amplificador de pulsos ultracurtos para o desenvolvimento de um alargador temporal do tipo offner." reponame:Repositório Institucional do IPEN, 2009. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9409.

Full text
Master's dissertation — Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN/SP).
APA, Harvard, Vancouver, ISO, and other styles
32

OLIVEIRA, CARLOS A. de. "Um modelo para a analise estrutural de flanges de vasos de pressao nucleares." reponame:Repositório Institucional do IPEN, 1987. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9871.

Full text
Master's dissertation — Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP).
APA, Harvard, Vancouver, ISO, and other styles
33

TRYTI, JO, and JOHAN CARLSSON. "Similarity search in multimedia databases : Performance evaluation for similarity calculations in multimedia databases." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Djayapertapa, Lesmana. "A computational method for coupled aerodynamic-structural calculations in unsteady transonic flow with active control study." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kim, Myung-Hee Y. "Calculations of the interactions of energetic ions with materials for protection of computer memory and biological systems." W&M ScholarWorks, 1995. https://scholarworks.wm.edu/etd/1539623872.

Full text
Abstract:
Theoretical calculations were performed for the propagation and interactions of particles having high atomic numbers and energy through diverse shield materials, including polymeric materials and epoxy-bound lunar regolith, by using transport codes for laboratory ion beams and the cosmic ray spectrum. Heavy ions fragment and lose energy upon interactions with shielding materials of specified elemental composition, density, and thickness. A fragmenting heavy iron ion produces hundreds of isotopes during nuclear reactions, which are treated in the solution of the transport problem used here. A reduced set of 80 isotopes is sufficient to represent the charge distribution, but a minimum of 122 isotopes is necessary for the mass distribution. These isotopes are adequate for ion beams with charges equal to or less than 26. To predict the single event upset (SEU) rate in electronic devices, the resultant linear energy transfer (LET) spectra from the transport code behind various materials are coupled with a measured SEU cross section versus LET curve. The SEU rate on static random access memory (SRAM) is shown as a function of shield thickness for various materials. For a given mass the most effective shields for SEU reduction are materials with high hydrogen density, such as polyethylene. The shield effectiveness for protection of biological systems is examined by using conventional quality factors to calculate the dose equivalents and also by using the probability of the neoplastic transformation of shielded C3H10T1/2 mouse cells. The attenuation of biological effects within the shield and body tissues depends on the material properties. The results predict that hydrogenous materials are good candidates for high-performance shields. Two biological models were used. Quantitative results depended upon the model.
APA, Harvard, Vancouver, ISO, and other styles
36

Bala, Jaswanth. "Filtering estimated series of residential burglaries using spatio-temporal route calculations." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11822.

Full text
Abstract:
Context. According to the Swedish National Council for Crime Prevention, there has been an increase of 19% in residential burglary crimes in Sweden over the last decade, and only 5% of the total crimes reported were actually solved by the law enforcement agencies. In order to solve these cases quickly and efficiently, the law enforcement agencies have to look into possible linked serial crimes. Many studies have suggested linking crimes based on modus operandi and other characteristics. Sometimes crimes between which travel is not spatially possible within the reported times, but which have a similar modus operandi, are also grouped as linked crimes. Investigating such crimes could waste the resources of the law enforcement agencies. Objectives. In this study, we investigate the use of travel distance and travel duration between different crime locations when linking residential burglary crimes. A filtering method has been designed and implemented for filtering unlinked crimes out of the estimated linked crimes by utilizing the distance and duration values. Methods. The objectives in this study are satisfied by conducting an experiment. The travel distance and travel duration values are obtained from various online direction services. The filtering method was first validated on ground truth represented by known linked crime series and then used to filter out crimes from the estimated linked crimes. Results. The filtering method removed a total of 4% of unlinked crimes from the estimated linked crime series when the travel mode was driving, and a total of 23% when the travel mode was walking. It was also found that a burglar takes an average of 900 seconds (15 minutes) to commit a burglary. Conclusions. From this study it is evident that the use of spatial and temporal values in linking residential burglaries gives effective crime links in a series. Also, the use of Google Maps for obtaining distance and duration values can increase the overall performance of the filtering method in linking crimes.
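The core feasibility test can be summarised in a few lines: two burglaries can only be linked if the gap between their reported times is at least the travel duration between the scenes plus a minimum dwell time. The sketch below assumes travel durations are supplied by an external directions service; the crimes, times and lookup function are invented, apart from the 900-second dwell time mentioned in the abstract.

```python
from datetime import datetime, timedelta

# Minimal sketch of the spatio-temporal filtering idea: two burglaries can only
# belong to the same series if the offender could travel between the scenes
# (plus a minimum time spent committing the first burglary) within the gap
# between the reported times. Travel durations are assumed to come from an
# external directions service; the 900 s dwell time follows the abstract.

MIN_DWELL = timedelta(seconds=900)   # average time to commit a burglary

def travel_feasible(time_a, time_b, travel_duration):
    """True if crime B is reachable from crime A given the reported times."""
    gap = abs(time_b - time_a)
    return gap >= travel_duration + MIN_DWELL

def filter_series(crimes, travel_duration_lookup):
    """Drop crimes that are not reachable from the previously kept crime."""
    kept = [crimes[0]]
    for curr in crimes[1:]:
        prev = kept[-1]
        duration = travel_duration_lookup(prev["id"], curr["id"])
        if travel_feasible(prev["time"], curr["time"], duration):
            kept.append(curr)
    return kept

# Hypothetical example: the second crime is reported too soon after the first.
crimes = [
    {"id": 1, "time": datetime(2016, 3, 1, 22, 0)},
    {"id": 2, "time": datetime(2016, 3, 1, 22, 20)},
    {"id": 3, "time": datetime(2016, 3, 2, 1, 30)},
]
lookup = lambda a, b: timedelta(minutes=25)      # stand-in for a directions API
print([c["id"] for c in filter_series(crimes, lookup)])   # [1, 3]
```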
APA, Harvard, Vancouver, ISO, and other styles
37

Mahankali, Uma. "Computer Simulation Studies of CLC Chloride Channels and Transporters." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1157115905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Chi, Li-Jen. "Synthesis and computer-aided structural investigation of potentially photochromic spirooxazines." Thesis, Heriot-Watt University, 2000. http://hdl.handle.net/10399/564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Khodaverdi, Afaghi Mahtab. "Application of artificial neural network modeling in thermal process calculations of canned foods." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0033/MQ64381.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Rönnberg, Arvid. "Evaluation and Sensitivity Analysis of Cost Calculations in the Thermo-Economic Modeling of CSP Plants." Thesis, KTH, Kraft- och värmeteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-176876.

Full text
Abstract:
Thermo-economic modeling refers to the process of estimating the cost and performance of a power plant using cost oriented equations and reference data. In this thesis the fundamentals of cost and performance modeling as well as sensitivity analysis is researched and applied to an existing model in the field of concentrated solar power. The thesis aims to isolate the sources of possible errors and presents comprehensible methods of minimizing the sensitivity these give rise to. The extensive literature study provides the knowledge and methodologies necessary to perform an evaluation of a computer model and these methodologies are applied to the tool DYESOPT developed at the Royal Institute of Technology.   The evaluation highlights the importance of reliable references of operational solar power plants and also the current lack of such data. A particular area suffering from this is the cost estimation, which includes assumptions and requires future revisions. The sensitivity analysis methodologies one-at-a-time and the sensitivity index are used to locate the areas where extra care must be taken in order to minimize error as well as provide an understanding of the internal correlation of critical inputs.   The results show that the accuracy of the model is dominated by three inputs: solar multiple, tower height and storage time, and that certain intervals and combinations of these decide the overall error of the model. By isolating the intervals in which the sensitivity is at its minimum the model error can be roughly quantified with a class system using standard error intervals. For a model such as DYESOPT a minimum error of 20 to 30 percent is a reasonable assumption.
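As an illustration of the one-at-a-time screening mentioned above, the sketch below perturbs each input of a stand-in cost model by ±10% and records the relative change in output. The model and coefficients are invented; the actual DYESOPT cost functions are not reproduced here.

```python
# Minimal sketch of one-at-a-time (OAT) sensitivity screening: perturb each
# input by a fixed fraction around a baseline and record the relative change
# in the model output. The cost model below is a stand-in, not DYESOPT.

def toy_cost_model(solar_multiple, tower_height_m, storage_hours):
    """Stand-in cost model with made-up coefficients (illustration only)."""
    return (40.0 * solar_multiple
            + 0.8 * tower_height_m ** 1.2
            + 12.0 * storage_hours)

def oat_sensitivity(model, baseline, perturbation=0.10):
    """Relative output change when each input is varied by +/- perturbation."""
    base_out = model(**baseline)
    result = {}
    for name, value in baseline.items():
        outs = []
        for sign in (-1.0, +1.0):
            point = dict(baseline, **{name: value * (1.0 + sign * perturbation)})
            outs.append(model(**point))
        result[name] = (max(outs) - min(outs)) / base_out
    return result

baseline = {"solar_multiple": 2.0, "tower_height_m": 180.0, "storage_hours": 9.0}
print(oat_sensitivity(toy_cost_model, baseline))
```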
APA, Harvard, Vancouver, ISO, and other styles
41

Eggers, Patrick. "Parallelization of ray casting for solar irradiance calculations in urban environments." Thesis, Högskolan i Gävle, Samhällsbyggnad, GIS, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-26144.

Full text
Abstract:
The growing number of photovoltaic systems in urban environments creates peaks of energy generation in local energy grids. These peaks can lead to unwanted instability in the electrical grid. By aligning solar panels differently, such spikes could be avoided. Planning locations for solar panels in urban environments is very time-intensive, as it requires a high spatial and temporal resolution. The aim of this thesis is to investigate the decrease in runtime of planning applications achieved by parallelizing ray-casting algorithms. This thesis includes a software tool for professionals and laymen, which has been developed in a user-centered design process and shows ways to perform those calculations on a graphics processing unit. After creating a computational concept and a concept of the software design, those concepts were implemented, starting with an implementation of the Möller-Trumbore ray-casting algorithm run with Python on the central processing unit (CPU). Further, the same test with the same algorithm and the same data was performed on the graphics processing unit (GPU) using PyCUDA, a Python wrapper for NVIDIA's Compute Unified Device Architecture (CUDA). Both results were compared, showing that parallelizing, transferring and performing those calculations on the graphics processing unit can decrease the runtime of a software tool significantly. In the system setup used, the same calculations were 42 times faster on the graphics processing unit than on the central processing unit. It was also found that other factors, such as the time of the year, the location of the tested points in the data model, the test interval length and the design of the ray-casting algorithm, have a major impact on performance. In the test scenario the processing time for the same case, but during another time of the year, increased by a factor of 4. The findings of this thesis can be used in a wide range of software, as they show that computationally intensive calculations can easily be sourced out from Python code and executed on another platform. By doing so, the runtime can be significantly decreased and the whole software package can gain an enormous speed boost.
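The Möller-Trumbore test referred to above is a standard ray/triangle intersection algorithm. The sketch below is a plain single-ray NumPy version for illustration; the PyCUDA kernel used in the thesis is not reproduced here.

```python
import numpy as np

# Sketch of the Möller-Trumbore ray/triangle intersection test that the thesis
# parallelises with PyCUDA. This is a plain NumPy, single-ray version for
# illustration; the GPU kernel itself is not reproduced here.

def moller_trumbore(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the ray parameter t of the hit point, or None if there is no hit."""
    edge1, edge2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:                      # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det
    return t if t > eps else None           # hit only in front of the origin

# Example: a ray along +z hitting a triangle in the z = 1 plane.
t = moller_trumbore(np.array([0.25, 0.25, 0.0]), np.array([0.0, 0.0, 1.0]),
                    np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]),
                    np.array([0.0, 1.0, 1.0]))
print(t)   # 1.0
```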
APA, Harvard, Vancouver, ISO, and other styles
42

YORIYAZ, HELIO. "Implementacao de queima espacial modificando o programa nodal baseado no metodo de elementos finitos e matriz resposta." reponame:Repositório Institucional do IPEN, 1986. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9855.

Full text
Master's dissertation — Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP).
APA, Harvard, Vancouver, ISO, and other styles
43

DALTRO, TERESINHA F. L. "Desenvolvimento de uma nova metodologia para o calculo de dose em dosimetria fotografica." reponame:Repositório Institucional do IPEN, 1994. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10357.

Full text
Master's dissertation — Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP).
APA, Harvard, Vancouver, ISO, and other styles
44

Bergmark, Fabian. "Online aggregate tables : A method for implementing big data analysis in PostgreSQL using real time pre-calculations." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207808.

Full text
Abstract:
In modern user-centric applications, data gathering and analysis is often of vital importance. Current trends in data management software show that traditional relational databases fail to keep up with the growing data sets. Outsourcing data analysis often means data is locked in with a particular service, making transitions between analysis systems nearly impossible. This thesis implements and evaluates a data analysis framework implemented completely within a relational database. The framework provides a structure for implementations of online algorithms of analytical methods to store precomputed results. The result is an even resource utilization with predictable performance that does not decrease over time. The system keeps all raw data gathered to allow for future exportation. A full implementation of the framework is tested based on the current analysis requirements of the company Shortcut Labs, and performance measurements show no problem with managing data sets of over a billion data points.
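The central idea, keeping per-bucket aggregates up to date as events arrive instead of rescanning raw data at query time, can be sketched outside the database as well. The in-memory Python example below only illustrates that bookkeeping; the thesis implements it inside PostgreSQL, and the event fields used here are hypothetical.

```python
from collections import defaultdict

# Sketch of the core idea behind online aggregate tables: instead of scanning
# all raw events at query time, keep per-bucket running aggregates that are
# updated as each event arrives. This in-memory version only illustrates the
# bookkeeping; the thesis does it inside PostgreSQL, and the event fields
# below are hypothetical.

class OnlineAggregate:
    def __init__(self):
        self.raw_events = []                             # raw data is kept
        self.daily = defaultdict(lambda: {"count": 0, "sum": 0.0})

    def ingest(self, event):
        """O(1) update of the precomputed aggregate for the event's day."""
        self.raw_events.append(event)
        bucket = self.daily[event["day"]]
        bucket["count"] += 1
        bucket["sum"] += event["value"]

    def average(self, day):
        """Answered from the aggregate table, without touching raw events."""
        bucket = self.daily[day]
        return bucket["sum"] / bucket["count"] if bucket["count"] else None

agg = OnlineAggregate()
for e in [{"day": "2017-05-01", "value": 3.0},
          {"day": "2017-05-01", "value": 5.0},
          {"day": "2017-05-02", "value": 2.0}]:
    agg.ingest(e)
print(agg.average("2017-05-01"))   # 4.0
```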
APA, Harvard, Vancouver, ISO, and other styles
45

Song, Hyun Deok. "Computer Simulation Studies of Ion Channels at High Temperatures." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1328890332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

PEREIRA, THIAGO M. "Determinacao da difusividade termica do esmalte e dentina em funcao da temperatura, utilizando termografia no infravermelho." reponame:Repositório Institucional do IPEN, 2009. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9449.

Full text
Master's dissertation — Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN/SP).
APA, Harvard, Vancouver, ISO, and other styles
47

POSSANI, RAFAEL G. "Re-engenharia do software SCMS para uma linguagem orientada a objetos (JAVA) para uso em construções de phantoms segmentados." reponame:Repositório Institucional do IPEN, 2012. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9933.

Full text
Master's dissertation — Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN/SP).
APA, Harvard, Vancouver, ISO, and other styles
48

GIOVANINNI, ADRIANO. "Estudo dos riscos apresentados pelos radioisótopos após serem submetidos aos efeitos da detonação de um artefato explosivo." reponame:Repositório Institucional do IPEN, 2012. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10111.

Full text
Master's dissertation — Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN/SP).
APA, Harvard, Vancouver, ISO, and other styles
49

Miller, Richard Allen. "Computer simulation of continuous fermentation of glucose to ethanol with the use of an expert system for parameter calculations and applications for bioreactor control." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/41545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

XIMENES, EDMIR. "Modelagem computacional do manequim matemático da mulher brasileira para cálculos de dosimetria interna e para fins de comparação das frações absorvidas específicas com a mulher referência." reponame:Repositório Institucional do IPEN, 2006. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11451.

Full text
Doctoral thesis — Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP).
APA, Harvard, Vancouver, ISO, and other styles