To see the other types of publications on this topic, follow the link: Calculative comparison.

Dissertations / Theses on the topic 'Calculative comparison'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Calculative comparison.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Lee, Li-Chyn 1965. "Comparison of Monte Carlo and analytic critical area calculation." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278175.

Full text
Abstract:
Since the profitability of VLSI industries is related to yield, IC manufacturers find it highly desirable to be able to predict yield by computer-aided methods. A key part of the procedure for obtaining yield by computer simulation is finding the critical area of a layout. This thesis is primarily devoted to the calculation of critical area. There are two techniques for finding the critical area. In the first, an analytic method is used to analyze the circuit geometry in order to find the critical area. In the second, a Monte Carlo method is used. A program using Monte Carlo yield simulation (the main method used in this thesis) has been developed for determining the critical area of the metal layer of a 4K random access memory; the analytic method is used in a supporting role. The thesis also proposes a simple method for processing the vast layout database, which reduces the time consumed by the Monte Carlo simulation.
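The Monte Carlo versus analytic comparison at the heart of this thesis can be illustrated on a deliberately simple geometry. The sketch below is not the thesis's layout or defect model: it assumes two parallel wires of width w separated by spacing s, and a circular extra-metal defect of radius r whose centre falls uniformly in the strip, so a short occurs exactly when the centre lies in a band of width 2r - s.

```python
import numpy as np

# Hypothetical geometry (not from the thesis): wires occupy y in [0, w] and
# [w + s, 2w + s]; a defect of radius r shorts them when w + s - r < y < w + r.
w, s, r, H = 2.0, 1.0, 0.8, 5.0   # widths, spacing, defect radius, strip height

def analytic_critical_fraction(w, s, r, H):
    """Width of the critical band divided by the strip height."""
    return max(2.0 * r - s, 0.0) / H

def monte_carlo_critical_fraction(w, s, r, H, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.uniform(0.0, H, size=n)          # random defect centres
    shorts = (y > w + s - r) & (y < w + r)   # defect bridges the gap
    return shorts.mean()

mc = monte_carlo_critical_fraction(w, s, r, H)
exact = analytic_critical_fraction(w, s, r, H)
print(mc, exact)  # the two estimates agree closely
```

For this geometry the analytic answer is (2*0.8 - 1.0)/5 = 0.12, and the Monte Carlo estimate converges to it as n grows, which is the same cross-check role the analytic method plays in the thesis.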
APA, Harvard, Vancouver, ISO, and other styles
2

Díaz, José Antonio, and Virendra N. Mahajan. "Diffraction and geometrical optical transfer functions: calculation time comparison." SPIE-INT SOC OPTICAL ENGINEERING, 2017. http://hdl.handle.net/10150/626488.

Full text
Abstract:
In a recent paper, we compared the diffraction (DOTF) and geometrical (GOTF) optical transfer functions of an optical imaging system, and showed that the GOTF approximates the DOTF within 10% when a primary aberration is about two waves or larger [Appl. Opt. 55, 3241-3250 (2016)]. In this paper, we determine and compare the times to calculate the DOTF by autocorrelation or digital autocorrelation of the pupil function, and by a Fourier transform (FT) of the point-spread function (PSF); and the GOTF by an FT of the geometrical PSF and of its approximation, the spot diagram. Our starting point for calculating the DOTF is the wave aberrations of the system in its pupil plane, and for the GOTF the ray aberrations in the image plane. The numerical results for primary aberrations and a typical imaging system show that the direct integrations are slow, but that calculating the DOTF by an FT of the PSF is generally faster than calculating the GOTF by an FT of the spot diagram.
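The "OTF as a Fourier transform of the PSF" route the abstract times can be sketched in a few lines. The Gaussian PSF below is a stand-in assumption, not the paper's aberrated PSFs; the point is only the normalisation convention OTF(0, 0) = 1.

```python
import numpy as np

n = 256
x = np.linspace(-8, 8, n)
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / 2.0)      # stand-in point-spread function
psf /= psf.sum()                        # unit volume

# FFT of the centred PSF gives the OTF; shift so zero frequency is central.
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
otf /= otf[n // 2, n // 2]              # normalise: OTF(0, 0) = 1
mtf = np.abs(otf)                       # modulation transfer function
```

Because the PSF is non-negative, the MTF peaks at 1 at zero frequency; timing this single FFT against a direct autocorrelation of the pupil function reproduces the kind of comparison the paper reports.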
APA, Harvard, Vancouver, ISO, and other styles
3

Moore, James E. (James Ernest). "A comparison of the calculation schemes for computing weld cooling." Dissertation, Mechanical Engineering, Carleton University, Ottawa, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Purkiss, Sheila B. A. "Comparison of methods for calculating internal work of elite running." Thesis, University of Ottawa (Canada), 1996. http://hdl.handle.net/10393/10112.

Full text
Abstract:
There are two basic models that are used to calculate the internal work involved in movement. The first, an energy-based model, calculates the changes in the energy of the segments. There are many variations of this model but Aleshinsky (1986) has shown that this approach lacks mathematical validity. The other, a power-based model, integrates the joint powers to find work. A modified power model (using absolute values) was shown by Aleshinsky (1986) to be mathematically valid but has only been used in two studies (Chapman et al., 1987; Caldwell and Forrester, 1992) each having only one subject. A version of this model was used in this study and was termed the absolute power method. For comparison purposes a modified version of the energy approach, called the absolute work method, was used. The internal work was then normalized for body mass and running velocity to obtain the "internal biomechanical cost" (IBC). The IBCs of normal running for four elite male and four elite female runners were compared to their IBCs of four inefficient running styles. The absolute power method was able to detect that the inefficient runs produced significantly higher internal work than normal running in 30 out of 32 cases (94%). Absolute work (the energy approach) could detect the inefficient runs in only 15 out of 32 cases (46%). As well, the absolute work approach was shown to be more variable and less reliable than the absolute power approach. The absolute power method also proved to be a useful tool for examining the work performed at each joint during a movement, thereby providing insight into where significant inefficiencies occur.
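The absolute power method described above integrates the absolute value of each joint's power (torque times angular velocity) over time. The sketch below uses invented sinusoidal joint data, not the thesis's running measurements, to show the computation.

```python
import numpy as np

# Hypothetical joint torques (N*m) and angular velocities (rad/s) over one
# stride; the absolute power method sums integral of |torque * omega| per joint.
t = np.linspace(0.0, 1.0, 501)                      # stride time, seconds
dt = t[1] - t[0]
torque = {"hip": 40 * np.sin(2 * np.pi * t),
          "knee": 25 * np.sin(2 * np.pi * t + 1.0)}
omega = {"hip": 3 * np.cos(2 * np.pi * t),
         "knee": 4 * np.cos(2 * np.pi * t + 1.0)}

power = {j: torque[j] * omega[j] for j in torque}   # joint powers, watts
internal_work = sum(float(np.sum(np.abs(p)) * dt)   # rectangle-rule integral
                    for p in power.values())
print(internal_work)  # joules over the stride
```

Keeping the per-joint integrals separate, rather than summing them immediately, is what lets the method localise inefficiencies at individual joints, as the abstract notes.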
APA, Harvard, Vancouver, ISO, and other styles
5

Eleuterio, Daniel Patrick. "A comparison of bulk aerodynamic methods for calculating air-sea flux." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA359032.

Full text
Abstract:
Thesis (M.S. in Meteorology and Physical Oceanography) Naval Postgraduate School, December 1998.
"December 1998." Thesis advisor(s): Qing Wang. Includes bibliographical references (p. 77-80). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
6

Schach, Rainer, and Manuel Hentschel. "Grundlagen für die Nutzwertanalyse für Verstärkungen aus textilbewehrtem Beton." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1244049476991-75979.

Full text
Abstract:
Within the transfer project, construction-management framework conditions and parameters suitable for assessing the economic application of the method are to be developed. The application of textile-reinforced concrete in the repair and strengthening of large-area concrete components will be investigated. In general, construction tasks can in very many cases be realised by different construction methods, which regularly differ in cost and required construction time, but also in delivered quality and environmental impact. From a construction-management perspective, the calculative comparison of methods is traditionally used to determine the method with which the work can be executed most economically. If qualitative criteria are also to be taken into account in the comparison of methods, several approaches are available; the term utility value analysis (Nutzwertanalyse) is often used as a synonym for these non-monetary evaluation procedures, and the title of this contribution is to be understood in this sense. The basis is formed by the construction-management framework conditions determined within this research project, including the development of a dry mix of the concrete to be used, derived from the standard formulation previously used at TU Dresden, and of suitable machines for applying the textile-reinforced concrete.
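The utility value analysis (Nutzwertanalyse) referred to above reduces to a weighted-criteria score: each construction method is rated per criterion, and the weighted ratings are summed. The criteria, weights and scores below are invented for illustration only.

```python
# Weights must sum to 1; scores are on an assumed 0-10 scale.
weights = {"cost": 0.4, "duration": 0.2, "quality": 0.3, "environment": 0.1}

methods = {
    "textile-reinforced concrete": {"cost": 7, "duration": 8, "quality": 9, "environment": 8},
    "conventional shotcrete":      {"cost": 8, "duration": 7, "quality": 6, "environment": 5},
}

def utility_value(scores, weights):
    """Weighted sum of criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in methods.items():
    print(name, utility_value(scores, weights))
```

A purely calculative comparison would keep only the "cost" column; the utility value folds the qualitative criteria into a single comparable number.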
APA, Harvard, Vancouver, ISO, and other styles
7

Neethling, Willem Francois. "Comparison of methods to calculate measures of inequality based on interval data." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/97780.

Full text
Abstract:
Thesis (MComm)—Stellenbosch University, 2015.
ENGLISH ABSTRACT: In recent decades, economists and sociologists have taken an increasing interest in the study of income attainment and income inequality. Many of these studies have used census data, but social surveys have also increasingly been utilised as sources for these analyses. In these surveys, respondents’ incomes are most often not measured in true amounts, but in categories of which the last category is open-ended. The reason is that income is seen as sensitive data and/or is sometimes difficult to reveal. Continuous data divided into categories is often more difficult to work with than ungrouped data. In this study, we compare different methods to convert grouped data to data where each observation has a specific value or point. For some methods, all the observations in an interval receive the same value; an example is the midpoint method, where all the observations in an interval are assigned the midpoint. Other methods include random methods, where each observation receives a random point between the lower and upper bound of the interval. For some methods, random and non-random, a distribution is fitted to the data and a value is calculated according to the distribution. The non-random methods that we use are the midpoint-, Pareto means- and lognormal means methods; the random methods are the random midpoint-, random Pareto- and random lognormal methods. Since our focus falls on income data, which usually follows a heavy-tailed distribution, we use the Pareto and lognormal distributions in our methods. The above-mentioned methods are applied to simulated and real datasets. The raw values of these datasets are known, and are categorised into intervals. These methods are then applied to the interval data to reconvert the interval data to point data. To test the effectiveness of these methods, we calculate some measures of inequality. 
The measures considered are the Gini coefficient, quintile share ratio (QSR), the Theil measure and the Atkinson measure. The estimated measures of inequality, calculated from each dataset obtained through these methods, are then compared to the true measures of inequality.
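The simplest of the conversion methods compared above, the midpoint method, and one of the inequality measures, the Gini coefficient, can be sketched together. The income brackets and counts below are invented, not the study's data.

```python
import numpy as np

# Invented income brackets (currency units) and household counts.
brackets = [(0, 10_000), (10_000, 25_000), (25_000, 50_000), (50_000, 100_000)]
counts   = [300, 450, 200, 50]

# Midpoint method: every observation in an interval gets the interval midpoint.
incomes = np.repeat([(lo + hi) / 2 for lo, hi in brackets], counts)

def gini(x):
    """Gini coefficient via the sorted-rank formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * x)) / (n * x.sum()) - (n + 1) / n

g = gini(incomes)
print(round(g, 3))
```

Comparing g against the Gini of the raw (ungrouped) incomes, when those are known, is exactly the effectiveness test the study performs; the random methods would replace `np.repeat` with draws between each interval's bounds.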
APA, Harvard, Vancouver, ISO, and other styles
8

Johns, Dewi. "Radiotherapy dose calculation in oesophageal cancer : comparison of analytical and Monte Carlo methods." Thesis, Cardiff University, 2016. http://orca.cf.ac.uk/105551/.

Full text
Abstract:
In this work a distributed computing system (RTGrid) has been configured and deployed to provide a statistically robust comparison of Monte Carlo (MC) and analytical dose calculations. 52 clinical oesophageal radiotherapy plans were retrospectively re-calculated using the Pencil Beam Enhanced (PBE) and Collapsed Cone Enhanced (CCE) algorithms within the Oncentra v4.3 radiotherapy (RT) Treatment Planning System (TPS). Simulations were performed using the BEAMnrc and DOSXYZnrc codes. The Computing Environment for Radiotherapy Research (CERR) was used to calculate Dose Volume Histogram (DVH) parameters, such as the volume of the Planning Target Volume (PTV) receiving 95% of the dose (V95%), for the PBE, CCE and MC calculated dose distributions. An initial sample of 12 oesophageal radiotherapy treatment plans was simulated using the RTGrid system. The differences in the DVH parameters between the dose calculation methods, and the variance in the 12 cases, were used to calculate the sample size needed. The required sample size was determined to be 37, so a further 40 oesophageal cases were simulated, following the same method. The median difference in the PTV V95% between CCE and MC in the group of 40 cases was found to be 3%. To choose a suitable test for the statistical significance of the difference, the Shapiro-Wilk test was performed, which showed that the differences between the two sets of PTV V95% values did not follow a Gaussian distribution. Therefore the Wilcoxon matched-pairs test was indicated, which showed that the null hypothesis (i.e. that the distributions are the same) was rejected with a p-value less than 0.001, so there is very strong evidence for a difference in the two sets of PTV V95% values. Similar statistical analyses were performed for other DVH parameters, as well as for conformance indices used to describe the agreement between the 95% dose and the PTV, and for estimates of the Tumour Control Probability (TCP).
From the results, the use of MC simulations is recommended when non-soft-tissue voxels make up more than 60% of the PTV.
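The statistical decision path described in the abstract (Shapiro-Wilk on the paired differences, falling back to the Wilcoxon matched-pairs test when normality is rejected) can be sketched with SciPy. The synthetic paired values below stand in for the PTV V95% data, which is not reproduced here.

```python
import numpy as np
from scipy import stats

# Synthetic paired samples: "mc" is systematically higher than "cce",
# with skewed (log-normal) differences, mimicking a non-Gaussian setting.
rng = np.random.default_rng(1)
cce = rng.lognormal(mean=0.0, sigma=0.4, size=40)
mc  = cce + rng.lognormal(mean=-1.0, sigma=0.6, size=40)

diff = mc - cce
_, p_normal = stats.shapiro(diff)        # test normality of the differences
if p_normal < 0.05:                      # differences not Gaussian...
    _, p = stats.wilcoxon(mc, cce)       # ...use the matched-pairs test
else:
    _, p = stats.ttest_rel(mc, cce)      # otherwise a paired t-test suffices
print(p_normal, p)
```

With every difference positive by construction, either branch rejects the null hypothesis decisively, mirroring the p < 0.001 result reported above.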
APA, Harvard, Vancouver, ISO, and other styles
9

Schneider, Allison (Allison M. ). "A comparison of kinematic and dynamic schemes for calculating long-range atmospheric trajectories." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/114336.

Full text
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-58).
Two numerical models, one kinematic and one dynamic, were created and compared in their ability to predict trajectories of atmospheric parcels over eight days. While kinematic models are more widely used due to their accuracy, dynamic models can be used pedagogically to visualize the balance of forces in the atmosphere. The kinematic model used gridded wind speed data from the Global Forecast System (GFS) to predict parcel flow, while the dynamic model calculated wind speeds from advection equations using geopotential height fields from GFS. The trajectories of ensembles of parcels were simulated from five launch locations. The spread of parcels from each location was calculated along with the deviation from reference trajectories. The dynamic model performed comparably to the kinematic model, despite the presence of inertial oscillations in some computed trajectories at mid- and high- latitudes which are likely to be physically unrealistic. The dynamic model was more sensitive to changes in spatial resolution than the kinematic model. Dynamic trajectory models were shown to be accurate enough to be used as a tool to visualize the interplay of forces acting in the atmosphere.
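The kinematic model described above advects parcels through a wind field step by step. The sketch below substitutes an analytic solid-body-rotation field for gridded GFS winds and uses simple forward-Euler steps, both assumptions for illustration; it also shows the kind of integration error (a spurious outward spiral) that trajectory schemes must control.

```python
import numpy as np

def wind(x, y):
    """Assumed solid-body rotation about the origin (u, v in m/s)."""
    omega = 1e-4                # rotation rate, 1/s
    return -omega * y, omega * x

def trajectory(x0, y0, dt=600.0, n_steps=1008):   # 600 s steps, 7 days
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        u, v = wind(xs[-1], ys[-1])
        xs.append(xs[-1] + u * dt)    # forward-Euler position update
        ys.append(ys[-1] + v * dt)
    return np.array(xs), np.array(ys)

xs, ys = trajectory(1e6, 0.0)
r = np.hypot(xs, ys)
print(r[0], r[-1])   # Euler spirals outward from the true circular path
```

The true trajectory here is a circle of constant radius; the growing radius quantifies the scheme's error, analogous to the deviation-from-reference-trajectory metric used in the thesis.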
by Allison Schneider.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
10

Gold, Erica Ashley. "Calculating likelihood ratios for forensic speaker comparisons using phonetic and linguistic parameters." Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/6166/.

Full text
Abstract:
The research presented in this thesis examines the calculation of numerical likelihood ratios using phonetic and linguistic parameters derived from a corpus of recordings of speakers of Southern Standard British English. The research serves as an investigation into the development of the numerical likelihood ratio as a medium for framing forensic speaker comparison conclusions. The thesis begins by investigating which parameters are claimed to be the most useful speaker discriminants according to expert opinion, and in turn examines four of these ‘selected/valued’ parameters individually in relation to intra- and inter-speaker variation, their capacities as speaker discriminants, and the potential strength of evidence they yield. The four parameters analyzed are articulation rate, fundamental frequency, long-term formant distributions, and the incidence of clicks (velaric ingressive plosives). The final portion of the thesis considers the combination of the four parameters under a numerical likelihood ratio framework in order to provide an overall likelihood ratio. The contributions of this research are threefold. Firstly, the thesis presents for the first time a comprehensive survey of current forensic speaker comparison practices around the world. Secondly, it expands the phonetic literature by providing acoustic and auditory analysis, as well as population statistics, for four phonetic and linguistic parameters that survey participants have identified as effective speaker discriminants. And thirdly, it contributes to the forensic speech science and likelihood ratios for forensics literature by considering what steps can be taken to conceptually align the area of forensic speaker comparison with more developed areas of forensic science (e.g. DNA) by creating a human-based (auditory and acoustic-phonetic) forensic speaker comparison system.
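The final combination step described above, folding per-parameter likelihood ratios into one overall ratio, can be sketched with toy numbers. Everything below is invented: the observed differences, the Gaussian same-speaker and different-speaker models, and the naive assumption that the parameters are independent (so their LRs multiply).

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# (observed difference, same-speaker (mu, sd), different-speaker (mu, sd))
parameters = {
    "articulation_rate": (0.3, (0.0, 0.4), (0.0, 1.2)),
    "f0_mean":           (5.0, (0.0, 8.0), (0.0, 25.0)),
}

lr = 1.0
for name, (delta, (mu_s, sd_s), (mu_d, sd_d)) in parameters.items():
    lr *= gaussian_pdf(delta, mu_s, sd_s) / gaussian_pdf(delta, mu_d, sd_d)

print(lr)  # LR > 1 favours the same-speaker hypothesis
```

Real forensic systems estimate these distributions from population data and must handle correlation between parameters; the multiplication here is only the simplest combination rule.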
APA, Harvard, Vancouver, ISO, and other styles
11

Schaefer, Martin. "Methodologies for aviation emission calculation : a comparison of alternative approaches towards 4D global inventories." [Berlin] : [Univ.-Bibliothek der Techn. Univ.], 2006. http://opus.kobv.de/tuberlin/volltexte/2006/1376/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Pettersson, Anna, and Carolina Puga Staffan. "A Comparison between the Wickström Compartment Fire Model with Experiments and other Calculation Methods." Thesis, Luleå tekniska universitet, Byggkonstruktion och brand, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-87262.

Full text
Abstract:
An Excel spreadsheet application has been developed for calculating the pre-flashover compartment fire temperature according to the method described in the book "Temperature Calculation in Fire Safety Engineering" (Wickström, 2016), here called the Wickström Compartment Fire Model (WCFM); in this model the surrounding structure can be assumed to be either lumped heat or semi-infinite. The Excel spreadsheet is user-friendly and time-efficient. The model has been used to calculate the gas-layer temperature, which is compared with results from experiments and with calculations made using the alternative methods MQH, FDS and CFAST. The compartment fire experiments chosen for comparison were performed in a normal-sized room with a door opening. The walls, floor and ceiling of the room consisted of the same material, which was changed between the tests. Four materials were used: lightweight concrete, uninsulated steel, and steel insulated on the inside or the outside. The results of the comparison show that the temperature calculated with the WCFM follows both the experimental and the FDS temperature curves well for three of the tests. The lightweight concrete test differs, with the WCFM eventually stabilising at a lower temperature. For the other three tests, the differences between the WCFM, the experiments and FDS occur mainly at the beginning of the tests, where the WCFM predicts temperatures that are on average 100 °C higher or lower than measured, depending on the test setup. By the end of the tests, the WCFM, the experiments and FDS coincide and give essentially the same final temperatures. The temperatures from MQH and CFAST differ considerably from the experiments in all the tests.
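One of the comparison methods above, MQH, is a closed-form correlation for the pre-flashover upper-layer temperature rise. The sketch below uses the Karlsson/Quintiere form of the correlation as an assumption (it should be checked against the source actually used), with illustrative room and fire values, not the thesis's test data.

```python
def mqh_delta_t(q_kw, a_o, h_o, h_k, a_t):
    """MQH upper-layer temperature rise [K].
    q_kw: heat release rate [kW]; a_o: opening area [m^2]; h_o: opening
    height [m]; h_k: effective conduction coefficient [kW/m^2K];
    a_t: total enclosure surface area excluding the opening [m^2]."""
    return 6.85 * (q_kw**2 / (a_o * h_o**0.5 * h_k * a_t)) ** (1.0 / 3.0)

# Normal-sized room with a door opening, roughly like the tests described.
dT = mqh_delta_t(q_kw=500.0, a_o=1.6, h_o=2.0, h_k=0.03, a_t=50.0)
print(dT)  # temperature rise above ambient, in kelvin
```

The WCFM replaces this single correlation with an energy balance whose wall-loss term depends on the lumped-heat or semi-infinite assumption, which is why the two methods diverge early in a test.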
APA, Harvard, Vancouver, ISO, and other styles
13

Galvin, Geordie. "Comparison of on-pond measurement and back calculation of odour emission rates from anaerobic piggery lagoons." University of Southern Queensland, Faculty of Engineering and Surveying, 2005. http://eprints.usq.edu.au/archive/00001426/.

Full text
Abstract:
Odours are emitted from numerous sources and can form a natural part of the environment. The sources of odour range from natural to industrial, and an odour can be perceived by the community depending on a number of factors: frequency, intensity, duration, offensiveness and location (FIDOL). In other words, how strong an odour is, at what level it becomes detectable, how long it can be smelt, whether the odour is judged acceptable or unacceptable by the receptor (residents), and where the odour is smelt. Intensive livestock operations cover a wide range of animal production enterprises, all of which emit odours. Essentially, intensive livestock in Queensland, and to a certain extent Australia, refers to piggeries, feedlots and intensive dairy and poultry operations. Odour emissions from these operations can be a significant concern when the distance to nearby residents is small enough that odour from the operations is detected. The distance to receptors is a concern for intensive livestock operations as it may hamper their ability to develop new sites or expand existing ones. The piggery industry in Australia relies upon anaerobic treatment to treat its liquid wastes. These earthen lagoons treat liquid wastes through degradation via biological activity (Barth 1985; Casey and McGahan 2000). As these lagoons emit up to 80 per cent of the odour from a piggery (Smith et al., 1999), it is imperative for the piggery industry that odour be better quantified. Numerous methods have been adopted throughout the world for the measurement of odour, including trained field sniffers, electronic noses, olfactometry and instrumental methods such as gas chromatography. Although all of these methods can be used, olfactometry is currently deemed the most appropriate method for accurate and repeatable determination of odour.
This is due to the standardisation of olfactometry through the Australian/New Zealand Standard for Dynamic Olfactometry, and to the fact that olfactometry uses a standardised panel of "sniffers", which tends to give a repeatable indication of odour concentration. This is important because electronic measures often cannot relate odour back to the human nose, which is the ultimate assessor of odour. The way in which odour emission rates (OERs) from lagoons are determined is subject to debate. Currently the most commonly used methods are direct and indirect methods. Direct methods refer to placing enclosures on the ponds to measure the emissions, whereas indirect methods refer to taking downwind samples on or near a pond and calculating an emission rate. Worldwide, the odour community is currently divided into two camps that disagree on how to directly measure odour: those who use the UNSW wind tunnel or similar devices (Jiang et al., 1995; Byler et al., 2004; Hudson and Casey 2002; Heber et al., 2000; Schmidt and Bicudo 2002; Bliss et al., 1995), and those who use the USEPA flux chamber (Gholson et al., 1989; Heber et al., 2000; Feddes et al., 2001; Witherspoon et al., 2002; Schmidt and Bicudo 2002; Gholson et al., 1991; Kienbusch 1986). The majority of peer-reviewed literature shows that static chambers such as the USEPA flux chamber under-predict emissions (Gao et al., 1998b; Jiang and Kaye 1996), and on this basis the literature recommends wind-tunnel-type devices as the most appropriate method of determining emissions (Smith and Watts 1994a; Jiang and Kaye 1996; Gao et al., 1998a). Based on these reviews, it was decided to compare the indirect STINK model (Smith 1995) with the UNSW wind tunnel to assess the appropriateness of the methods for determining odour emission rates for area sources. The objective of this project was to assess the suitability of the STINK model and the UNSW wind tunnel for determining odour emission rates from anaerobic piggery lagoons.
In particular, it examined whether the model compared well with UNSW wind tunnel measurements from the same source, the overall efficacy of the model, and the relationship between source footprint and predicted odour emission rate.
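The direct (wind tunnel) measurement described above is commonly reduced to arithmetic: the emission rate is the odour concentration of the outlet air times the air flow through the tunnel, scaled from the tunnel footprint to the whole pond. All numbers in this sketch are invented for illustration.

```python
# Hypothetical wind-tunnel measurement of a lagoon surface.
odour_conc = 800.0    # outlet odour concentration, OU/m^3 (from olfactometry)
tunnel_flow = 0.025   # air flow through the tunnel, m^3/s
tunnel_area = 0.5     # pond surface covered by the tunnel footprint, m^2
pond_area = 4000.0    # whole lagoon surface, m^2

specific_oer = odour_conc * tunnel_flow / tunnel_area   # OU/(m^2 s)
pond_oer = specific_oer * pond_area                     # OU/s for the lagoon
print(specific_oer, pond_oer)
```

An indirect approach such as the STINK model works the other way around: it back-calculates the emission rate that best explains concentrations measured downwind, which is why the two can disagree when the source footprint assumption is wrong.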
APA, Harvard, Vancouver, ISO, and other styles
14

Tong, Bo. "More accurate two sample comparisons for skewed populations." Diss., Kansas State University, 2016. http://hdl.handle.net/2097/35783.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Haiyan Wang
Various tests have been created to compare the means of two populations in many scenarios and applications. The two-sample t-test, the Wilcoxon rank-sum test and the bootstrap-t test are commonly used methods. However, methods for skewed two-sample data sets are not well studied. In this dissertation, several existing two-sample tests were evaluated and four new tests were proposed to improve test accuracy under moderate sample size and high population skewness. The proposed work starts with the derivation of a first-order Edgeworth expansion for the test statistic of the two-sample t-test. Using this result, new two-sample tests based on the Cornish-Fisher expansion (TCF tests) were created for both cases of common variance and unequal variances. These tests can account for population skewness and give more accurate test results. We also developed three new tests based on three transformations (T_i tests, i = 1, 2, 3) for the pooled case, which can be used to eliminate the skewness of the studentized statistic. In this dissertation, some theoretical properties of the newly proposed tests are presented. In particular, we derived the order of type I error rate accuracy of the pooled two-sample t-test based on normal approximation (the TN test), and of the TCF and T_i tests. We proved that these tests give the same theoretical type I error rate under skewness. In addition, we derived the power function of the TCF and TN tests as a function of the population parameters. We also provided the detailed conditions under which the theoretical power of the two-sample TCF test is higher than that of the two-sample TN test. Results from extensive simulation studies and real data analysis are also presented in this dissertation. The empirical results further confirm our theoretical results. Compared with commonly used two-sample parametric and nonparametric tests, our new tests (TCF and T_i) provide the same empirical type I error rate but higher power.
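One of the baseline methods named above, the two-sample bootstrap-t test, can be sketched on skewed synthetic data. This is not the dissertation's TCF or T_i procedure; it resamples each group after recentring at the pooled mean to approximate the null distribution of the t statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(0.0, 1.0, size=30)   # skewed sample 1 (synthetic)
y = rng.lognormal(0.0, 1.0, size=35)   # skewed sample 2, same distribution

def t_stat(a, b):
    """Welch-style two-sample t statistic."""
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1)/a.size + b.var(ddof=1)/b.size)

t_obs = t_stat(x, y)
# Impose H0 by centring each sample at the pooled mean, then resample.
pooled = np.concatenate([x, y]).mean()
xc, yc = x - x.mean() + pooled, y - y.mean() + pooled
t_boot = np.array([t_stat(rng.choice(xc, x.size), rng.choice(yc, y.size))
                   for _ in range(2000)])
p = np.mean(np.abs(t_boot) >= abs(t_obs))   # two-sided bootstrap p-value
print(p)
```

Because the bootstrap null distribution inherits the samples' skewness, this test keeps its type I error rate closer to nominal than the normal-approximation t-test, which is the comparison axis of the dissertation.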
APA, Harvard, Vancouver, ISO, and other styles
15

Nestle, Ingrid [Verfasser]. "The costs of climate change in the agricultural sector : a comparison of two calculation approaches / Ingrid Nestle." Flensburg : Zentrale Hochschulbibliothek Flensburg, 2012. http://d-nb.info/1028080921/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Costello, Margaret. "A comparison of three educational strategies for the acquisition of medication calculation skills among baccalaureate nursing students /." Access resource online, 2010. http://scholar.simmons.edu/handle/10090/12577.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Brothers, Michael. "A comparison of different methods for calculating tangent-stiffness matrices in a massively parallel computational peridynamics code." Thesis, The University of Texas at San Antonio, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1550324.

Full text
Abstract:

In order to maintain the quadratic convergence properties of the first-order Newton's method in quasi-static nonlinear analysis of solid structures, it is crucial to obtain accurate, algorithmically consistent tangent-stiffness matrices. For an extremely small class of nonlinear material models, these consistent tangent-stiffness operators can be derived analytically; however, most often in practice, they are found through numerical approximation of derivatives. A goal of the study described in this thesis was to establish the suitability of an under-explored method for computing tangent-stiffness operators, referred to here as 'complex-step'. Compared are four methods of numerical derivative calculation: automatic differentiation, complex-step, forward finite difference, and central finite difference, in the context of tangent-stiffness matrix calculation in a massively parallel computational peridynamics code. The complex-step method was newly implemented in the peridynamics code for the purpose of this comparison. The methods were compared through in situ profiling of the code for Jacobian accuracy, solution accuracy, speed, efficiency, Newton's method convergence rate and parallel scalability. The performance data was intended to serve as a practical guide for code developers and analysts faced with choosing which method best suits the needs of their application code. The results indicated that complex-step produces Jacobians very similar, as measured by a low l2 norm of element-wise difference, to those from automatic differentiation. The values of this accuracy metric computed for forward finite difference and central finite difference indicated orders of magnitude worse Jacobian accuracy than complex-step, but convergence study results showed that convergence rate and solution were not strongly affected.
Ultimately it was speculated that further studies on the effect of Jacobian accuracy may better accompany experiments conducted on plastic material models or the evaluation of approximate and quasi-Newton methods.
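The complex-step method at the centre of this comparison fits in one line: for an analytic function f and a tiny step h, f'(x) ≈ Im(f(x + ih)) / h, with no subtractive cancellation. The sketch below contrasts it with a forward finite difference on a simple test function (an illustration, not the thesis's peridynamics force routines).

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """f'(x) via the imaginary part of f evaluated at x + i*h."""
    return np.imag(f(x + 1j * h)) / h

def forward_difference(f, x, h=1e-8):
    """Classic forward finite difference, limited by cancellation error."""
    return (f(x + h) - f(x)) / h

f = lambda x: np.exp(x) * np.sin(x)
exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))   # analytic derivative at 1

cs_err = abs(complex_step_derivative(f, 1.0) - exact)
fd_err = abs(forward_difference(f, 1.0) - exact)
print(cs_err, fd_err)   # complex-step reaches machine precision
```

Because the step h can be made absurdly small without cancellation, the complex-step Jacobian matches automatic differentiation almost exactly, which is the result the abstract reports.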

APA, Harvard, Vancouver, ISO, and other styles
18

Semprini, Elvio, Patrizia Cafarelli, Adriana De Stefanis, and Anthony A. G. Tomlinson. "Competitive sorption of toluene and acetone on H-ZSM5 zeolite: comparison between molecular simulation calculation and experimental results." Diffusion Fundamentals 6 (2007) 69, pp. 1-2, 2007. https://ul.qucosa.de/id/qucosa%3A14249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Röring, Johan. "Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-135910.

Full text
Abstract:
Volatility is a common risk measure in the field of finance that describes the magnitude of an asset's up and down movement. From only being a risk measure, volatility has become an asset class of its own, and volatility derivatives enable traders to get an isolated exposure to an asset's volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volatility, which are used as the strike price for a contract. This thesis will manage that difficulty and estimate strike prices with several different models. I will describe how the variance strike for a variance swap can be estimated with a theoretical replicating scheme, and how the result can be manipulated to obtain the volatility strike, a technique that requires Laplace transformations. The famous Black-Scholes model is described, along with how it can be used to estimate a volatility strike for volatility swaps. A new model that uses the Greeks vanna and vomma is described and put to the test. The thesis will also cover a couple of stochastic volatility models, Exponentially Weighted Moving Average (EWMA) and Generalized Autoregressive Conditional Heteroskedasticity (GARCH). The models' estimations are compared to the realized volatility. A comparison of the models' performance over 2015 is made, as well as a more extensive backtesting for Black-Scholes, EWMA and GARCH. The GARCH model performs the best in the comparison, and the model that uses vanna and vomma gives a good result. However, because of limited data, one cannot fully conclude that the model that uses vanna and vomma can be used when calculating the fair volatility strike for a volatility swap.
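The simplest of the models compared above, EWMA, forecasts variance by the recursion sigma2_t = lambda * sigma2_{t-1} + (1 - lambda) * r_{t-1}^2. The sketch below assumes the RiskMetrics-style lambda = 0.94 and synthetic daily returns; neither is from the thesis.

```python
import numpy as np

# Synthetic daily log returns (made up for illustration).
rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, size=250)

lam = 0.94
sigma2 = returns[:20].var()              # initialise from an early window
for r in returns[20:]:
    # EWMA recursion: old variance decays, newest squared return enters.
    sigma2 = lam * sigma2 + (1 - lam) * r**2

annualised_vol = np.sqrt(252 * sigma2)   # scale daily variance to annual vol
print(annualised_vol)
```

The resulting annualised volatility is one candidate for the fair volatility strike of a swap; GARCH replaces the fixed lambda with fitted parameters and a long-run variance term, which is why it tends to perform better in the thesis's backtests.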
APA, Harvard, Vancouver, ISO, and other styles
20

Nezbeda, Jiří. "Porovnání ceny dopravní stavby se skutečně vynaloženými náklady v různém stupni rozestavěnosti." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-261286.

Full text
Abstract:
This diploma thesis deals with the calculation of the cost of building work in order to create a price, which at the contract stage becomes the selling price, while the costs are fixed as a budget. The subject of this work is to monitor the development of costs over time so that they do not exceed the budget and the construction does not end up with a negative result; then to set the price using the company's own methods (itemized budget, calculation according to budget indicators) and to compare the prices and costs thus obtained from different phases of construction and determine the differences. The work monitors and compares the price of a construction contract and its costs at the individual construction stages. In the practical part, the method of direct comparison of cost and price values over time is used, in the form of various outputs from the controlling program and outputs from the compiled itemized budget and the calculation of the construction. By analyzing the differences between these costs and their evolution over time, through a more detailed examination of the itemized budgets and cost calculations, the origin of these deviations is determined. In conclusion, measures are proposed for a particular transport structure which was handled by the author as "master and co-ordinator of the construction".
APA, Harvard, Vancouver, ISO, and other styles
21

Chan, Yin-wai Pamela. "Number facts knowledge and errors in paper-and-pencil calculation: a comparison between dyslexic and non-dyslexic Chinese children." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B29791297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Göransson, Andréas. "Fatigue life analysis of weld ends : Comparison between testing and FEM-calculations." Thesis, Linköpings universitet, Hållfasthetslära, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-108988.

Full text
Abstract:
The thesis examines the fatigue life of weld ends, where very little usable research has previously been conducted, even though the weld ends are often the critical parts of the weld. Knowing the fatigue life of welds is essential to using them most efficiently. The report is divided into two parts; in the first, the different calculation methods used today at Toyota Material Handling are examined and compared. Based on the results of that analysis and on what is used most today, the effective notch approach is the method used in part two. To validate the calculation methods and models used, fatigue testing of the welded test specimens was conducted together with a stress test. New modelling methods for the weld ends that coincide with the test results were developed in the finite element software Abaqus. A new way of modelling the weld ends for the effective notch method is also proposed. By using a notch radius of 0.2 mm and rounded weld ends, the calculated fatigue life better matches the life of the real weld ends.
APA, Harvard, Vancouver, ISO, and other styles
23

Barchyn, Thomas Edward, and University of Lethbridge Faculty of Arts and Science. "Field-based aeolian sediment transport threshold measurement : sensors, calculation methods, and standards as a strategy for improving inter-study comparison." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Geography, 2010, 2010. http://hdl.handle.net/10133/2616.

Full text
Abstract:
Aeolian sediment transport threshold is commonly defined as the minimum wind speed (or shear stress) necessary for wind-driven sediment transport. Threshold is a core parameter in most models of aeolian transport. Recent advances in methodology for field-based measurement of threshold show promise for improving parameterizations; however, investigators have varied in choice of method and sensor. The impacts of modifying measurement system configuration are unknown. To address this, two field tests were performed: (i) comparison of four piezoelectric sediment transport sensors, and (ii) comparison of four calculation methods. Data from both comparisons suggest that threshold measurements are non-negligibly modified by measurement system configuration and are incomparable. A poor understanding of natural sediment transport dynamics suggests that development of calibration methods could be difficult. Development of technical standards was explored to improve commensurability of measurements. Standards could assist future researchers with data syntheses and integration.
xi, 108 leaves : ill. ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
24

Hopkins, Austin Jeremy. "A Comparison of DEM-based methods for fluvial terrace mapping and sediment volume calculation: Application to the Sheepscot River Watershed, Maine." Thesis, Boston College, 2014. http://hdl.handle.net/2345/bc-ir:104046.

Full text
Abstract:
Thesis advisor: Noah P. Snyder
Thesis advisor: Gail Kineke
Fluvial terraces form in both erosional and depositional landscapes and are important recorders of land-use, climate, and tectonic history. Terrace morphology consists of a flat surface bounded by valley walls and a steep-sloping scarp adjacent to the river channel. Combining these defining characteristics with high-resolution digital elevation models (DEMs) derived from airborne light detection and ranging (lidar) surveys, several methods have been developed to identify and map terraces. This research introduces a newly developed objective terrace mapping method and compares it with three existing DEM-based techniques to determine which is most applicable over entire watersheds. This work also tests multiple methods that use lidar DEMs to quantify the thickness and volume of fill terrace deposits identified upstream of dam sites. The preliminary application is to the Sheepscot River watershed, Maine, where strath and fill terraces are present and record Pleistocene deglaciation, Holocene eustatic forcing, and Anthropocene land-use change. Terraces were mapped at four former dam sites along the river using four separate methodologies and compared to manually delineated area. The methods tested were: (1) edge detection using MATLAB, (2) feature classification algorithms developed by Wood (1996), (3) spatial relationships between interpreted terraces and surrounding natural topography (Walter et al., 2007), and (4) the TerEx terrace mapping toolbox developed by Stout and Belmont (2013). Thickness and volume estimates of fill sediment were calculated at two of the study sites using three DEM-based models and compared to in situ data collected from soil pits, cut bank exposures, and ground penetrating radar surveys. The results from these comparisons served as the basis for selecting methods to map terraces throughout the watershed and quantify fill sediment upstream of current and historic dam sites. 
Along the main stem and West Branch of the Sheepscot River, terraces were identified along the longitudinal profile of the river using an algorithm developed by Finnegan and Balco (2013), which computes the elevation frequency distribution at regularly spaced cross-sections normal to the channel, and then mapped using the feature classification (Wood, 1996) method. For terraces upstream of current or historic dam sites, thickness and volume estimates were calculated using the two best-performing datum surfaces. If all analyzed terraces are composed of impounded sediment, these DEM-based results suggest that terraces along the main stem and West Branch of the Sheepscot River potentially contain up to 1.5 × 10⁶ m³ of fill. These findings suggest powerful new ways to quickly analyze landscape history over large regions using high-resolution lidar DEMs while relying less heavily on detailed and costly field data collection.
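The fill-volume idea described above — integrating the thickness between a terrace surface and a datum surface over lidar DEM cells — can be sketched as follows. This is an illustrative sketch only; the array inputs and the clipping of negative thicknesses are assumptions, not the thesis' actual models.

```python
import numpy as np

def fill_volume(terrace_dem, datum_dem, cell_size):
    """Estimate fill-sediment volume as the thickness between a terrace
    surface and a datum surface, integrated over the DEM cells.
    Cells where the datum lies above the terrace contribute nothing;
    NaN cells (no data) are ignored. Returns volume in map units cubed."""
    thickness = np.clip(terrace_dem - datum_dem, 0.0, None)
    return float(np.nansum(thickness) * cell_size ** 2)
```

The choice of datum surface dominates the estimate, which is why the study compares several datum-surface models against soil pits and ground-penetrating-radar data.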
Thesis (MS) — Boston College, 2014
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Geology and Geophysics
APA, Harvard, Vancouver, ISO, and other styles
25

Franklin, Michael J. "Calculation, comparison and modeling of single channel proton flux across reconstituted wildtype and mutant F₀ of the F₁F₀ ATPase from Escherichia coli." Connect to online resource - WSU on-site and authorized users, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
26

Rodríguez, Moronta Francisco Manuel, and Lucas Judith Segurola. "Comparison of ASTM and BSI Standards for the calculation of fracture energy of adhesives : Design of a fixture and testing of DCB specimens." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11221.

Full text
Abstract:
Modern synthetic structural adhesives are finding a place in the drive to improve the fuel efficiency of automobiles through weight reduction of the structure. One of the most important properties of the adhesives used in this type of joining is the fracture energy. A literature study is carried out to gain a broader understanding of the methods used for the determination of the fracture energy of adhesives. One of the most common experimental methods relies on the use of the Double Cantilever Beam (DCB) test specimen. International standards for the DCB test are studied. Prediction of the fracture energy using Linear Elastic Fracture Mechanics and the J-integral approach, a closed-form solution and finite element methods are also examined. Differences in these methods are attributed in part to the nonlinear behaviour of the adhesive being studied. It is decided to use the results of a non-standard DCB test and the 40% error calculated by a theoretical standard method as a point of reference. A comprehensive comparison of the American Society for Testing and Materials (ASTM) and British Standards Institution (BSI) standards for the determination of the fracture energy of adhesives is undertaken. Limitations and overlaps in the standards are identified. A DCB specimen is recommended and an experimental procedure that satisfies elements of one or both standards is suggested, along with several small additions such as using a wire to assist in the application of the adhesive and the use of cameras to track the crack growth. In addition, a new fixture to allow testing of the recommended DCB specimen according to the standards is designed and manufactured. Materials for the preparation of test specimens are ordered and, based on available laboratory time, a single DCB test specimen is made for the purposes of testing a rubber-based automotive structural adhesive. The specimen is tested using the recommended experimental procedure with the new fixture.
The data produced during the test are collected and interpreted using the methodology proposed in the BSI standard for the calculation of the fracture energy of the selected rubber-based adhesive. Several challenges found during this process are identified. The fracture energy determined from the standard-based experiment ranges from 140 J/m² to 1380 J/m² depending on the methodology used. The values of the fracture energy determined from the standard-based DCB experiment are then compared to the fracture energy seen in the nonstandard-based experiment and to the standard-based numerical test seen in the literature. It is shown that when the simple beam theory method is used, the difference between the results of the standard-based experiment and the nonstandard-based experiment can be confirmed to lie within the 40% error observed in the literature. Finally, the contributions of the project are summarized and recommendations for future work are made. In particular, the lack of information given in the BSI standard on calculating the fracture energy, and the multiple test specimens required by the standard, must be addressed in order to support the obtained results and conclusions.
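For reference, the simple beam theory mentioned above gives the mode-I fracture energy of a DCB specimen in closed form. This is a textbook sketch under linear-elastic assumptions; the neglect of shear deformation and root-rotation corrections is an assumption, and this is not the standards' full data-reduction procedure.

```python
def dcb_fracture_energy_sbt(P, a, b, h, E):
    """Mode-I fracture energy of a DCB specimen from simple beam theory:
        G_IC = 12 * P^2 * a^2 / (E * b^2 * h^3)
    P: load at crack growth [N], a: crack length [m], b: width [m],
    h: thickness of one arm [m], E: substrate Young's modulus [Pa].
    Returns G_IC in J/m^2."""
    return 12.0 * P ** 2 * a ** 2 / (E * b ** 2 * h ** 3)
```

Corrected beam theory adds an effective crack-length offset to account for root rotation, which is one source of the method-to-method spread the abstract reports.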
APA, Harvard, Vancouver, ISO, and other styles
27

Zhu, Ying. "A Comparison of Calculation by Real-Time and by Linear-Response Time-Dependent Density Functional Theory in the Regime of Linear Optical Response." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1460554444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Hauser, Thomas, Jennifer Adam, and Henry Schulz. "Comparison of calculated and experimental power in maximal lactate-steady state during cycling." Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-148263.

Full text
Abstract:
Background: The purpose of this study was the comparison of the calculated (MLSSC) and experimental power (MLSSE) in maximal lactate steady state (MLSS) during cycling. Methods: 13 male subjects (24.2 ± 4.76 years, 72.9 ± 6.9 kg, 178.5 ± 5.9 cm, V̇O2max: 60.4 ± 8.6 ml min⁻¹ kg⁻¹, V̇Lamax: 0.9 ± 0.19 mmol l⁻¹ s⁻¹) performed a ramp test for determining V̇O2max and a 15 s sprint test for measuring the maximal glycolytic rate (V̇Lamax). All tests were performed on a Lode cycle ergometer. V̇O2max and V̇Lamax were used to calculate MLSSC. For the determination of MLSSE, several 30 min constant-load tests were performed. MLSSE was defined as the highest workload that can be maintained without an increase of blood lactate concentration (BLC) of more than 0.05 mmol l⁻¹ min⁻¹ during the last 20 min. Power in the following constant-load test was set higher or lower depending on BLC. Results: MLSSE and MLSSC were measured at 217 ± 51 W and 229 ± 47 W respectively, while the mean difference was −12 ± 20 W. Orthogonal regression was calculated with r = 0.92 (p < 0.001). Conclusions: The difference of 12 W can be explained by the biological variability of V̇O2max and V̇Lamax. The knowledge of both parameters, as well as their individual influence on MLSS, could be important for establishing training recommendations, which could lead to either an improvement in V̇O2max or V̇Lamax by performing high-intensity or low-intensity exercise training, respectively. Furthermore, the validity of the V̇Lamax test should be the focus of further studies.
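The MLSSE criterion — BLC rising by no more than 0.05 mmol l⁻¹ min⁻¹ over the last 20 min of a 30 min constant-load test — can be checked mechanically. The sketch below fits a least-squares slope to the late samples; using a regression slope is one plausible reading of the criterion, not necessarily the authors' exact procedure.

```python
def is_mlss_steady(times_min, blc, window_start=10.0, max_slope=0.05):
    """Return True if blood lactate concentration rises by no more than
    max_slope (mmol/l per minute) over the samples taken after
    window_start minutes, using a least-squares slope estimate.
    times_min: sample times [min]; blc: concentrations [mmol/l]."""
    pts = [(t, c) for t, c in zip(times_min, blc) if t >= window_start]
    n = len(pts)
    mean_t = sum(t for t, _ in pts) / n
    mean_c = sum(c for _, c in pts) / n
    slope = (sum((t - mean_t) * (c - mean_c) for t, c in pts)
             / sum((t - mean_t) ** 2 for t, _ in pts))
    return slope <= max_slope
```

With this check, the experimental protocol amounts to a bisection on workload: raise power if the test is steady, lower it if not.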
APA, Harvard, Vancouver, ISO, and other styles
29

Beijer, Anton, and Magnus Lindholm. "Beräkning av pumpkapacitet samt konstruktion av pumpfundament." Thesis, Högskolan i Skövde, Institutionen för teknik och samhälle, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-8115.

Full text
Abstract:
Ett undersöknings- och utvecklingsarbete för att lösa ett problem med att dränkbara pumpar i ett vattenavrinningssystem gick sönder med perioder på två år i genomsnitt, utfördes i samarbete med Cementa AB i Skövde. Orsak till pumpars haveri söktes och fanns vara bristande rutiner och kunskap om det underhåll pumparna krävde. För att lösa detta problem utvecklades riktlinjer för nyinköp av torruppställda pumpar för att möjliggöra kontinuerligt underhåll. Då möjligheter för placering av torruppställd pump saknades utvecklades ett pumpfundament för placering av torruppställd pump. Krav för utvecklingsarbetet togs fram i samarbete med Cementas underhållsavdelning och teoretisk dimensionering av dränkbara länspumpars dåvarande volymflödeskapacitet utfördes. Krav utvärderades och viktades med hjälp av parvis jämförelse. Dimensionering och kontroll av hållfasthet för utvecklat pumpfundamentet utfördes med hjälp av Finita Element Analyser i programvaran Pro/Engineer Creo 1.0 Mechanica. Kontroll av hållfasthet i infästning av pumpfundamentet samt svetsfogar utfördes analytiskt. Arbetet resulterade i en rekommendation till Cementa AB i Skövde att ta in offerter på nya torruppställda pumpar med hjälp av utvecklade riktlinjer och att tillverka det pumpfundament som tagits fram inom ramen för examensarbetet, för att placera nya pumpar på. Att noggrant följa de underhållsinstruktioner som pumpar har och att underlätta för personal att utföra detta underhåll ansågs kunna bidra till att pumpar skulle få en längre och mer ekonomisk livslängd.
A development project to solve the problem of submersible pumps in a run-off system breaking down every two years on average was performed in collaboration with Cementa AB in Skövde. The reason for the pump breakdowns was investigated and found to be inadequate procedures and missing knowledge of the maintenance required on the pumps. To solve this problem, guidelines for the purchase of new dry-pit pumps were developed to allow for continuous maintenance. As the possibility of placing a dry-pit pump did not exist at Cementa, a pump foundation was developed. Requirements for the development work were produced in cooperation with Cementa's maintenance department, and theoretical dimensioning of the submersible bilge pumps' volume flow capacity was performed. Requirements were evaluated and weighted using pairwise comparison. The design and strength verification of the developed pump foundation were performed using finite element analysis in the software Pro/Engineer Creo 1.0 Mechanica. The strength of the attachment of the pump foundation and of the welds was verified analytically. The work resulted in a recommendation to Cementa AB in Skövde to bring in quotes on new dry-pit pumps using the developed guidelines and to manufacture the pump foundation developed within the framework of the thesis. Cementa was also recommended to carefully follow the maintenance instructions for the pumps and to make it easier for staff to perform this maintenance, to ensure that new pumps would have a longer and more economical lifetime.
APA, Harvard, Vancouver, ISO, and other styles
30

Berglund, Martin. "Ekonomisk jämförelse av prefabricerad betong och korslimmat trä-Totalkostnad av materialen i stommarna." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-92366.

Full text
Abstract:
Byggbranschen i Sverige har ett mål att till 2045 uppnå noll nettoutsläpp av växthusgaser. I dagsläget så byggs det vid större byggnationer mestadels med betongstomme, vilket har ett högt koldioxidutsläpp vid nyproduktion. Detta medför att miljömålen inte kommer uppnås om inte andra alternativ till byggnadsmaterial börjar användas i större utsträckning. Det material som är bästa alternativet till betong i flerbostadshus är KL-trä tack vare dess hållfasthetsförmåga jämfört med vanligt trä. Problemet med KL-trä är att det har en så pass mycket dyrare produktionskostnad att det fortsätter väljas betongstommar i flerbostadshus. Om byggbranschen skall ha någon chans att klara kraven som ställts för år 2045 med noll nettoutsläpp av växthusgaser så måste kostnaden för KL-trä alltså dras ner, för att dess användning ska påskyndas. Syftet med denna studie är att ta fram den exakta totalkostnadsskillnaden mellan en prefabricerad betongstomme och en KL-trästomme, samtidigt som byggnadsarean och strukturen på stommarna är så lika som möjligt. Målet var att bevisa hur långt KL-träet har kvar till att konkurrera med betong i flerbostadshus ekonomiskt. För att göra jämförelsen togs en referensbyggnad fram som redan var utförd i betong, som sedan dimensionerades om till KL-trä för en rättvis jämförelse. Dimensioneringen skedde genom lastsummering gjord för hand. Dessa laster används sedan i beräkningsprogrammet Calculatis för varje konstruktionsdel, som tar fram de dimensioner som krävs för att klara hållfasthetskraven. Med en ny dimensionerad trästomme togs två materiallistor fram för de olika stommarna och jämfördes i kalkylprogrammet Bidcon för att få fram en totalkostnadsdifferens. Denna studie har fokuserat på att jämföra kostnaderna för stommaterialen för en byggnad i KL-trä och en i prefabricerad betong.
Icke bärande väggar, takkonstruktion samt husunderbyggnad tillhör inte stommen och kommer därmed inte att jämföras. Studien gav ett resultat som visade att det är cirka 42% dyrare att bygga med en KL-trästomme än en prefabricerad betongstomme i ett flerbostadshus. Mellanbjälklaget är den dyrare komponenten, medan exempelvis andra delar som balkong och bärande vägg ändå visar sig vara billigare. Enligt BBR måste särskilda ljud- och brandkrav uppfyllas i lägenhetshus. För att uppnå dessa behöver det läggas till ljudisolering i de KL-element som är lägenhetsavskiljande och brandgipsskivor i hela stommen med KL-trä. Detta leder till att KL-trästommen generellt får en större tjocklek jämfört med betongstommen och även lite extra kostnader att ha i åtanke, även då det bärande materialet är mindre i KL-trästommen. Detta leder då till att lägenheternas boarea i KL-träbyggnaden blir något mindre än i betongbyggnaden. Slutsatsen är att KL-trä inte är ett ekonomiskt alternativ till prefabricerad betong enligt Bidcons databaser när denna studie genomförts och är 42% dyrare tack vare att mellanbjälklaget har så hög kostnad.
The construction industry in Sweden has a goal of achieving zero net emissions of greenhouse gases by 2045. At present, larger constructions are mostly built with a concrete frame, which has high carbon dioxide emissions during new production. This means that the environmental goals will not be achieved unless other alternative building materials are used to a greater extent. The material that is the best alternative to concrete in apartment buildings is cross-laminated timber (CLT), due to its durability compared to regular timber. The problem with CLT is that its production cost is so much higher that concrete frames continue to be chosen in apartment buildings. If the construction industry is to have any chance of meeting the requirements set for the year 2045, with zero net emissions of greenhouse gases, the cost of CLT must therefore be reduced in order for its use to be accelerated. The purpose of this study is to produce the exact total cost difference between a prefabricated concrete frame and a CLT frame, while keeping the building area and structure of the frames as similar as possible. The goal was to show how far CLT has to go financially before it can compete with concrete in apartment buildings. To make the comparison, a reference building already executed in concrete was chosen and redimensioned in CLT for a fair comparison. The dimensioning was done by summarizing all loads by hand. These loads were then used for every part of the frame in the calculation program Calculatis to get the dimensions required to meet the durability demands. With the newly dimensioned wooden frame, two material lists were produced for the different frames and compared in the Bidcon calculation program to obtain a total cost difference. This study has focused on comparing the costs of the frame materials for a building in CLT and one in prefabricated concrete.
Non-load-bearing walls, the roof construction and the ground structure do not belong to the frame, and are therefore not included in the comparison. The study showed that it is about 42% more expensive to build with a CLT frame than with a prefabricated concrete frame in a 7-storey apartment building. The intermediate floor is the more expensive component, while other parts such as balconies and load-bearing walls still proved to be cheaper. According to BBR, special noise and fire requirements must be met in apartment buildings. To achieve these, sound insulation needs to be added to the CLT elements that separate apartments, and fire plasterboards throughout the CLT frame. This leads to the CLT frame generally having a greater thickness compared to the concrete frame, and also a few extra costs to keep in mind, even though there is less load-bearing material in the CLT frame. This means that the living space of the apartments in the CLT building is slightly smaller than in the concrete building. The conclusion is that CLT is not an economical alternative to prefabricated concrete according to Bidcon's databases at the time of this study, being 42% more expensive due to the fact that the intermediate floor has such a high cost.
APA, Harvard, Vancouver, ISO, and other styles
31

Kielbassa, Janice. "Mathematical modelling of temperature effects on the life-history traits and the population dynamics of bullhead (Cottus gobio)." Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10181.

Full text
Abstract:
La température de l'eau joue un rôle majeur dans le cycle de vie des poissons. Dans un contexte de changement climatique global, le réchauffement peut avoir un impact fort sur la croissance, la fécondité et la survie. L'enjeu de cette thèse est la modélisation mathématique de l'influence de la température sur les traits d'histoire de vie d'une population de chabot (Cottus gobio) afin de faire de la prédiction à la fois au niveau individuel et populationnel. Les données expérimentales qui permettront de calibrer les modèles sont issues du bassin de la Drôme (France) et plus particulièrement du sous-bassin du Bez. Dans une première étape, il s'agit de développer un modèle de rétrocalcul qui peut être utilisé pour calculer les longueurs individuelles des chabots aux âges précédents à partir des données mesurées à la capture. Il s'agit, dans un deuxième temps, de développer un modèle de croissance dépendant de la température de l'eau qui sert à prédire la longueur moyenne des chabots à un âge donné. Enfin, il s'agit de passer de l'échelle de l'individu à celle de la population en prenant en compte tous les traits d'histoire de vie et leurs dépendances vis-à-vis de la température. Plus précisément, un modèle matriciel de type Leslie, à la fois dépendant du temps et de la température, structuré en classe d'âges est développé et utilisé pour prédire la dynamique de population sous différents scénario du réchauffement climatique
Water temperature plays a key role in the life cycle of fish. Therefore, increasing temperatures due to the expected climate change may have a strong impact on growth, fecundity and survival. The goal of this thesis is to model the impact of temperature on the life-history traits of a bullhead population (Cottus gobio) in order to make predictions both at individual and at population level. The models developed here are calibrated on experimental field data from a population living in the Bez River network (Drôme, France). First, a new back-calculation model is derived that can be used to compute individual fish body lengths at earlier ages from capture data. Next, a growth model is proposed that incorporates the water temperature and can be used to predict the mean length at a given age and temperature. Finally, the population is modelled as a whole by linking all life-history traits to temperature. For this purpose, a spatialised time- and temperature-dependent Leslie matrix model structured in age classes was used to predict the population dynamics under different temperature scenarios
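The age-classified Leslie projection underlying the population model can be sketched as follows. The matrix entries below are illustrative placeholders; in the thesis they are functions of water temperature and of the fitted life-history traits (growth, fecundity, survival).

```python
import numpy as np

def leslie_matrix(fecundities, survivals):
    """Build a Leslie matrix from per-age-class fecundities f_i and
    survival probabilities s_i (s_i = P(age class i -> i+1))."""
    n = len(fecundities)
    L = np.zeros((n, n))
    L[0, :] = fecundities          # reproduction into the first age class
    for i, s in enumerate(survivals):
        L[i + 1, i] = s            # aging with survival
    return L

# One time step of the dynamics: n_{t+1} = L @ n_t (illustrative numbers,
# not bullhead estimates).
L = leslie_matrix([0.0, 2.0, 4.0], [0.5, 0.3])
n0 = np.array([100.0, 20.0, 5.0])
n1 = L @ n0
```

A temperature-dependent version replaces the constant entries with functions of the temperature scenario at each time step, which is what lets the model be projected under different warming scenarios.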
APA, Harvard, Vancouver, ISO, and other styles
32

Michalík, Marek. "Simulace momentové charakteristiky asynchronního stroje." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-317126.

Full text
Abstract:
The first part of this master thesis deals with the basic theory of an induction motor and its principle of operation. It also includes theory about the higher harmonics of the magnetic field and how asynchronous and synchronous parasitic torques are created. Various ways to decrease the effect of these torques are suggested. These findings are then applied in a practical analytical calculation in the second part, in which all parameters of the motor are calculated from its dimensions, taken from the technical documentation. This is done for the fundamental and higher harmonics. After that, a model of this motor was created in the RMxprt program, which also calculated all parameters of the motor and produced its torque characteristic. The motor was also modelled in ANSYS Maxwell 2D, and additional simulations investigating the influence of harmonics on the torque characteristic were carried out in this software. The torque characteristic of the motor was also measured in the laboratory. All results were compared and evaluated.
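For the fundamental harmonic, the torque-slip characteristic that such analytical calculations produce is often approximated by Kloss's formula. This is a textbook sketch; the breakdown slip and torque values are assumptions, not the thesis' motor data, and the formula ignores the parasitic torques of the higher harmonics discussed in the thesis.

```python
def kloss_torque(s, s_breakdown, t_breakdown):
    """Induction-machine torque vs. slip from Kloss's simplified formula:
        T(s) = 2 * T_b / (s / s_b + s_b / s)
    where s_b is the breakdown slip and T_b the breakdown torque.
    Valid for s != 0 (at synchronous speed the torque is zero)."""
    return 2.0 * t_breakdown / (s / s_breakdown + s_breakdown / s)
```

The parasitic asynchronous torques of the harmonics appear as additional Kloss-like dips superimposed on this curve at slips corresponding to the harmonic synchronous speeds, which is why they can distort the run-up characteristic.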
APA, Harvard, Vancouver, ISO, and other styles
33

Lošáková, Jana. "Porovnání nákladů výstavby rodinného domu z klasických materiálů a z materiálů přírodních." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2016. http://www.nusl.cz/ntk/nusl-240414.

Full text
Abstract:
This diploma thesis is focused on comparing the construction cost of a house made of classic materials with that of an equivalent house made of natural building materials, and consequently on comparing the two construction variants. To perform these tasks, the natural building materials used were defined. Furthermore, the specific options for the use of these natural building materials, as well as the composition of the structures used, were introduced. The thesis concludes with a price calculation for the constructions made of natural building materials, which were not available in common budgeting software. The appendix of the thesis contains budgets for the first variant, where the house is made of natural building materials, and for the second variant, where the house is made of classic materials.
APA, Harvard, Vancouver, ISO, and other styles
34

Månsson, Victor, and Robin Lexander. "Livscykelkostnadsanalys för två typer av spillvattensystem." Thesis, Linnéuniversitetet, Institutionen för byggteknik (BY), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97242.

Full text
Abstract:
För att klara av den befolkningsökning som sker i städerna projekteras nybyggnadsområden. Innan byggnation av bostäder kan ske måste ett beslut tas om val av spillvattensystem. I nuläget saknas enkla hjälpmedel för att jämföra och göra val baserat på systemens totala kostnader under dess livslängd. Målet med detta arbete var att ta fram en beräkningsmall som kan göra en livscykelkostnadsanalys samt vara ett komplement vid projektering och val av spillvattensystem. Topografin och områdesspecifika kostnader bryts ner och förs in i beräkningsmallen för den sträcka där spillvattenledningen ska anläggas. Mallen kontrollerar därefter om områdets förutsättningar uppfyller de krav som finns för spillvattenledningar eller ej. Resultatet som erhålls av beräkningsmallen påvisar initiala, årliga och ackumulerade kostnader för de båda systemen. Skillnader och eventuella brytpunkter mellan ackumulerad kostnad för de båda systemen redovisas och kan användas som underlag vid val av spillvattensystem. Arbetet visade att det finns ett stort behov av stödjande hjälpmedel för att underlätta projektering och val av spillvattensystem.
In order to cope with the population increase occurring in the cities, new construction areas are being planned. Before the construction of housing can take place, a decision must be made regarding the choice of sewerage system. At present there are no simple tools to compare and make choices based on the systems' total lifetime costs. The aim of this thesis has been to develop a calculation template that can perform a life cycle cost analysis and serve as a complement in the planning and choice of sewerage systems. The topography and area-specific costs are broken down and entered into the calculation template for the stretch where the wastewater pipeline is to be constructed. The template then checks whether the area's conditions meet the requirements for wastewater pipelines or not. The result obtained from the calculation template shows the initial, annual, and accumulated costs for the two systems. Differences and possible breakpoints between the accumulated costs of the two systems are reported and can be used as a basis for decision. The work showed that there is a great need for supporting tools to facilitate the planning and decision making of water systems.
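The accumulated-cost comparison such a template performs can be sketched as follows. This is a minimal sketch in which discounting and reinvestment costs are simplified away (an assumption; a real LCC template would typically discount future costs to present value).

```python
def accumulated_costs(initial, annual, years):
    """Accumulated cost per year: the initial outlay plus the annual
    operating cost summed up to each year (no discounting)."""
    return [initial + annual * y for y in range(years + 1)]

def breakpoint_year(init_a, annual_a, init_b, annual_b, years):
    """First year in which system A's accumulated cost drops below
    system B's, or None if no breakpoint occurs within the horizon."""
    ca = accumulated_costs(init_a, annual_a, years)
    cb = accumulated_costs(init_b, annual_b, years)
    for y, (a, b) in enumerate(zip(ca, cb)):
        if a < b:
            return y
    return None
```

A breakpoint exists whenever the system with the higher initial cost has the lower annual cost; reporting that year gives the decision-maker the payback horizon directly.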
APA, Harvard, Vancouver, ISO, and other styles
35

Khayat, Joy. "Effet du calcul mental et de la comparaison de nombres sur la performance de mouvements complexes." Thesis, Lyon, 2019. https://n2t.net/ark:/47881/m6n87950.

Full text
Abstract:
Numbers and operations on numbers help shape our relationship to the environment. It is thus unexceptional that several studies aimed at understanding the mechanisms of numerical cognition in the human brain and in particular those concerning the neural basis of dyscalculia. These studies have suggested that mathematical representations may be rooted in bodily experiences and/or that numerical cognition and movement preparation may share similar cerebral mechanisms. Several results have suggested that the performance of numerical and arithmetical tasks may be influenced by segmental and/or whole-body movements. The reverse influence of such tasks on motor performance remained however to be fully clarified especially in the case of maximal-intensity complex movements. This possibility has been approached by two studies involving a total of 206 male students (undergraduate students) at the Faculty of Public Health of the Lebanese University (Beirut, Lebanon). A first study including two series of two experiments examined the effects of number reading and mental subtraction (complex) on the height of a squat vertical jump (SVJ) and on the response time of a manual-pointing movement (MPM). In each series these effects have been examined in separate experiments using numbers written as words and in Arabic digits. A second study examined the effect of different arithmetical tasks on the response time of a MPM. Three experiments (1-3) examined the effect of mental subtraction (complex) and, respectively, of: (1) mental addition (simple or complex), (2) mental multiplication (simple or complex) and (3) the comparison of dot sets and number comparison. Each number was written in Arabic. In both studies the obtained data have been analyzed using a multilevel linear mixed-effect model. 
The results of the first study have shown a moderate increase of SVJ height (although statistically significant: p < .05) after number reading as words and a clear-cut increase of both MPT and SVJ performance after mental subtraction with Arabic digits (p < .001). The results of the second study have shown a statistically significant improvement of MPM performance only after the complex calculations (p < .001) and after number comparison (p < .003). These results suggest that the relationship between an arithmetical task and the performance of a high-intensity movement is influenced by the numerical format. It was found that the use of Arabic digits (but not the use of numbers written as words or represented by dot sets) is a condition to a positive effect of an arithmetical task upon motor performance. The results also showed that this condition is not sufficient. Motor performance was found to be improved only after arithmetical tasks (with Arabic digits) favoring the use of procedural strategies and not by arithmetical tasks favoring the use of retrieval strategies (of arithmetical facts). With regard to the literature, the effects of the complex calculations (subtraction, addition and multiplication) and of number comparison, with Arabic notation, on motor performance may be explained by different mechanisms. The effect of complex calculations and number comparison, with Arabic digits, on motor performance might be linked to mechanisms of encoding and/or memorization specific to this numerical format. Attention to the optimal path of movement might also has been favored by the spatial representation of numbers used to realize complex additions or subtraction and, possibly, to compare numbers. The influence of complex calculation and of number comparison, with Arabic digits, on motor performance might also be driven by a possible involvement, during actual calculation and comparison, of motor regions in the brain
36

Turkovič, Matúš. "Porovnání návrhu plynem izolované rozvodny ve 2D a 3D prostředí ve fázi nabídky." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318400.

Full text
Abstract:
This thesis deals with the design of gas-insulated switchgear produced by ABB s.r.o. It describes the theoretical basics of project engineering in 2D and 3D environments, as well as the construction, layout, and properties of switchgear substations. The thesis also includes practical designs and a comparison of a gas-insulated switchgear proposal prepared in the 2D and 3D environments during the offer phase.
37

Gao, Xingliang. "Schwingungen von Offsetdruckmaschinen." Doctoral thesis, [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=968754791.

Full text
38

Houzar, Tomáš. "Analýza tepelné spotřeby objektu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-319291.

Full text
Abstract:
This master's thesis focuses on the analysis of thermal consumption in a building. It describes the current state of a family house and possible proposals for space heating and water heating. Part of the solution is a program intended to serve as a universal calculation tool for the design and economic evaluation of the proposed solar system, allowing a comparison with commonly used sources for space and water heating.
39

Němec, Martin. "Stavebně technologický projekt sídla firmy Snowboard Zezula." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2014. http://www.nusl.cz/ntk/nusl-227055.

Full text
Abstract:
In this master's thesis I focus on the construction technology project for the headquarters of the Snowboard Zezula company. The major part of the thesis covers a technical report for the construction technology project, a time and financial plan of the construction, a realization study of the main technological stage, a plan of the main construction machinery, a project of the construction site equipment with drawings, a financial calculation of the site equipment, and technological rules with an inspection and test plan for the designed operations. Finally, I compare two different designs of a load-bearing floor structure and calculate the formwork removal time together with an economic calculation of the number of construction joints.
40

Schneider, Dirk. "Untersuchung von Methoden zur Früherkennung von Bränden in Wald- und Vegetationsgebieten." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-227018.

Full text
Abstract:
Dissertation of Chief Fire Officer Dipl.-Ing. M. Sc. Dirk Schneider for achieving the academic degree of Dr.-Ing. at the Faculty of Forestry, Geo and Hydro Sciences of the Technical University of Dresden, titled "Early Detection of Fires in Areas of Forests and Other Vegetation". Fires threaten and destroy extensive forest and vegetation areas every year, endangering people and their settlements, putting significant pressure on the environment, and destroying resources of considerable value. The expenditures in manpower, logistics, and finance for safety in general and fire suppression in particular are considerable. To minimize these varied and extensive consequences of fires, early detection that makes an effective firefighting strategy possible is desirable. Such early detection is particularly important in remote, large-scale areas and in territories not under observation by the population, especially where they are subject to increased or high vulnerability. After investigating the causes that repeatedly lead to forest fires, not only in the Federal Republic of Germany but worldwide, the author describes traditional and modern methods for the early detection of fires in areas of forest and other vegetation. The author also develops a catalog of performance requirements, based on practical and economic experience, with which novel early-warning systems can be developed and against which the systems and methods described in the present study are assessed and compared. The comparison of the various early-warning systems considers not only technical features but also the economic perspective; financial calculation methods, staff costs, and the peculiarities of public administration are particularly noted. The author also shows the parameters that influence the selection of an appropriate early-warning system for monitoring forest and vegetation areas.
It becomes clear that it is the scene of the incident with its specific parameters that determines the most useful early warning system.
41

Vondrák, Tomáš. "Aplikace vybraných způsobů ocenění na rodinný dům v Kamenném Újezdu." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-316897.

Full text
Abstract:
The theme of this master's thesis is the application of selected valuation methods to a house in Kamenný Újezd. The thesis first lays a theoretical foundation describing the basic principles of the valuation methods used, including basic terminology and legislation, and then applies this foundation in practice. The practical part includes a description of the location of the family house, a description of the house itself, the situation on the local real estate market, and the valuation of the selected family house by cost methods and by direct comparison; the land is valued using the Naegeli method. The valuation concludes with an evaluation of each valuation method, a comparison of the calculated prices with the prices given by the valuation ordinance, and a recapitulation of the valuation results.
42

Bašista, Ján. "Zastřešení objektu pro společenské účely." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2013. http://www.nusl.cz/ntk/nusl-226074.

Full text
Abstract:
The work presents a static reassessment of the existing roof structure of a building used for social purposes and proposes two design variants for a new roof structure. The building has a rectangular ground plan of 30 m x 47 m, and the roof structure rests on load-bearing perimeter concrete walls. In addition to the loads given by the function of the building and by the climatic zone, an extra load of 2 tonnes suspended at any point of the structure is considered, owing to a special requirement for working and technological equipment. Variant no. 1 is designed in steel S355. Variant no. 2 is designed as a combination of GL24h glued laminated timber and EN-AW 5083 aluminium. Both variants have 11 transverse load-bearing girders. The purlins are perpendicular to the girders and rest on them. Stiffness is ensured by roof bracing.
43

Minks, Ondřej. "Materiály pro vinutí elektrických strojů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241958.

Full text
Abstract:
This work deals with the option of substituting aluminum for copper as the winding material of an induction motor, in several variants, and compares them. The advantages and disadvantages of both materials for this application are discussed. The starting point is a machine with a power of 29 kW and 2p = 4, which serves as the basis for the other variants. The default copper-wound machine is first verified against catalog data, followed by the verification of several aluminum-wound variants. The machine designs are verified using three programs: RMxprt, ANSYS Maxwell, and Matlab. In all of these programs, the machine calculations were carried out for copper and aluminum windings at a point close to the nominal operating point. The final results are evaluated, and thermal and ventilation calculations of the machine are also included.
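The core electrical trade-off behind the copper/aluminum comparison can be illustrated with the plain resistance formula R = ρL/A. The conductor length and cross-section below are invented; only the 20 °C resistivities are standard handbook values.

```python
# For equal slot fill (same conductor length and cross-section), an aluminum
# winding has higher DC resistance, and thus higher I^2*R loss, than copper
# by the ratio of resistivities. Geometry values are illustrative.

RHO_CU = 1.68e-8  # copper resistivity at 20 degC, ohm*m
RHO_AL = 2.82e-8  # aluminum resistivity at 20 degC, ohm*m

def winding_resistance(rho, length_m, area_m2):
    """DC resistance of a conductor: R = rho * L / A."""
    return rho * length_m / area_m2

r_cu = winding_resistance(RHO_CU, length_m=120.0, area_m2=2.0e-6)  # 1.008 ohm
r_al = winding_resistance(RHO_AL, length_m=120.0, area_m2=2.0e-6)
loss_ratio = r_al / r_cu  # ≈ 1.68: same current gives ~68 % more winding loss
```

This is why aluminum variants typically need a larger slot area or accept lower efficiency, which is the design trade-off the thesis examines in detail.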
44

Guo, Duen-Bang, and 郭敦邦. "Comparison of Three Deconvolution Techniques for Relative Cerebral Blood Flow Calculation." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/17288762730627353980.

Full text
Abstract:
Master's thesis
National Yang-Ming University
Institute of Radiological Sciences
93
Dynamic susceptibility contrast magnetic resonance imaging is a technique for measuring cerebral blood perfusion. To quantify relative cerebral blood flow (rCBF), an arterial input function (AIF) must be measured and deconvolved from the tissue concentration-time curve. Singular value decomposition (SVD) is widely used, but it is severely affected by tracer arrival-time delay and dispersion between the AIF and the tissue signal. Another method, the Fourier transform (FT), is almost insensitive to time delay but is still affected by tracer recirculation and noise. Circulant SVD is less sensitive to time delay but consistently underestimates rCBF. The purpose of this thesis research is to evaluate the effects of delay and dispersion on the quantification of rCBF using three deconvolution techniques. In computer simulations, an exponential decay function is used as the tissue residue function. A model-dependent calculation is proposed for the FT method and a model-independent calculation for the SVD methods. Our results show that fitting the spectrum of the residue function in the FT method gives more accurate rCBF than either SVD method, although the FT method takes more computational time to remove recirculation. The rCBF values given by all three deconvolution techniques are underestimated in the presence of dispersion, but the performance of the FT method surpasses the two SVD methods.
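The SVD deconvolution discussed above can be sketched as follows. This is a generic textbook-style implementation, not the thesis code; the AIF, residue function, and truncation threshold are invented for a clean, noiseless demonstration.

```python
import numpy as np

# Minimal sketch of truncated-SVD deconvolution for DSC-MRI: the tissue
# curve is modeled as C = A @ R, where A is the lower-triangular convolution
# matrix built from the AIF and R is CBF times the residue function.
# rCBF is read off the maximum of the deconvolved R.

def svd_deconvolve(aif, tissue, dt, threshold=0.2):
    n = len(aif)
    idx = np.subtract.outer(np.arange(n), np.arange(n))   # idx[i, j] = i - j
    A = np.where(idx >= 0, aif[np.clip(idx, 0, n - 1)], 0.0) * dt
    U, s, Vt = np.linalg.svd(A)
    # Zero out small singular values to limit noise amplification
    s_inv = np.where(s > threshold * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))  # estimate of CBF * R(t)

# Noiseless toy example: exponential AIF, exponential residue, CBF = 1.0
dt = 1.0
t = np.arange(60) * dt
aif = np.exp(-t / 2.0)
residue = np.exp(-t / 10.0)            # exponential decay, as in the simulations
tissue = np.convolve(aif, residue)[:60] * dt
rcbf = svd_deconvolve(aif, tissue, dt).max()   # ≈ 1.0 for this clean example
```

With noisy data, the truncation threshold trades noise suppression against underestimation of the residue peak, which is one source of the rCBF bias the thesis investigates.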
45

Kao, Li-Hua, and 郭麗華. "Comparisons of Indirect Cost Calculation Methods for Compensation Claim." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/08043343054739256969.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Civil Engineering (master's and doctoral program)
93
The claim that a contractor proposes in a project dispute includes direct and indirect costs. Direct cost is spent on physical construction items and is listed as contract items. Indirect cost, by contrast, is spent on supporting the company or site office in completing the project and has no definite contract items; it is therefore treated as overhead, accounted for as a certain percentage of direct cost.

In a project dispute or lawsuit, the amount of indirect cost is often contested. Because it is difficult to know whether, and how much, indirect cost the contractor has actually spent, the client is usually unwilling to pay for it. Compensation for this part is viewed as relatively inconsistent, and the judge's opinion seems to be the key to the decision in most cases. As a result, a reasonable method is needed to calculate indirect cost for compensation.

The purpose of this research is to explore methods for calculating indirect cost and to identify the advantages and preconditions of the different methods. First, indirect cost is classified into site overhead and home-office overhead. Second, the domestic practice, which usually uses the Actual Cost Method and the Proportion Method, is analyzed and compared with foreign methods such as the Eichleay Formula, the Total Direct Cost Method, and the Canadian Method, in order to generalize the suitable situations, preconditions, and limitations of these methods.

Finally, this thesis analyzes six cases of domestic projects, classifies their indirect costs to find the rules of compensation, and examines the reasons for the success or failure of compensation claims.
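Of the foreign methods listed, the Eichleay Formula has a compact three-step form that is easy to show in code. The numbers below are invented for illustration and do not come from the six analyzed cases.

```python
# The Eichleay Formula allocates home-office overhead to a compensable
# delay in three steps. Illustrative figures only.

def eichleay_damages(contract_billings, total_billings, total_overhead,
                     contract_days, delay_days):
    # 1. Overhead allocable to this contract, by the billing ratio
    allocable = total_overhead * contract_billings / total_billings
    # 2. Daily overhead rate over the contract performance period
    daily_rate = allocable / contract_days
    # 3. Compensation = daily rate times days of compensable delay
    return daily_rate * delay_days

damages = eichleay_damages(
    contract_billings=2_000_000,   # billings for the delayed contract
    total_billings=10_000_000,     # firm-wide billings, same period
    total_overhead=1_500_000,      # firm-wide home-office overhead
    contract_days=500,             # contract performance period in days
    delay_days=60,                 # compensable delay
)
# damages = 1_500_000 * 0.2 / 500 * 60 = 36_000
```

The billing-ratio allocation is what distinguishes Eichleay from the simpler Proportion Method mentioned above, which marks overhead up as a flat percentage of direct cost.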
46

Chang, Ai-Fu, and 章艾茀. "Calculation and comparison of interfacial tension for binary and ternary aqueous mixtures." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/61089821606023572520.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Chemical Engineering
91
Interfacial tension is an important property in separation processes. However, neither experimental data nor theoretical models have been studied extensively in the previous literature. In this study, a comparison of calculation models for the interfacial tension of binary and ternary aqueous systems is presented. New parameters for these models are obtained by regressing experimental data from the literature. For binary aqueous systems, the applicability of the models is tested against 116 systems. The N-A model shows the lowest deviation for the homologous series of alcohols, alkanes, and acids. For systems with complicated functional groups, the D-B model is recommended when solubility data are known; when no solubility data are available, the LSER model is more suitable. For ternary aqueous systems, the Fu-1989 model (k = 2) is the most accurate method when interfacial tension and solubility data for the aqueous organic system are available; the Fu-1986 model is recommended for cases without such data.
47

Chiang, Hou-Te, and 江厚德. "An Empirical Comparison of Various Approaches in Calculating Value at Risk." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/52066381284873352767.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Statistics
104
Recently, risk management has become an important issue, and value at risk (VaR) is an index for measuring market risk. This thesis adopts several methods to calculate VaR, both methods without distributional assumptions and methods with them. The assumption-free methods include historical simulation, filtered historical simulation, and the RiskMetrics method; the assumption-based methods include the GARCH-normal and GARCH-t models. To check the accuracy of the VaR calculation methods, we consider Kupiec's POF coverage test, Kupiec's TUFF coverage test, the independence coverage test, and the conditional coverage test, focusing on which of the VaR calculation methods is more accurate.
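Two of the ingredients above, historical-simulation VaR and Kupiec's proportion-of-failures (POF) backtest, can be sketched as follows. The return series is synthetic and the function names are mine.

```python
import numpy as np

# Sketch: one-day VaR from the empirical return distribution, and the
# Kupiec POF likelihood-ratio statistic for backtesting it.

def historical_var(returns, alpha=0.01):
    """One-day VaR as the alpha-quantile loss of the empirical distribution."""
    return -np.quantile(returns, alpha)

def kupiec_pof(n_obs, n_exceptions, alpha=0.01):
    """LR statistic; compare against the chi-square(1) critical value."""
    p, x, t = alpha, n_exceptions, n_obs
    if x == 0:
        return -2 * t * np.log(1 - p)
    phat = x / t
    log_null = (t - x) * np.log(1 - p) + x * np.log(p)
    log_alt = (t - x) * np.log(1 - phat) + x * np.log(phat)
    return -2 * (log_null - log_alt)

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=1000)    # synthetic daily returns
var_99 = historical_var(returns, alpha=0.01)  # roughly 2.3 % for N(0, 1 %)
lr = kupiec_pof(n_obs=1000, n_exceptions=12, alpha=0.01)
```

The POF statistic is compared with the chi-square critical value with one degree of freedom (3.84 at the 5 % level); 12 exceptions in 1000 days at the 1 % level lies well inside the acceptance region.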
48

Chiu, Chen-Wei, and 邱湞瑋. "A Comparison of Calculating Potential Evapotranspiration Methods Applied to the Medium Elevation Area." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/56949181861401511284.

Full text
Abstract:
Master's thesis
National Taiwan University
School of Forestry and Resource Conservation
94
This research chose six methods of calculating potential evapotranspiration (PET) and compared them with the Pan method to discuss their suitability in medium-elevation areas such as Alishan and Sun Moon Lake. The six methods are the Hamon, Thornthwaite, Makkink, Priestley-Taylor, Turc, and Penman-Monteith methods. At the Sun Moon Lake meteorological station, based on the daily and monthly PET calculated by the six methods, the radiation-based Priestley-Taylor method correlates best with the Pan method, whereas the Makkink method correlates best at the Alishan meteorological station. Results at both stations show that methods that consider radiation have better suitability, which is similar to Liu's study in 2004. At the Sun Moon Lake station, the monthly PET calculated by the Turc method, which uses daily net radiation as a parameter, is overestimated throughout the year; the Makkink method, which uses daily radiation, is underestimated throughout the year; and the Priestley-Taylor method, which uses daily radiation, is underestimated during winter. At the Alishan station, the monthly PET calculated by the Priestley-Taylor method, which uses daily net radiation, is overestimated in summer, whereas the Makkink and Turc methods, which use daily radiation, give values similar to the Pan method; the Turc method is overestimated during summer and the Makkink method is underestimated during winter. Therefore, when using these methods to calculate potential evapotranspiration, it is necessary to consider differences in local radiation to avoid discrepancies between the calculated results and the data tendency.
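As an example of the radiation-based group, the Priestley-Taylor method can be written in a few lines. The constants follow common FAO-56 conventions, and the temperature and radiation inputs below are illustrative, not station records.

```python
import math

# Priestley-Taylor: PET = alpha * slope / (slope + gamma) * (Rn - G) / lambda.
# Saturation-vapour-pressure slope and psychrometric constant per FAO-56.

def priestley_taylor_pet(t_mean_c, rn_mj, g_mj=0.0, alpha=1.26, gamma=0.0665):
    """Daily PET in mm/day from mean air temperature (degC) and net radiation (MJ/m2/day)."""
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))  # kPa
    slope = 4098.0 * es / (t_mean_c + 237.3) ** 2                  # kPa/degC
    lam = 2.45                                                     # MJ/kg, latent heat
    return alpha * slope / (slope + gamma) * (rn_mj - g_mj) / lam

pet = priestley_taylor_pet(t_mean_c=20.0, rn_mj=12.0)  # ≈ 4.2 mm/day
```

The explicit dependence on net radiation Rn is what the abstract's conclusion turns on: local radiation differences propagate directly into the PET estimate.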
49

Chen, Hung-Chiao, and 陳虹巧. "Comparison of different calculation models' application to hydraulic conductivity in Lienhuachih area." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/88110535556029473328.

Full text
Abstract:
Master's thesis
National Taiwan University
School of Forestry and Resource Conservation
103
The study applied four saturated-unsaturated hydraulic conductivity models, namely the Gardner exponential model (GE), the Brooks and Corey model (BC), the Gardner rational power model (GP), and the van Genuchten-Mualem model (VGM), to simulate hydraulic conductivities under varied water pressure head conditions. Three mathematical methods, solving simultaneous equations, regression analysis, and numerical simulation, were adopted to determine the parameters of the four models. The study area was at Lienhuachih watersheds No. 4 and No. 5. Four locations were selected along the ridge of each watershed, and at each location field infiltration tests were performed with a tension infiltrometer at the soil surface and at a depth of 20 cm. After the soil dried following the infiltration test, undisturbed soil samples were excavated at the test locations to analyze their physical properties and to understand how these properties affect the measured hydraulic conductivity. In addition, the saturated hydraulic conductivities previously measured at the locations with a double-ring infiltrometer by Yeng-Bang Tsai (2013) were used to adjust the hydraulic conductivities simulated by the models under near-saturated conditions. According to the analyzed data, the soil physical properties of watershed No. 4 were more homogeneous than those of watershed No. 5, and the differences between depths were also smaller. The field infiltration data showed that the dispersion of hydraulic conductivities at the soil surface and at a depth of 20 cm in watershed No. 4 was smaller than in watershed No. 5. Consequently, in the fitted saturated-unsaturated hydraulic conductivity models, the parameters, including the soil texture/structure parameter and the saturated hydraulic conductivity, were more similar among the locations of watershed No. 4 than among those of watershed No. 5.
In the field infiltration test, a higher water pressure head, under which gravitational force dominates over capillary force, makes water flow only through part of the large soil pores instead of filling all pores, which probably leads to underestimating the hydraulic conductivity. When more rainfall reached the soil surface before the infiltration test, the soil pores could be filled effectively, which enhanced the measured hydraulic conductivity. Among the mathematical methods, numerical simulation yielded the lowest error index (RMSE), making it the best way to estimate the parameters of the four models. Moreover, the RMSE values of the GP and VGM models were both below 10^-6, better than those of the GE and BC models. According to the GP and VGM models, the calculated values of the soil texture/structure parameter were between 12 m^-1 and 36 m^-1, indicating that the soil of the research site at Lienhuachih is well structured, which agrees with the analyzed soil physical properties.
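The Gardner exponential (GE) model mentioned above has the simplest closed form and illustrates the role of the soil texture/structure parameter. The saturated conductivity below is invented; the alpha value is merely chosen inside the 12-36 m^-1 range reported for the site.

```python
import math

# Gardner exponential model: K(h) = Ks * exp(alpha * h), where h <= 0 is the
# water pressure head. At saturation (h = 0) it reduces to K = Ks; drier soil
# (more negative h) conducts exponentially less water.

def gardner_conductivity(h_m, ks, alpha):
    """Unsaturated conductivity K(h), same units as Ks; h_m in metres (h <= 0)."""
    return ks * math.exp(alpha * h_m)

ks = 1e-5        # saturated hydraulic conductivity, m/s (illustrative)
alpha = 20.0     # soil texture/structure parameter, 1/m (within 12-36 range)
k_sat = gardner_conductivity(0.0, ks, alpha)   # equals Ks
k_dry = gardner_conductivity(-0.1, ks, alpha)  # exp(-2) ≈ 0.135 of Ks at h = -10 cm
```

A larger alpha, typical of coarse or well-structured soils, makes conductivity fall off faster with suction, which is consistent with the well-structured Lienhuachih soils described above.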
50

Weng, Jian-You, and 翁健又. "Analysis and Comparison of Dose Calculation in a Heterogeneous Interface for Different Algorithm." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/32819436359673989885.

Full text
Abstract:
Master's thesis
Central Taiwan University of Science and Technology
Department and Graduate Institute of Medical Imaging and Radiological Sciences
102
The aim was to compare the doses calculated by the Acuros XB (AXB) algorithm and the analytical anisotropic algorithm (AAA) with doses measured with EBT3 films in heterogeneous media for intensity-modulated radiation therapy (IMRT) plans of nasopharyngeal carcinoma (NPC). Two-dimensional dose distributions adjacent to air- and bone-equivalent phantom materials were measured with GafChromic EBT3 films for IMRT treatments of NPC cases. The doses near and within the nasopharyngeal (NP) region of an IMRT phantom containing heterogeneous media were also measured with EBT3 films. The measured dose distributions were compared with those calculated by AAA and AXB, using the FilmQA dosimetry system for the analysis. For the planar dose distribution within the NP region of the IMRT phantom, the percentages of pixels passing gamma analysis with the ±3%/3 mm criteria, averaged over the IMRT plans, were 96.69% for AAA and 97.09% for AXB; within the target, the relative percentage deviations between calculated and measured data were 3-5% for AAA and 2-4% for AXB. For the NP region of the Rando phantom, the averaged gamma passing rates were 95.28% for AAA and 95.53% for AXB, and the relative percentage deviations within the target were 5-7% for AAA and 4-6% for AXB. In general, the verification measurements demonstrated that both algorithms produce acceptable accuracy compared to the measured data.
The GafChromic film results indicate that AXB is slightly more accurate than AAA for dose calculation adjacent to and within heterogeneous media. Users should be aware of the differences in calculated target doses between AXB and AAA, especially in bone, for IMRT in NPC cases.
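The ±3%/3 mm gamma analysis used for the film comparisons can be sketched in one dimension. This is a simplified global-gamma search over a synthetic profile, not the FilmQA implementation, which operates on 2D film planes.

```python
import numpy as np

# For every measured point, search the calculated profile for the minimum
# combined dose-difference / distance-to-agreement metric; the point passes
# if that minimum gamma is <= 1.

def gamma_pass_rate(measured, calculated, spacing_mm,
                    dose_tol=0.03, dist_tol_mm=3.0):
    ref_max = calculated.max()                      # global normalization
    positions = np.arange(len(calculated)) * spacing_mm
    passed = 0
    for i, d_meas in enumerate(measured):
        dose_term = (calculated - d_meas) / (dose_tol * ref_max)
        dist_term = (positions - i * spacing_mm) / dist_tol_mm
        gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
        if gamma <= 1.0:
            passed += 1
    return 100.0 * passed / len(measured)

x = np.linspace(-30, 30, 61)              # 1 mm grid, in mm
calc = np.exp(-(x / 15.0) ** 2)           # synthetic dose profile
meas = np.exp(-((x - 1.0) / 15.0) ** 2)   # same profile shifted 1 mm
rate = gamma_pass_rate(meas, calc, spacing_mm=1.0)  # 100 % within 3%/3 mm
```

A 1 mm spatial shift passes entirely because the 3 mm distance criterion absorbs it; only dose errors larger than 3 % of the global maximum at all nearby points would cause failures like those quantified in the abstract.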
