
Dissertations / Theses on the topic 'Direct method of standardization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Direct method of standardization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Štefaňáková, Michaela. "Porovnání potratovosti v zemích střední Evropy." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-205944.

Full text
Abstract:
The objective of this diploma thesis is the analysis and comparison of abortion in selected countries of Central Europe: the Czech Republic, Slovakia, Poland, and Hungary. The thesis analyses in detail the controversial topic of abortion in terms of the various factors that affect it. For a detailed comparison of the differences between countries, abortion indicators and the direct method of standardization are used, which make it possible to compare the countries while eliminating the influence of age structure. The analysis showed that even though these are neighbouring countries of Central Europe, the situation regarding abortion differs significantly, especially due to legislative changes and other important factors.
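The direct method of standardization mentioned in this abstract is simple to compute: each country's age-specific rates are applied to one common standard population, so that differences in age structure cancel out. A minimal sketch, with hypothetical rates and standard-population weights (none of these numbers come from the thesis):

```python
# Direct standardization: apply each country's age-specific rates to a
# common standard population, then compare the resulting summary rates.
def directly_standardized_rate(age_specific_rates, standard_population):
    """age_specific_rates: events per person in each age group;
    standard_population: persons per age group in the chosen standard."""
    weighted_events = sum(r * n for r, n in zip(age_specific_rates, standard_population))
    return weighted_events / sum(standard_population)

# Hypothetical abortion rates per woman in three age groups, two countries
rates_a = [0.010, 0.025, 0.008]
rates_b = [0.012, 0.020, 0.015]
standard = [1000, 1500, 800]   # hypothetical standard population

asr_a = directly_standardized_rate(rates_a, standard)
asr_b = directly_standardized_rate(rates_b, standard)
```

Because both standardized rates refer to the same standard population, any remaining difference between `asr_a` and `asr_b` is not attributable to age structure.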
APA, Harvard, Vancouver, ISO, and other styles
2

Hensley, Eric Charles. "The Direct Method of Teaching Latin." Thesis, The University of Arizona, 2015. http://hdl.handle.net/10150/579266.

Full text
Abstract:
This paper examines the Direct Method of language instruction and how it has been implemented in Latin pedagogy. It shows that the method has been used in Latin instruction throughout history and has proven effective. A section of textbook reviews also shows how the Direct Method has evolved and how it is used in the classroom today. Further, a study was conducted in which direct methodology was used in a classroom setting, showing it to be an effective means of instruction.
3

Capetillo, Pascal, and Jonathan Hornewall. "Introduction to the Hirota Direct Method." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297554.

Full text
Abstract:
The primary subject matter of the report is the Hirota Direct Method, and the primary goal of the report is to describe and derive the method in detail, and then use it to produce analytic soliton solutions to the Boussinesq equation and the Korteweg-de Vries (KdV) equation. Our hope is that the report may also serve as an introduction to soliton theory at an undergraduate level. The report follows the structure of first introducing Hirota's bilinear operator and giving an account of its relevant properties. The properties of the operator are then used to find soliton solutions for differential equations that can be expressed in a bilinear form. Thereafter, a set of methods for finding the bilinear form of a more general non-linear differential equation is presented. Finally, we apply these tools to the Boussinesq and KdV equations respectively to derive their soliton solutions.
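For reference, the bilinear operator this abstract refers to can be written down compactly, and the KdV equation illustrates the method; the following is the standard textbook form of these results, not a quotation from the thesis:

```latex
% Hirota's bilinear D-operator
D_x^m D_t^n\, f \cdot g =
\left(\partial_x - \partial_{x'}\right)^m
\left(\partial_t - \partial_{t'}\right)^n
f(x,t)\, g(x',t') \Big|_{x'=x,\; t'=t}

% KdV, u_t + 6 u u_x + u_{xxx} = 0, under the substitution u = 2(\log f)_{xx},
% becomes the bilinear equation
\left(D_x D_t + D_x^4\right) f \cdot f = 0,
\qquad f = 1 + e^{\,kx - k^3 t + \eta_0} \quad \text{(one-soliton solution)}
```

Substituting the exponential into the bilinear equation reduces it to the dispersion relation $\omega = -k^3$, which is why the simple one-soliton ansatz closes exactly.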
4

Kang, Sangwoo. "Direct sampling method in inverse electromagnetic scattering problem." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS417/document.

Full text
Abstract:
The non-iterative imaging problem within the inverse electromagnetic scattering framework using the direct sampling method (DSM) is considered. Thanks to the combination of the asymptotic expression of the scattered near-field or far-field and the small-obstacle hypothesis, analytical expressions of the DSM indicator function are presented in various configurations: 2D/3D, mono-/multi-static, limited-/full-view, and mono-/multi-frequency. Once the analytical expression is obtained, its structure is analyzed and improvements are proposed. Our approach is validated using synthetic data, and experimental data when available. First, the mathematical structure of the DSM at a fixed frequency in various 2D scattering problems is established, allowing a theoretical analysis of its efficiency and limitations. To overcome the known limitations, an alternative direct sampling method (DSMA) is proposed. Next, the multi-frequency case is investigated by introducing and analyzing the multi-frequency DSM (MDSM) and the multi-frequency DSMA (MDSMA). Finally, our approach is extended to 3D inverse electromagnetic scattering problems, for which the choice of the polarization of the test dipole is a key parameter. Thanks to our analytical analysis, this choice can be made based on the polarization of the incident field.
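For orientation, a DSM indicator function of the kind this literature uses (a common generic form, not necessarily the exact variant derived in the thesis) tests the correlation of the measured scattered field $u^s$ with the background Green function of a sampling point $z$:

```latex
\mathcal{I}(z) \;=\;
\frac{\left| \big\langle u^{s},\, G(\cdot, z) \big\rangle_{L^2(\Gamma)} \right|}
     {\big\| u^{s} \big\|_{L^2(\Gamma)} \, \big\| G(\cdot, z) \big\|_{L^2(\Gamma)}}
```

The indicator is close to $1$ when $z$ lies on or near a scatterer and small elsewhere, which is what makes the method non-iterative: one evaluation per sampling point, no forward solves in a loop.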
5

Clark, Matthew. "Direct-search method for the computer design of holograms." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301220.

Full text
6

Armour, Jessica D. "On the Gap-Tooth direct simulation Monte Carlo method." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/72863.

Full text
Abstract:
This thesis develops and evaluates Gap-tooth DSMC (GT-DSMC), a direct Monte Carlo simulation procedure for dilute gases combined with the Gap-tooth method of Gear, Li, and Kevrekidis. The latter was proposed as a means of reducing the computational cost of microscopic (e.g. molecular) simulation methods using simulation particles only in small regions of space (teeth) surrounded by (ideally) large gaps. This scheme requires an algorithm for transporting particles between teeth. Such an algorithm can be readily developed and implemented within direct Monte Carlo simulations of dilute gases due to the non-interacting nature of the particle-simulators. The present work develops and evaluates particle treatment at the boundaries associated with diffuse-wall boundary conditions and investigates the drawbacks associated with GT-DSMC implementations which detract from the theoretically large computational benefit associated with this algorithm (the cost reduction is linear in the gap-to-tooth ratio). Particular attention is paid to the additional numerical error introduced by the gap-tooth algorithm as well as the additional statistical uncertainty introduced by the smaller number of particles. We find the numerical error introduced by transporting particles to adjacent teeth to be considerable. Moreover, we find that due to the reduced number of particles in the simulation domain, correlations persist longer, and thus statistical uncertainties are larger than DSMC for the same number of particles per cell. This considerably reduces the computational benefit associated with the GT-DSMC algorithm. We conclude that the GT-DSMC method requires more development, particularly in the area of error and uncertainty reduction, before it can be used as an effective simulation method.
7

Ng, Jack Hoy-Gig. "Development of a direct metalisation method for micro-engineering." Thesis, Heriot-Watt University, 2013. http://hdl.handle.net/10399/2690.

Full text
Abstract:
This research concentrates on the establishment of a metalisation and micro-patterning technique that eliminates metal evaporation and/or photoresist molding procedures. The process design is chosen from an analysis of the broad field of direct metalisation techniques, where novel photocatalysts or photoreducing agents are increasingly employed to create new processes. The new photolithographic process in this study introduces two novel photoreducing agents for additive metal thin-film fabrication: methoxy poly(ethylene glycol) and photosystem I. This work proves the concept of using light energy to directly reduce metal ions incorporated within an ion-exchanged polyimide substrate to produce metal thin films. The patterning step can be operated at atmospheric pressure, in a dry environment, using a coating of the photoreducing agent. This process offers a significant improvement over prior related work that relied on a water layer to enable the metalisation. Of particular importance for this process is the influence of light energy dose and heat treatment, which promote silver nanoparticle growth at the cost of degradation of the substrate polymer. The investigation was carried out thoroughly through laser-writing experiments over a selected range of laser powers and scan speeds. To complement the phenomena observed in the laser experiments, prolonged UV exposure and heat treatment experiments were carried out to confirm the hypothesis postulated in this thesis. The morphology of the silver nanoparticles produced, the changes of the substrate surface and the adhesion of electroless plating were characterised. Results indicate that UV irradiation with the energy density required for reasonable production speed causes inevitable molecular damage to the polymer substrate. Photosystem I was found to catalyse the production of visually similar silver thin films with light sources in the blue region.
Using a similar light intensity, the exposure time was reduced by an order of magnitude whilst the degradation phenomenon observed during the UV process appears to be eradicated. With the fundamentals of the process established in this thesis, future optimization is suggested for the transition from a proof of concept to industrial implementation.
8

Català i Castro, Frederic. "Implementation of the direct force measurement method in optical tweezers." Doctoral thesis, Universitat de Barcelona, 2018. http://hdl.handle.net/10803/665757.

Full text
Abstract:
Mechanics is the branch of physics that studies movement and force, and it plays an evident role in life. The swimming dynamics of bacteria in search of nutrients, organelle transport by molecular motors, and the sensing of different kinds of stimuli by neurons are some of the processes that need to be explained in terms of mechanics. At a human scale, distance and force can be measured with a ruler and a calibrated spring. However, assessing these magnitudes becomes a considerable challenge at the micron scale. Among several techniques, optical tweezers stand out as a non-invasive tool capable of using light to grab micron-sized particles and of measuring position and force with nanometer (10⁻⁹ m) and femtonewton (10⁻¹⁵ N) accuracy. Small specimens, such as a bacterium or a cell membrane, can be trapped and effectively manipulated with a focused laser beam. The light momentum exchanged with the trapped sample can be used to measure the otherwise inaccessible forces that govern biological processes. Optical tweezers have made it possible, after trapping cell vesicles in vivo, to measure the pulling force exerted by molecular motors such as kinesin. Flagellar propulsion forces and energy generation have been investigated by optically trapping the head of a bacterium. Cell membranes have been deformed with optical tweezers and the underlying tension determined. However, the exact forces exerted by optical tweezers are difficult to measure beyond the in vitro approach. In order to calibrate the optical traps, the trapped samples often need to be spherical or present some degree of symmetry, information on the experimental parameters is required, and one needs tight control of several variables that determine the trapping dynamics, such as medium homogeneity and temperature.
A cutting-edge method, developed in the Optical Trapping Lab – BiOPT at the Universitat de Barcelona, uses the light-momentum change as a direct reading of the force exerted by an optical trap. This frees experiments from the need to calibrate the optical traps and makes it possible to perform accurate force-measurement experiments in vivo and with irregular samples. In my PhD thesis, the direct force detection method for optical tweezers has been implemented and tested in several such situations. I first give a technical description of the set-up used for the experiments. The use of a spatial light modulator (SLM) for holographic optical tweezers (HOTs), a piezo-electric platform to induce drag forces, and the trapping laser emission characteristics are explained in detail. The light-momentum set-up is tested against certain situations deviating from ideal performance, and steps for the mitigation of several effects are analyzed. Back-scattered light loss is quantified through experiments and numerical simulations and is found to account for an average ±5% uncertainty in force measurements. Then, the method is used to measure forces on irregular samples. First, arbitrary systems composed of microspheres of different kinds are collectively treated as irregular samples, in which the global momentum exchanged with the trapping beam coincides with the total Stokes-drag force. Second, pairs of optical tweezers are used to stably trap cylinders of sizes from 2 µm to 50 µm and to measure forces in accordance with slender-body hydrodynamic theory. Another aspect of the thesis deals with the temperature change induced by water absorption of IR light, one of the major concerns within the optical trapping community: accurate knowledge of the local temperature is needed both to understand thermally driven processes and to assess eventual damage to live specimens.
Here we use direct force measurements to detect changes in viscosity due to laser heating, and we compare the results with heat-transport simulations to draw the main conclusions on this effect. The last goal of my thesis has been the implementation of the method inside tissue. The laser beam is affected by the scattering structures present in vivo, such as refractive-index mismatches across different cells, nuclei, cell membranes and vesicles. As a primary result, although more than 95% of the trapping beam is captured, I quantified this effect to result in an increase of around ±20% in the standard deviation of force measurements. The approach consisted of comparing the trapping force profiles of spherical probes in vitro (water) and in vivo (zebrafish embryos). To conclude, I demonstrate here that the direct force measurement method can be applied in a growing number of experiments for which trap calibration becomes intricate or even impossible. Quantitative measurements become feasible in samples with unknown properties, the most important examples being arbitrary, non-spherical samples and the interior of an embryonic tissue.
9

Ashrafizadeh, Ali. "A direct shape design method for thermo-fluid engineering problems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0017/NQ53484.pdf.

Full text
10

Schlottke-Lakemper, Michael [Verfasser]. "A Direct-Hybrid Method for Aeroacoustic Analysis / Michael Schlottke-Lakemper." München : Verlag Dr. Hut, 2017. http://d-nb.info/1135596190/34.

Full text
11

Jappy, Alan. "A constitutively consistent lower bound, direct shakedown and ratchet method." Thesis, University of Strathclyde, 2014. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=23743.

Full text
Abstract:
When a structure is subject to cyclic loads there is a possibility of it failing by ratcheting, or incremental collapse. In many engineering structures the demonstration of non-ratcheting behaviour is a fundamental requirement of the design and assessment process. Whilst it is possible to use incremental finite element analysis to simulate the cyclic response for a given load case and so demonstrate shakedown or ratcheting, this yields no information on the safety factor. In addition, there are several practical problems in using this approach to determine whether or not a component has achieved shakedown. Consequently, several direct methods which find the loads at the shakedown and ratchet boundaries have been developed in the past three decades. In general, lower bound methods are preferred for design and assessment methodologies. However, to date, the lower bound methods which have been proposed for shakedown and ratchet analysis have not been fully reliable and accurate. In this thesis a lower bound shakedown and ratchet method which is both reliable and accurate is proposed. Previously proposed elastic-plastic lower bound ratchet methods are revisited and modified to understand the limitations of current methods. From this, Melan's theorem is reinterpreted in terms of plasticity modelling and shown to have the same form as a non-smooth multi-yield-surface plasticity model. A new shakedown method is then proposed based on the non-smooth multi-yield-surface plasticity model. The new shakedown method is extended using a two-stage process to determine the ratchet boundary for cyclic loads in excess of the alternating plasticity boundary. Two simplified variants of the ratchet method are also proposed to decrease the computational expense of the proposed ratchet method. Through several common benchmark problems the proposed methods are shown to give excellent agreement with current upper bound methods which have been demonstrated to be accurate.
The flexibility of the shakedown method is demonstrated by extending the method to incorporate temperature dependent yield, hardening and simplified non-linear geometric effects.
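Melan's lower-bound theorem invoked in this abstract can be illustrated in its simplest setting. The sketch below is a hypothetical one-point, uniaxial reduction, not the thesis's multi-yield-surface method: shakedown holds if some time-independent residual stress keeps the combined stress within yield throughout the load cycle.

```python
# Melan's theorem, uniaxial single-point caricature: shakedown holds if a
# constant residual stress rho exists with |rho + sigma_elastic(t)| <= sigma_y
# for every instant of the cycle. For elastic stress cycling in
# [sigma_min, sigma_max], such a rho exists iff the cycle span fits in the
# yield range, giving a closed-form lower-bound load multiplier.
def shakedown_multiplier(sigma_min, sigma_max, sigma_yield):
    """Largest load factor lambda for which a feasible residual stress exists:
    lambda * (sigma_max - sigma_min) <= 2 * sigma_yield."""
    span = sigma_max - sigma_min
    if span == 0.0:
        return float('inf')   # constant load: no alternating plasticity limit here
    return 2.0 * sigma_yield / span

# Hypothetical cycle (MPa): elastic stress swings between -100 and +300,
# yield stress 250. The feasible residual stress that realises the bound
# is rho = -lam * (sigma_max + sigma_min) / 2, centring the scaled cycle.
lam = shakedown_multiplier(-100.0, 300.0, 250.0)
```

The real lower-bound methods in the thesis generalise exactly this feasibility question to full stress tensors and finite element fields, which is what turns it into an optimisation problem.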
12

Ross, Christopher Roger. "Direct and inverse scattering by rough surfaces." Thesis, Brunel University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318675.

Full text
13

Carranza Masso, Laura Carolina. "Standardization and internal validation of a bacteria identification method utilizing focal-plane-array Fourier transformed infrared spectroscopy." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106248.

Full text
Abstract:
Food-borne diseases collectively affect significant portions of the world's population. In Canada, recent food-borne outbreaks and diseases have resulted in significant expenses and resource allocations. Consequently, finding methods that can detect and identify microorganisms in faster, more reliable and more cost-effective ways is a pressing need. Microbial identification methods based on the infrared spectral analysis of microorganisms have been shown to be potentially viable. The recent development of focal-plane-array Fourier transform infrared (FPA-FTIR) spectroscopy provides the means of acquiring thousands of infrared spectra in the time it takes to record a single spectrum. The increased number of infrared spectra of each organism gives infrared-based microbial differentiation and identification methods added reliability. Accordingly, we generated a comprehensive, standardized and internally validated FPA-FTIR-based bacteria identification method. All infrared spectra were collected from bacterial colonies, lifted from agar and deposited on zinc selenide (ZnSe) slides, on an Agilent Excalibur FTIR spectrometer equipped with a UMA-600 infrared microscope and a 32 x 32 (1024-pixel) mercury-cadmium-telluride focal-plane-array detector, operated under Resolutions Pro 4.0 software (Agilent Technologies, Melbourne, Australia). The effects of growth-media formulations, growth time and inactivation techniques on the spectral reproducibility of microorganisms were tested. Most of these variables showed some degree of influence on spectral variability; therefore, a single combination of them was recommended in the form of a laboratory protocol for consistent microbial preparations. This enabled the construction of a comprehensive spectral database including spectra from 180 different microbial strains. The FPA-FTIR-based method was assessed for the discrimination of Campylobacter jejuni and C. coli isolated from poultry.
The method was also evaluated as a tool for the identification of Escherichia coli and Listeria monocytogenes strains isolated from deliberately inoculated food matrices. In all cases, identification of bacteria based on their FPA-FTIR spectra was highly reliable and comparable to standard methods of bacteria identification, provided that the unknown bacteria samples and reference strains were prepared in a consistent manner and that appropriate spectral databases were employed.
14

Yang, Xiaolin. "Direct and Line Based Iterative Methods for Solving Sparse Block Linear Systems." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1543921330763997.

Full text
15

Tugluk, Ozan. "Direct Numerical Simulation Of Pipe Flow Using A Solenoidal Spectral Method." PhD thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614293/index.pdf.

Full text
Abstract:
In this study, which is numerical in nature, direct numerical simulation (DNS) of pipe flow is performed. For the DNS a solenoidal spectral method is employed: the velocity is expanded in divergence-free functions which also satisfy the prescribed boundary conditions, and the Navier-Stokes equations are then projected onto the corresponding dual space. The solenoidal functions are formulated in Legendre polynomial space, which results in more favorable forms for the inner-product integrals arising from the Petrov-Galerkin scheme employed. The developed numerical scheme is also used to investigate the effects of spanwise oscillations and phase randomization on turbulence statistics, and on drag, in turbulent incompressible pipe flow at low to moderate Reynolds numbers (Re ≈ 5000).
16

Terdalkar, Rahul J. "Direct numerical simulation of swirling flows using the front tracking method." Worcester, Mass. : Worcester Polytechnic Institute, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-122007-233351/.

Full text
17

Norikane, Joey Hajime 1963. "An evaluation of the heat balance method for direct transpiration measurement." Thesis, The University of Arizona, 1995. http://hdl.handle.net/10150/291730.

Full text
Abstract:
The measurement of sap flow has been sought after for many years. Various methods have been devised to accomplish this task, one of which is the heat balance method. This method is non-invasive and accurate, but its simplifying assumptions were questionable and needed to be critically examined. This study evaluated the heat balance method and sap flow gauges. The method yielded satisfactory results when compared to the calibration system. The satisfactory results were over a limited range, which exemplified the necessity for the gauges to be calibrated. The heat balance method's simplified heat transfer analysis does not reflect the complexity of the physical situation. Sap flow gauge improvements were suggested.
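The heat balance evaluated in this thesis rests on simple energy bookkeeping: heater power is partitioned into vertical conduction along the stem, radial conduction through the gauge, and the heat carried away by the moving sap, from which the flow rate follows. A minimal sketch with hypothetical gauge readings (variable names and numbers are illustrative, not from the thesis):

```python
# Stem heat balance for sap flow: P_in = Q_vertical + Q_radial + Q_flow,
# and the convective residual Q_flow = F * c_p * dT gives the mass flow F.
CP_WATER = 4.186  # J g^-1 K^-1; sap is approximated as water

def sap_flow_g_per_s(p_in, q_vertical, q_radial, d_temp):
    """Mass flow rate (g/s) from the residual convective term of the balance.
    p_in, q_vertical, q_radial in watts; d_temp = sap temperature rise in K."""
    q_flow = p_in - q_vertical - q_radial   # W convected by the sap
    return q_flow / (CP_WATER * d_temp)

# Hypothetical readings: 0.10 W heater input, 0.02 W vertical and 0.03 W
# radial conduction losses, 2.0 K temperature rise across the heater.
flow = sap_flow_g_per_s(p_in=0.10, q_vertical=0.02, q_radial=0.03, d_temp=2.0)
```

The simplifying assumption the abstract questions is visible here: everything not accounted for by the two conduction terms is attributed to sap flow, so any error in those terms lands directly in `flow`.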
18

Reeve, Thomas Henry. "The method of fundamental solutions for some direct and inverse problems." Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/4278/.

Full text
Abstract:
We propose and investigate applications of the method of fundamental solutions (MFS) to several parabolic time-dependent direct and inverse heat conduction problems (IHCP): in particular, the two-dimensional heat conduction problem, the backward heat conduction problem (BHCP), the two-dimensional Cauchy problem, radially symmetric and axisymmetric BHCPs, the radially symmetric IHCP, inverse one- and two-phase linear Stefan problems, the inverse Cauchy-Stefan problem, and the inverse two-phase one-dimensional nonlinear Stefan problem. The MFS is a collocation method and therefore requires no mesh generation or integration over the solution boundary, making it suitable for solving ill-posed inverse problems such as the BHCP. We extend the MFS proposed in Johansson and Lesnic (2008) for the direct one-dimensional heat equation, and in Johansson and Lesnic (2009) for the direct one-phase one-dimensional Stefan problem, with source points placed outside the space domain of interest and in time. Theoretical properties, including linear independence and denseness, the placement of source points, and numerical investigations are included, showing that accurate results can be obtained efficiently with small computational cost. Regularization techniques, in particular Tikhonov regularization in conjunction with the L-curve criterion, are used to solve the ill-conditioned systems generated by this method. In Chapters 6 and 8, investigating the linear and nonlinear Stefan problems, the MATLAB toolbox lsqnonlin, which is designed to minimize a sum of squares, is used.
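The collocation idea behind the MFS can be sketched on the steady (Laplace) analogue of the heat problems above: fundamental solutions with singularities placed outside the domain are superposed, and their coefficients are fitted to the boundary data, with no mesh and no boundary integration. A minimal illustration on the unit disk; the source radius and the harmonic test solution are chosen for this example only, not taken from the thesis:

```python
import numpy as np

# MFS for the 2D Laplace equation on the unit disk: approximate u as a sum of
# fundamental solutions -log|x - y|/(2*pi) with source points y OUTSIDE the
# domain, and collocate the Dirichlet data on the boundary.
n_col, n_src, R = 64, 64, 2.0
t = 2 * np.pi * np.arange(n_col) / n_col
s = 2 * np.pi * np.arange(n_src) / n_src
col = np.column_stack((np.cos(t), np.sin(t)))        # collocation points, |x| = 1
src = R * np.column_stack((np.cos(s), np.sin(s)))    # source points, |y| = 2

def exact(p):
    return p[:, 0]**2 - p[:, 1]**2                   # known harmonic test solution

# A[i, j] = fundamental solution centred at src[j], evaluated at col[i]
dist = np.linalg.norm(col[:, None, :] - src[None, :, :], axis=2)
A = -np.log(dist) / (2 * np.pi)
coeff, *_ = np.linalg.lstsq(A, exact(col), rcond=None)  # least-squares fit

# Evaluate the MFS approximation at an interior point and compare
p = np.array([[0.3, 0.4]])
d = np.linalg.norm(p[:, None, :] - src[None, :, :], axis=2)
u_mfs = (-np.log(d) / (2 * np.pi)) @ coeff
err = abs(u_mfs[0] - (0.3**2 - 0.4**2))
```

The system matrix is severely ill-conditioned, which is precisely why the thesis pairs the MFS with Tikhonov regularization and the L-curve criterion; here the SVD-based least-squares solve plays that stabilising role.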
APA, Harvard, Vancouver, ISO, and other styles
19

Brock, Jerry S. "A consistent direct-iterative inverse design method for the Euler equations." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40033.

Full text
Abstract:
A new, consistent direct-iterative method is proposed for the solution of the aerodynamic inverse design problem. Direct-iterative methods couple analysis and shape modification methods to iteratively determine the geometry required to support a target surface pressure. The proposed method includes a consistent shape modification method wherein the identical governing equations are used in both portions of the design procedure. The new shape modification method is simple, having been developed from a truncated, quasi-analytical Taylor's series expansion of the global governing equations. This method includes a unique solution algorithm and a design tangency boundary condition which directly relates the target pressure to shape modification. The new design method was evaluated with an upwind, cell-centered finite-volume formulation of the two-dimensional Euler equations. Controlled inverse design tests were conducted with a symmetric channel where the initial and target geometries were known. The geometric design variable was a channel-wall ramp angle, θ, which is nominally five degrees. Target geometries were defined with ramp angle perturbations of Δθ = 2%, 10%, and 20%. The new design method was demonstrated to accurately predict the target geometries for subsonic, transonic, and supersonic test cases: M = 0.30, 0.85, and 2.00. The supersonic test case efficiently solved the design tests and required very few iterations. A stable and convergent solution process was also demonstrated for the lower-speed test cases using an under-relaxed geometry update procedure. The development and demonstration of the consistent direct-iterative method herein represent the important first steps required for a new research area for the advancement of aerodynamic inverse design methods.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
20

Mascarenhas, Manuel Maria Brás Pereira. "Speed control of induction machine based on direct torque control method." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/9957.

Full text
Abstract:
Dissertation for the degree of Master of Science in Electrical and Computer Engineering
Multi-level converters have received attention in recent years and have been proposed as the best choice in a wide variety of medium-voltage applications. They enable commutation at substantially reduced voltages and an improved harmonic spectrum without a series connection of devices, which is the main advantage of a multi-level structure. The use of multi-level inverters improves the performance of induction machine control. In fact, a three-level (or multi-level) inverter associated with DTC control can further reduce harmonics and torque ripple and provide a higher output voltage level. A variation of DTC-SVM with a three-level neutral-point-clamped inverter is proposed and discussed in the literature. The goal of this project is to study, evaluate and compare the DTC and the proposed DTC-SVM techniques when applied to induction machines through simulations. The simulations were carried out using the MATLAB/Simulink simulation package. Evaluation was based on drive performance, which includes dynamic torque and flux responses, feasibility and the complexity of the systems.
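The classic two-level DTC scheme this work builds on selects one of the inverter's voltage vectors from two hysteresis comparators (flux and torque) and the current stator-flux sector. A simplified sketch of that lookup logic is given below; the sector numbering and the step offsets follow the textbook (Takahashi-style) convention and are not code from the thesis.

```python
def select_voltage_vector(sector, increase_flux, torque_demand):
    """Pick an active vector V1..V6 (or 0 for a zero vector) in six-step DTC.

    sector: 1..6, the 60-degree sector containing the stator flux vector.
    increase_flux: True if the flux hysteresis comparator asks for more flux.
    torque_demand: +1 (more torque), -1 (less torque), 0 (hold -> zero vector).
    """
    if torque_demand == 0:
        return 0  # a zero vector holds torque roughly constant
    # Vectors ahead of the flux (steps +1/+2) raise torque; vectors behind
    # (-1/-2) lower it. Small steps grow the flux, large steps shrink it.
    if increase_flux:
        step = 1 if torque_demand > 0 else -1
    else:
        step = 2 if torque_demand > 0 else -2
    return (sector - 1 + step) % 6 + 1

# In sector 1, demanding both more flux and more torque selects V2.
v = select_voltage_vector(sector=1, increase_flux=True, torque_demand=+1)
```

The DTC-SVM variant studied in the thesis replaces this bang-bang table with a space-vector modulator that synthesizes an arbitrary voltage reference, which is what reduces the torque ripple.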
APA, Harvard, Vancouver, ISO, and other styles
21

Demirtas, Afsin Emrah. "A Comparative Study On Direct Analysis Method And Effective Length Method In One-story Semi-rigid Frames." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614723/index.pdf.

Full text
Abstract:
For steel structures, stability is a very important concept, since many steel structures are governed by stability limit states. Therefore, the stability of a structure should be assessed carefully, considering all parameters that affect it. The most important of these parameters are geometric imperfections, member inelasticity and connection rigidity. Geometric imperfections and member inelasticity are taken into account by the stability method used in the design. At this point, the stability methods gain importance. The Direct Analysis Method, the default stability method in the 2010 AISC Specification, is a new, more transparent and more straightforward method, which captures real structural behavior better than the Effective Length Method. In this thesis, a study has been conducted on semi-rigid steel frames to compare the Direct Analysis Method and the Effective Length Method and to investigate the effect of flexible connections on stability. Four frames are designed for different connection rigidities with the stability methods in the 2010 AISC Specification: the Direct Analysis Method and the Effective Length Method. At the end, conclusions are drawn about the comparison of these two stability methods and the effect of semi-rigid connections on stability.
APA, Harvard, Vancouver, ISO, and other styles
22

Wishart, Stuart Jackson. "A Parallel Solution Adaptive Implementation of the Direct Simulation Monte Carlo Method." University of Sydney. School of Aerospace, Mechanical and Mechatronic Engineering, 2005. http://hdl.handle.net/2123/619.

Full text
Abstract:
This thesis deals with the direct simulation Monte Carlo (DSMC) method of analysing gas flows. The DSMC method was initially proposed as a method for predicting rarefied flows where the Navier-Stokes equations are inaccurate. It has now been extended to near-continuum flows. The method models gas flows using simulation molecules which represent a large number of real molecules in a probabilistic simulation to solve the Boltzmann equation. Molecules are moved through a simulation of physical space in a realistic manner that is directly coupled to physical time such that unsteady flow characteristics are modelled. Intermolecular collisions and molecule-surface collisions are calculated using probabilistic, phenomenological models. The fundamental assumption of the DSMC method is that the molecular movement and collision phases can be decoupled over time periods that are smaller than the mean collision time. Two obstacles to the widespread use of the DSMC method as an engineering tool are in the areas of simulation configuration, which is the configuration of the simulation parameters to provide a valid solution, and the time required to obtain a solution. For complex problems, the simulation will need to be run multiple times, with the simulation configuration being modified between runs to provide an accurate solution for the previous run's results, until the solution converges. This task is time-consuming and requires the user to have a good understanding of the DSMC method. Furthermore, the computational resources required by a DSMC simulation increase rapidly as the simulation approaches the continuum regime. Similarly, the computational requirements of three-dimensional problems are generally two orders of magnitude greater than those of two-dimensional problems. These large computational requirements significantly limit the range of problems that can be practically solved on an engineering workstation or desktop computer.
The first major contribution of this thesis is the development of a DSMC implementation that automatically adapts the simulation. Rather than modifying the simulation configuration between solution runs, this thesis presents the formulation of algorithms that allow the simulation configuration to be automatically adapted during a single run. These adaption algorithms adjust the three main parameters that affect the accuracy of a DSMC simulation, namely the solution grid, the time step and the simulation molecule number density. The second major contribution extends the parallelisation of the DSMC method. The implementation developed in this thesis combines the capability to use a cluster of computers to increase the maximum size of problem that can be solved while simultaneously allowing excess computational resources to decrease the total solution time. Results are presented to verify the accuracy of the underlying DSMC implementation, the utility of the solution adaption algorithms and the efficiency of the parallelisation implementation.
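The decoupled move/collide cycle that the abstract calls the fundamental assumption of DSMC can be sketched in a few lines. The hard-sphere pair collision below conserves momentum and energy for equal-mass particles by scattering the relative velocity isotropically; cell indexing, collision-pair selection statistics (e.g. the no-time-counter scheme) and boundary conditions are deliberately omitted from this sketch.

```python
import math
import random

def move(particles, dt):
    # Ballistic motion, decoupled from collisions over one time step.
    for p in particles:
        p["x"] = [xi + vi * dt for xi, vi in zip(p["x"], p["v"])]

def collide_pair(p1, p2, rng):
    # Hard-sphere model, equal masses: the centre-of-mass velocity and the
    # relative speed are conserved; the post-collision relative velocity
    # direction is sampled uniformly on the unit sphere.
    v_cm = [(a + b) / 2 for a, b in zip(p1["v"], p2["v"])]
    v_rel = math.dist(p1["v"], p2["v"])
    cos_t = 2 * rng.random() - 1
    sin_t = math.sqrt(1 - cos_t * cos_t)
    phi = 2 * math.pi * rng.random()
    d = [sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t]
    p1["v"] = [c + v_rel * di / 2 for c, di in zip(v_cm, d)]
    p2["v"] = [c - v_rel * di / 2 for c, di in zip(v_cm, d)]

rng = random.Random(1)
a = {"x": [0.0, 0.0, 0.0], "v": [1.0, 0.0, 0.0]}
b = {"x": [1.0, 0.0, 0.0], "v": [-1.0, 2.0, 0.0]}
collide_pair(a, b, rng)
momentum = [va + vb for va, vb in zip(a["v"], b["v"])]
energy = sum(v * v for v in a["v"]) + sum(v * v for v in b["v"])
```

The adaption algorithms described in the thesis would sit around this loop, adjusting the cell size, dt and particle weighting so that the decoupling assumption (dt below the mean collision time) stays valid everywhere in the flow.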
APA, Harvard, Vancouver, ISO, and other styles
23

Akcin, Haci Mustafa. "Direct adjustment method on Aalen's additive hazards model for competing risks data." unrestricted, 2008. http://etd.gsu.edu/theses/available/etd-04182008-095207/.

Full text
Abstract:
Thesis (M.S.)--Georgia State University, 2008.
Title from file title page. Xu Zhang, committee chair; Yichuan Zhao, Jiawei Liu, Yu-Sheng Hsu, committee members. Electronic text (51 p.) : digital, PDF file. Description based on contents viewed July 15, 2008. Includes bibliographical references (p. 50-51).
APA, Harvard, Vancouver, ISO, and other styles
24

Nash, Jonathan. "Application of the direct timing method in the ZEUS Central Tracking Detector." Thesis, University of Oxford, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.276830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Utzmann, Jens. "A domain decomposition method for the efficient direct simulation of aeroacoustic problems." [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-38383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

VARGENS, JOSE MUNIZ DA COSTA. "DIRECT EXPONENTIAL SMOOTHING METHOD INCORPORATING SEASONAL COMPONENT MODELLED BY HARRISON HARMONIC APPROACH." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1985. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9479@1.

Full text
Abstract:
Exponential smoothing methods, although originally proposed in the 1960s, remain in wide use today. This work presents a new method for forecasting time series with or without seasonality, based on the theories of exponential smoothing and harmonic analysis. The series is assumed to be composed of a secular trend (constant, linear or quadratic) whose parameters are updated sequentially by the direct smoothing procedure, while the seasonal part is treated separately through harmonic analysis, as suggested by Harrison (1964). The proposed method is thus an alternative to the method of Souza & Epprecht (1983); its main advantage is the initial parameter estimation routine, which in the Souza & Epprecht method produces biased estimators in some cases.
The method of exponential smoothing, although originally proposed during the 1960s, continues in use today. In this thesis we present a new forecasting method for time series with or without seasonality, applying the theory of exponential smoothing and harmonic analysis. It is assumed that the series is composed of a secular trend (constant, linear or quadratic) and a seasonal part. The trend parameters are updated sequentially using the direct smoothing procedure. The seasonal part of the process is treated separately through the technique of harmonic analysis, following Harrison's suggestion (1964). In this way, the proposed method can be viewed as an alternative to that of Souza & Epprecht (1983); its most important advantage is the routine for initial estimation of the parameters, which in the Souza & Epprecht method produces, in some cases, biased estimators.
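The combination described above, a smoothed trend plus a seasonal component represented by Fourier harmonics, can be sketched as follows. For brevity this sketch uses a constant-level trend and a single harmonic; the thesis's method also covers linear and quadratic trends and sequential updating of the harmonic coefficients.

```python
import math

def first_harmonic_profile(series, period):
    # Average pattern over the season, reduced to its first Fourier harmonic
    # (a Harrison-style harmonic representation of seasonality).
    means = [0.0] * period
    counts = [0] * period
    for t, y in enumerate(series):
        means[t % period] += y
        counts[t % period] += 1
    means = [m / c for m, c in zip(means, counts)]
    centre = sum(means) / period
    a = sum((means[j] - centre) * math.cos(2 * math.pi * j / period)
            for j in range(period)) * 2 / period
    b = sum((means[j] - centre) * math.sin(2 * math.pi * j / period)
            for j in range(period)) * 2 / period
    return [a * math.cos(2 * math.pi * t / period)
            + b * math.sin(2 * math.pi * t / period) for t in range(period)]

def forecast_next(series, period, alpha=0.3):
    # Exponentially smooth the deseasonalised series (constant trend),
    # then add the harmonic seasonal factor back for the next step.
    profile = first_harmonic_profile(series, period)
    level = series[0] - profile[0]
    for t, y in enumerate(series):
        level = alpha * (y - profile[t % period]) + (1 - alpha) * level
    return level + profile[len(series) % period]

# Synthetic series: level 10 with a pure first-harmonic seasonal of period 4.
series = [10 + 3 * math.sin(2 * math.pi * t / 4) for t in range(16)]
f = forecast_next(series, period=4)
```

Because the synthetic seasonal pattern here is exactly one harmonic, the one-step forecast recovers the level (10) plus the seasonal factor for the next time index.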
APA, Harvard, Vancouver, ISO, and other styles
27

Seelam, Praveen Kumar Reddy. "Direct Strength Method for Web Crippling of Cold-formed Steel C-sections." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc271893/.

Full text
Abstract:
Web crippling is a form of localized buckling that occurs at points of transverse concentrated loading or at the supports of thin-walled structural members. The theoretical computation of web crippling strength is quite complex, as it involves a large number of factors such as initial imperfections, local yielding at load application and instability of the web. The existing design provision for cold-formed steel C-sections in the North American specification (AISI S100, 2007) for calculating web crippling strength is based on experimental investigation. The objective of this research is to extend the direct strength method to the web crippling strength of cold-formed steel C-sections. ABAQUS is used as the main finite element analysis tool and is used for the elastic buckling analysis. The work was carried out on C-sections under interior-two-flange (ITF) and end-two-flange (ETF) loading cases. A total of 128 sections (58 ITF, 70 ETF) were analyzed. Sections with various heights (3.5 in. to 6 in.) and various lengths (21 in. to 36 in.) were considered. Data collected from tests conducted in the laboratory, together with data from previous research, are used to extend the direct strength method to cold-formed steel sections. A new design expression is proposed for both loading cases, and the corresponding resistance factors are calculated under the AISI S100 (2007) standard.
APA, Harvard, Vancouver, ISO, and other styles
28

Shao, Jianwen. "Direct Back EMF Detection Method for Sensorless Brushless DC (BLDC) Motor Drives." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/35065.

Full text
Abstract:
Brushless dc (BLDC) motors and their drives have been penetrating the market of home appliances, the HVAC industry, and automotive applications in recent years because of their high efficiency, silent operation, compact form, reliability, and low maintenance. Traditionally, BLDC motors are commutated in a six-step pattern with commutation controlled by position sensors. To reduce the cost and complexity of the drive system, sensorless drive is preferred. The existing sensorless control scheme with conventional back EMF sensing based on the motor neutral voltage has certain drawbacks, which limit its applications. In this thesis, a novel back EMF sensing scheme, direct back EMF detection, for sensorless BLDC drives is presented. For this scheme, the motor neutral voltage is not needed to measure the back EMFs. The true back EMF of the floating motor winding can be detected during the off time of the PWM because the terminal voltage of the motor is directly proportional to the phase back EMF during this interval. Also, the back EMF voltage is referenced to ground without any common-mode noise. Therefore, this back EMF sensing method is immune to switching noise and common-mode voltage, and no attenuation or filtering is necessary for back EMF sensing. This unique back EMF sensing method offers performance superior to existing methods that rely on neutral voltage information, providing a much wider motor speed range at low cost. Based on the fundamental concept of direct back EMF detection, improved circuitry for low-speed/low-voltage and high-voltage applications is also proposed in the thesis, which will further expand the applications of sensorless BLDC motor drives. Starting the motor is critical and sometimes difficult for a BLDC sensorless system. A practical start-up tuning procedure for the sensorless system, with the help of a dc tachometer, is described in the thesis.
This procedure achieves maximum acceleration performance during start-up and can be used for many different types of applications. An advanced mixed-signal microcontroller was developed so that the back EMF sensing scheme is embedded in this low-cost 8-bit microcontroller. This device is a true SoC (system-on-chip) product, with a high-throughput micro core, precision analog circuitry, in-system programmable memory and motor control peripherals integrated on a single die. A microcontroller-based sensorless BLDC drive system has been developed as well, which is suitable for various applications, including hard disk drives, fans, pumps, blowers, and home appliances.
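The key idea of the scheme, sampling the floating phase during the PWM off interval and commutating a fixed delay after the back-EMF zero crossing, can be sketched as below. The sampling arrangement and six-step table follow the standard convention for BLDC drives; this is an illustrative sketch, not the thesis's microcontroller firmware.

```python
def zero_crossing_index(samples):
    """Return the first index where the sampled floating-phase back EMF
    changes sign. Samples are taken during PWM off time, so (per the
    direct-detection scheme) they are ground-referenced and need no
    neutral-point reconstruction. Returns None if no crossing occurs."""
    for i in range(1, len(samples)):
        if samples[i - 1] < 0.0 <= samples[i] or samples[i - 1] > 0.0 >= samples[i]:
            return i
    return None

# Six-step pattern: (high phase, low phase, floating phase) per 60-degree step.
COMMUTATION = [("A", "B", "C"), ("A", "C", "B"), ("B", "C", "A"),
               ("B", "A", "C"), ("C", "A", "B"), ("C", "B", "A")]

emf = [-3.0, -2.0, -1.0, 0.5, 1.5]      # rising back EMF on the floating phase
i = zero_crossing_index(emf)
# In a real drive, commutation to the next step of COMMUTATION follows
# the detected crossing by 30 electrical degrees.
```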
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
29

Yu, Zhao. "A Novel Lattice Boltzmann Method for Direct Numerical Simulation of Multiphase Flows." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1259466323.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Shui, Pei. "Novel immersed boundary method for direct numerical simulations of solid-fluid flows." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10050.

Full text
Abstract:
Solid-fluid two-phase flows, where the solid volume fraction is large either by geometry or by population (as in slurry flows), are ubiquitous in nature and industry. In such flows the interaction between the fluid and the suspended solids is too strongly coupled, rendering the assumption of one-way interaction (the flow influences particle motion but not vice versa) invalid and inaccurate. Most commercial flow solvers do not account for two-way interactions between fluid and immersed solids. The current state of the art is restricted to two-way coupling between flow and spherical particles of very small diameters, such that the ratio of particle diameter to the characteristic flow domain length scale is less than 0.01. These solvers are not suitable for several industrial slurry flow problems, such as those of hydrates (crucial to the oil-gas industry), the rheology of slurries, or flows in highly constrained geometries like microchannels or sessile drops laden with micro-PIV beads at concentrations significant enough for two-way interactions to become prominent. It is therefore necessary to develop direct numerical simulation flow solvers employing rigorous two-way coupling in order to accurately characterise the flow profiles between large immersed solids and fluid. Such a solution must take into account the full 3D governing equations of flow (Navier-Stokes and continuity equations), solid translation (Newton's second law) and solid rotation (equation of angular momentum) while simultaneously enabling interaction at every time step between the forces in the fluid and solid domains. This thesis concerns the development and rigorous validation of a 3D solid-fluid solver based on a novel variant of the immersed-boundary method (IBM). The solver takes into account full two-way fluid-solid interaction with 6 degrees of freedom (6DOF).
The solid motion solver is seamlessly integrated into the Gerris flow solver, hence called the Gerris Immersed Solid Solver (GISS). The IBM developed treats both fluid and solid in the manner of a "fluid fraction" such that any number of immersed solids of arbitrary geometry can be realised. Our IBM method also allows transient local mesh adaption in the fluid domain around the moving solid boundary, thereby avoiding problems caused by mesh skewness (as seen in common mesh-adaption algorithms) and significantly improving simulation efficiency. The solver is rigorously validated at levels of increasing complexity against theory and experiment at low to moderate flow Reynolds numbers. At low Reynolds numbers (Re << 1) these include: the drag force and terminal settling velocities of spherical bodies (validating translational degrees of freedom), Jeffery's orbits tracked by elliptical solids under shear flow (validating rotational and translational degrees of freedom) and the hydrodynamic interaction between a solid and a wall. Studies are also carried out to understand the hydrodynamic interaction between multiple solid bodies under shear flow. It is found that the initial distance between bodies is crucial to the nature of the hydrodynamic interaction between them: at a distance smaller than a critical value the solid bodies cluster together (hydrodynamic attraction), and at a distance greater than this value the solid bodies travel away from each other (hydrodynamic repulsion). At moderately high flow rates (Re ~ O(100)), the solver is validated against the migratory motion of an eccentrically placed solid sphere in Poiseuille flow. Under inviscid conditions (at very high Reynolds number) the solver is validated against the chaotic motion of an asymmetric solid body. These validations not only give us confidence but also demonstrate the versatility of the GISS in tackling complex solid-fluid flows.
This work demonstrates the first important step towards ultra-high-resolution direct numerical simulations of solid-fluid flows. The GISS will be available as open-source code from February 2015.
APA, Harvard, Vancouver, ISO, and other styles
31

Matos, Norman A. Lopez. "Monte Carlo modeling of direct X-ray imaging systems /." Online version of thesis, 2008. http://hdl.handle.net/1850/5745.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Ibrahim, Omar Moh'd Musa. "DEVELOPMENT AND COMPARISON OF RISK-ADJUSTED MODELS TO BENCHMARK ANTIBIOTIC USE IN THE UNIVERSITY HEALTHSYSTEM CONSORTIUM HOSPITALS." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2871.

Full text
Abstract:
Background. Infectious diseases societies recommend that hospitals risk-adjust their antimicrobial use before comparing it to their peers, a process called benchmarking. The purpose of this investigation is to apply and compare 3 risk-adjustment procedures for benchmarking hospital antibacterial consumption (AbC). Two standardization of rates procedures, direct and indirect standardization, are compared with one another as well as with regression modeling. Methods. Total aggregate adult AbC for 52 systemic antibacterial agents was measured in 70 hospitals that subscribed to the University HealthSystem Consortium Clinical Resource Manager database in 2009 and expressed as days of therapy (DOTs) per either 1000 patients days (PDs) or 1000 discharges. The two AbC rates served the role of the outcome while several known risk factors for AbC served the role of potential predictor variables in the linear regression models. Selection criteria were applied to select a model that represented the first rate (Model I) and another that represented the second (Model II), respectively, and outliers were identified. Adult discharges in each hospital were then stratified into 35 clinical service lines based upon their Medicare Severity-Diagnosis Related Group (MS-DRG) assignment. Direct and indirect standardization were applied to this set and the expected-to-observed (E/O) and observed-to-expected (O/E) ratios, respectively, for AbC were determined. The agreement of the different methods in ranking hospitals according to their risk-adjusted rates and in identifying outliers was determined. Results. The mean total AbC rate was 821.2 DOTs/1000 PDs or 4487.6 DOTs/1000 discharges. Model I explained 31% of the variability in AbC measured in DOTs/1000 PDs while Model II explained 64% of the variability in AbC measured in DOTs/1000 discharges. The E/O ratios ranged from 0.76-1.44 while the O/E ratios ranged from 0.73-1.45. 
The comparison of the risk-adjustment methods revealed a very good agreement between the two regression models as well as between the two standardization methods whereas the agreement of Model II with either standardization method was moderate. Conclusion. Standardization provides a viable alternative to regression for benchmarking hospital AbC rates. Direct standardization appears to be especially useful for benchmarking purposes since it allows the direct comparison of risk-adjusted rates.
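The two standardization procedures compared in this abstract can be made concrete with toy numbers: two service lines stand in for the 35 MS-DRG service lines, and all rates are invented for illustration. Indirect standardization yields an observed-to-expected (O/E) ratio for the hospital; direct standardization applies the hospital's own stratum-specific rates to the reference case mix.

```python
# Stratum-level data: patient-days and antibacterial days of therapy (DOTs).
ref_days  = {"medicine": 600, "surgery": 400}   # reference population case mix
ref_rate  = {"medicine": 1.0, "surgery": 0.5}   # reference DOTs per patient-day
hosp_days = {"medicine": 100, "surgery": 300}   # one hospital's case mix
hosp_dots = {"medicine": 120, "surgery": 150}   # that hospital's observed DOTs

# Indirect standardization: apply the reference rates to the hospital's own
# case mix to get expected use, then form observed/expected (O/E).
expected = sum(hosp_days[s] * ref_rate[s] for s in ref_days)
observed = sum(hosp_dots.values())
o_over_e = observed / expected          # > 1 means more use than expected

# Direct standardization: apply the hospital's stratum-specific rates to the
# reference case mix, and compare with the reference population's own total.
hosp_rate = {s: hosp_dots[s] / hosp_days[s] for s in ref_days}
direct_std = sum(ref_days[s] * hosp_rate[s] for s in ref_days)
ref_total = sum(ref_days[s] * ref_rate[s] for s in ref_days)
direct_ratio = direct_std / ref_total
```

Note how the two adjustments answer different questions: O/E asks "given this hospital's patients, did it use more than a typical hospital would?", while the direct ratio asks "if this hospital treated the reference case mix, how would its use compare?". Here the hospital is above 1 on both measures because its medicine-line rate exceeds the reference rate.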
APA, Harvard, Vancouver, ISO, and other styles
33

Klicker, Laura. "A Method for Standardization within the Payload Interface Definition of a Service-Oriented Spacecraft using a Modified Interface Control Document​." Thesis, KTH, Rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217971.

Full text
Abstract:
With a big picture view of increasing the accessibility of space, standardization is applied within a service-oriented space program. The development of standardized spacecraft interfaces for numerous and varied payloads is examined through the lens of the creation of an Interface Control Document (ICD) within the Peregrine Lunar Lander project of Astrobotic Technologies, Inc. The procedure is simple, transparent, and adaptable; its applicability to other similar projects is assessed.
For increased access to space, there is a need for standardization to improve service. The development of standardized spacecraft interfaces for numerous and varied payloads has been examined through a document for interface control (ICD) within the Peregrine Lunar Lander project of Astrobotic Technologies, Inc. The procedure is simple, transparent and adaptable; its applicability to other similar projects has been assessed.
APA, Harvard, Vancouver, ISO, and other styles
34

POONDRU, SHIRDISH. "A NEW DIRECT MATRIX INVERSION METHOD FOR ECONOMICAL AND MEMORY EFFICIENT NUMERICAL SOLUTIONS." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1060976742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Huang, Shuo. "A New Multidomain Approach and Fast Direct Solver for the Boundary Element Method." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1505125721346283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Gurav, Hardik. "Experimental Validation of the Global Transmissibility (Direct Method) Approach to Transfer Path Analysis." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563273082454307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Eastman, Michael Wayne. "An investigation into direct sparse matrix solution schemes in the finite element method." Master's thesis, University of Cape Town, 1987. http://hdl.handle.net/11427/7603.

Full text
Abstract:
Includes bibliographical references.
The application of the finite element method invariably involves the solution of large systems of sparse linear algebraic equations. The solution of these systems often represents a significant or even dominant component of the total solution time. Various sparse matrix techniques and strategies have been developed to reduce the time and cost of solving these equations. These techniques exploit both the zero-nonzero structure of the matrix problem and the manner in which the actual numerical components of the problem are computed. This thesis describes some of the direct methods, including the banded, skyline (or profile), wavefront and hypermatrix schemes. The relative merits of each of these schemes are also indicated with respect to the number of arithmetical operations, data structure organization, secondary storage requirements and implementation strategy. The second section of this thesis discusses the implementation of an equation solution package for application in the finite element method. Initially a partitioning scheme for a wavefront solver was investigated, but due to problems encountered and the increasing complexity of the code, it was decided to use an alternative method. A Cholesky decomposition method with a hypermatrix data storage scheme was then investigated and developed. The equation solution method was developed using a virtual paging scheme as implemented by the DAS package, and a module of general hypermatrix management routines. Finally, the package was implemented and tested in the NEW NOSTRUM development at the University of Cape Town. Suggestions for further developments are briefly discussed.
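The Cholesky step at the core of the solver described above factors a symmetric positive-definite stiffness matrix as A = L·Lᵀ and then solves by forward and back substitution. A minimal dense sketch follows; the thesis stores L in hypermatrix blocks paged to secondary storage and skips zero blocks, which is omitted here.

```python
import math

def cholesky(A):
    # Factor a symmetric positive-definite matrix: A = L * L^T.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve(A, b):
    # Forward substitution (L y = b) then back substitution (L^T x = y).
    L = cholesky(A)
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

x = solve([[4.0, 2.0], [2.0, 3.0]], [10.0, 8.0])
```

The sparse schemes surveyed in the thesis (banded, skyline, wavefront, hypermatrix) all exploit the fact that fill-in during this factorization is confined to a predictable envelope, so only part of L need ever be stored or computed.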
APA, Harvard, Vancouver, ISO, and other styles
38

Li, Shaopeng. "Development of algorithms for the direct multi-configuration self-consistent field (MCSCF) method." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/6945.

Full text
Abstract:
In order to improve the performance of the current parallelized direct multi-configuration self-consistent field (MCSCF) implementations of the program package Gaussian [42], consisting of the complete active space (CAS) SCF method [43] and the restricted active space (RAS) SCF method [44], this thesis introduces a matrix multiplication scheme as part of the CI eigenvalue evaluation of these methods. Thus highly optimized linear algebra routines, which are able to use data in a sequential and predictable way, can be used in our method, resulting in a much better performance overall than the current methods. The side effect of this matrix multiplication scheme is that it requires some extra memory to store the additional intermediate matrices. Several chemical systems are used to demonstrate that the new CAS and RAS methods are faster than the current CAS and RAS methods respectively. This thesis is structured into four chapters. Chapter One is the general introduction, which describes the background of the CASSCF/RASSCF methods. Then the efficiency of the current CASSCF/RASSCF code is discussed, which serves as the motivation for this thesis, followed by a brief introduction to our method. Chapter Two describes applying the matrix multiplication scheme to accelerate the current direct CASSCF method, by reorganizing the summation order in the equation that generates non-zero Hamiltonian matrix elements. It is demonstrated that the new method can perform much faster than the current CASSCF method by carrying out single point energy calculations on pyracylene and pyrene molecules, and geometry optimization calculations on anthracene+ / phenanthrene+ molecules. However, in the RASSCF method, because an arbitrary number of doubly-occupied or unoccupied orbitals are introduced into the CASSCF reference space, many new orbital integral cases arise. Some cases are suitable for the matrix multiplication scheme, while others are not. 
Chapter Three applies the scheme to those suitable integral cases, which are also the most time-consuming cases for the RASSCF calculation. The coronene molecule, with different sizes of orbital active space, has been used to demonstrate that the new RASSCF method can perform significantly faster than the current Gaussian method. Chapter Four describes an attempt to modify the other integral cases, based on a review of the method developed by Saunders and Van Lenthe [95]. Calculations on the coronene molecule are used again to test whether this implementation can further improve the performance of the RASSCF method developed in Chapter Three.
APA, Harvard, Vancouver, ISO, and other styles
39

Hwang, Wonjoong. "Standardization and Application of Spectrophotometric Method for Reductive Capacity Measurement of Nanomaterials." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-08-8204.

Full text
Abstract:
In this study, a reproducible spectrophotometric method was established and applied to measure the reductive capacity of various nanomaterials. Reductive capacity had been implicated in the toxicity of nanomaterials, but a standardized measurement method had been lacking until this work. The reductive capacity of nanoparticles was defined as the mass of iron reduced from Fe3+ to Fe2+ per unit mass of nanoparticles, in an aqueous solution that initially contained ferric ions. To measure the reductive capacity, the nanomaterials were incubated in a ferric aqueous solution for 16 hours at 37 °C, and the reductive capacity of the nanoparticles was determined by measuring the amount of Fe3+ reduced to Fe2+ using a spectrophotometric method. The reagents 1,10-phenanthroline and hydroquinone were used as a Fe2+ indicator and a reducing agent, respectively, for the assay. To standardize this method, various experiments were carried out. For the initial ferric solution, various Fe salts were tested, and iron(III) sulfate was chosen as the Fe salt for the standard method. The measured reductive capacity of nanoparticles was found to vary with the measurement conditions: it increased with increasing Fe/nanoparticle ratio, and it increased with incubation time, leveling off after 8 hours of incubation. For hydrophobic materials, the surfactant Tween-20 was added so that the particles could be wetted and suspended in the ferric aqueous solution. After incubation, the particles were removed from the solution by either filtration or centrifugation before applying the spectrophotometric method. In addition, the optimal pH and the minimum time to reach the ultimate color intensity were also found.
Carbon-based nanomaterials, a standard reference material, and metal oxides were measured for their reductive capacities with this method and characterized by transmission electron microscopy (TEM), energy-dispersive x-ray spectroscopy (EDS), x-ray diffraction (XRD), BET measurement, and Raman spectroscopy. For some nanoparticles, the reductive capacity was measured both in the pristine form and after treatment by oxidization or grinding. All carbon-based nanomaterials except pristine C60 showed a significant reductive capacity, while the reductive capacity of the metal oxides was very low. It was also found that the reductive capacity can be increased by surface functional groups or structural defects and decreased by oxidization or heating (graphitization). The reductive capacity of a material can play an important role in its toxicology through synergistic toxic effects in the presence of transition metal ions via the Fenton reaction. Moreover, even without transition metal ions, the ability of a material to donate electrons can be involved in toxicity mechanisms via the generation of reactive oxygen species.
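The Fe2+ quantification step described in this abstract follows the Beer-Lambert law. As a hedged illustration only (invented numbers, not the thesis's data or calibration), the reductive capacity could be computed from a measured absorbance like this, assuming the commonly cited molar absorptivity of the Fe(II)-phenanthroline complex near 510 nm:

```python
# Hypothetical worked example of the Fe(II) quantification step above.
# All numbers are invented for illustration. Beer-Lambert: A = eps * c * l.

EPS_FE_PHEN = 1.11e4   # L mol^-1 cm^-1, commonly cited for the
                       # Fe(II)-phenanthroline complex near 510 nm
PATH_CM = 1.0          # cuvette path length, cm
M_FE = 55.845          # molar mass of iron, g mol^-1

def reductive_capacity(absorbance, volume_l, nanoparticle_mass_g):
    """Mass of Fe(III) reduced to Fe(II) per unit mass of particles (g/g)."""
    conc_fe2 = absorbance / (EPS_FE_PHEN * PATH_CM)  # mol/L of Fe2+
    mass_fe2 = conc_fe2 * volume_l * M_FE            # g of iron reduced
    return mass_fe2 / nanoparticle_mass_g

# e.g. A = 0.555 measured in 10 mL of solution for 1 mg of particles:
rc = reductive_capacity(0.555, 0.010, 0.001)
```

The helper function and its parameter values are assumptions for this sketch; the thesis's actual calibration procedure may differ.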
APA, Harvard, Vancouver, ISO, and other styles
40

Van, Geest Jordana. "Bioaccumulation of sediment-associated contaminants in freshwater organisms: Development and standardization of a laboratory method." Thesis, 2010. http://hdl.handle.net/10214/2270.

Full text
Abstract:
This thesis describes studies and research conducted as part of the development, standardization, and validation of a new laboratory protocol for measuring the bioaccumulation of sediment-associated contaminants in freshwater organisms. The test species used in this method are the oligochaete Lumbriculus variegatus, the mayfly nymph Hexagenia spp., and the juvenile fathead minnow Pimephales promelas. Bioaccumulation methods in the literature were critically reviewed to properly guide the development and standardization of methods. This enabled data gaps to be addressed and the conditions and exposure techniques of the new method to be standardized, properly justified, and based on experimental evidence. Method development included the investigation of the effect of the density of organisms on bioaccumulation in the three test species. The importance of standardizing loading density to total organic carbon (TOC) in sediment was demonstrated, as was the appropriateness of using a ratio of TOC to organism dry weight of 27:1 as a standard loading density for the different test species. To validate the new method and assess the relative effectiveness of the three test species for accumulating different contaminants, a variety of field-contaminated sediments were tested, representing a range of contaminants, levels of contamination, and physical properties of sediment. It was observed that differences in bioaccumulation between the three species may, but do not always, exist, and can vary with contaminant and sediment type. It was also demonstrated that estimates of bioaccumulation, such as biota-sediment accumulation factors (BSAFs), can be species- and site-specific, supporting the need and use of standardized bioaccumulation methods and test species to facilitate comparisons across sites or over time. Comparisons of laboratory- and field-based estimates of bioaccumulation further validated the new laboratory method. 
Good agreement was observed between laboratory and field estimates for fish, while bioaccumulation was higher in laboratory-exposed invertebrates than in mussels caged in situ. The laboratory method generally overestimated the relative bioavailability of contaminants compared to the field, but provides a conservative estimate of bioaccumulation. A kinetic study investigated the uptake and elimination of PCBs in the three test species and demonstrated that a 28-d test duration was sufficient for both invertebrate species to reach steady-state concentrations. There was conflicting evidence as to whether steady-state concentrations were truly reached in the fish, and uncertainty remains about the appropriateness of a 28-d test for these organisms; additional testing is necessary.
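The biota-sediment accumulation factor (BSAF) mentioned above is conventionally computed as the lipid-normalized tissue concentration divided by the organic-carbon-normalized sediment concentration. A minimal sketch with invented numbers; the second helper, also hypothetical, applies the 27:1 TOC-to-organism-dry-weight loading ratio from the abstract:

```python
# Sketch of the conventional BSAF calculation and the 27:1 TOC-to-organism
# loading ratio mentioned above. All concentrations and fractions are
# invented for illustration.

def bsaf(c_tissue, f_lipid, c_sediment, f_oc):
    """BSAF = lipid-normalized tissue conc. / OC-normalized sediment conc.

    c_tissue and c_sediment in the same units (e.g. ng/g dry weight);
    f_lipid and f_oc as mass fractions.
    """
    return (c_tissue / f_lipid) / (c_sediment / f_oc)

def max_organism_dry_weight(sediment_mass_g, f_toc, ratio=27.0):
    """Organism dry weight (g) allowed for a given mass of sediment TOC."""
    return sediment_mass_g * f_toc / ratio

value = bsaf(c_tissue=120.0, f_lipid=0.02, c_sediment=300.0, f_oc=0.03)
```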
APA, Harvard, Vancouver, ISO, and other styles
41

Hsueh, Sheng-Wen, and 薛勝文. "Improvements of Direct Transform Method in Computerized Tomography." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/80525871071353890094.

Full text
Abstract:
Master's thesis
Tatung University
Graduate Institute of Information Engineering
89
In this thesis, two projection-slice theorems, based on the cas-cas and Hartley transforms, are proposed, together with reconstruction methods that apply them. The transform commonly used in transform-based computerized tomography (CT) is the Fourier transform. Two direct transform methods for this type of imaging system, the direct Fourier method (DFM) and the direct cosine method (DCM), are both based on the projection-slice theorem, which, for the corresponding transform, is referred to as the Fourier projection-slice or the cosine projection-slice theorem, respectively. The computational complexity of the Fourier transform is higher than that of the cas-cas transform, because the former requires complex arithmetic while the latter is done with real computation. In addition, the reference center of the cosine transform, located at the lower-left corner of the object, differs from that of the generally used methods, which is fixed at the center of the object. The direct cas-cas method has the advantages of both the direct Fourier method and the direct cosine method. Furthermore, the Hartley transform inherits the properties of the Fourier transform; in most applications the algorithms are unchanged except that the computation changes from complex to real. From these points of view, the cas-cas and Hartley transforms are a better choice for CT reconstruction. In this thesis, the cas-cas projection-slice (CCPS) theorem is derived first. The CCPS theorem states that the summation of the one-dimensional sine-like transform of one projection and the one-dimensional cosine-like transform of another projection is a slice of the two-dimensional cas-cas transform of the projected object. The theorem is the basis of applying the cas-cas transform to tomogram reconstruction. Methods analogous to DFM and DCM are also proposed and called the direct cas-cas method (DCCM). The Hartley transform projection-slice (HPS) theorem is proposed next.
The HPS theorem states that the one-dimensional Hartley transform of the projection is a slice of the two-dimensional Hartley transform of the projected object. The theorem is the basis of applying the Hartley transform to tomogram reconstruction. Methods analogous to DFM and DCM are also proposed and called the direct Hartley method (DHM).
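The Fourier projection-slice theorem that DFM builds on can be verified numerically for a zero-degree projection. This small NumPy check is a generic illustration, not code from the thesis:

```python
import numpy as np

# Numerical check of the Fourier projection-slice theorem for a
# zero-degree projection: summing the image along y and taking a 1-D FFT
# reproduces the ky = 0 row of the 2-D FFT. (The thesis's cas-cas and
# Hartley variants replace this complex DFT with real transforms.)

rng = np.random.default_rng(0)
image = rng.random((64, 64))

projection = image.sum(axis=0)          # line integrals along the y-axis
slice_1d = np.fft.fft(projection)       # 1-D transform of the projection
central_row = np.fft.fft2(image)[0, :]  # ky = 0 slice of the 2-D transform

assert np.allclose(slice_1d, central_row)
```

The same identity holds for the other axis, which is the zero-degree case of the rotation used in a full direct-transform reconstruction.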
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Yun-Ju, and 陳韻如. "New Direct Updating Method in Structural Model Updating." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/19104640078447407717.

Full text
Abstract:
Doctoral dissertation
National Taiwan University
Graduate Institute of Civil Engineering
97
Discrepancies always exist between the dynamic properties predicted by a finite element model and those measured directly from the structure. In this study, a direct updating method based on orthogonality constraints is first proposed for updating the mass and stiffness matrices of the structure using a single set of modal data. This method hinges on replacing the modal vector of concern by the modal matrix in computing the correction matrices, to resolve the problem of insufficient known conditions. The method is then extended to update the structural model for each of the first few sets of modal data that are experimentally available. Two kinds of updating procedures are proposed: one conducts the model updating in a mode-by-mode manner and the other in a simultaneous manner. The numerical studies demonstrated that for shear-type buildings, cantilever beams, continuous bridges, and domes, the natural frequencies predicted by the updated model agree well with the measured ones for those modes that are experimentally available, while the remaining modes remain basically untouched. Finally, a comparative study is performed between the proposed direct model updating method and the improved inverse eigensensitivity method (IIEM) proposed by Lin et al. (1995) for updating the mass and stiffness matrices of a structure based on measured modal data. The comparison demonstrates that the direct updating method presented herein is superior and more suitable for engineering applications. Since the proposed approach is simple, accurate, and robust, it should be favored by engineers for practical applications.
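As a generic illustration of the quantities such a method works with (not the thesis's updating algorithm), the generalized eigenproblem K phi = lambda M phi and the mass-orthogonality constraint the correction matrices must preserve can be set up for a 2-DOF shear building; all stiffness and mass values below are invented:

```python
import numpy as np

# Generic 2-DOF shear-building eigenproblem: K phi = lambda M phi, with
# mass-normalized mode shapes satisfying Phi^T M Phi = I (the
# orthogonality constraint). Values are invented for illustration.

k1, k2 = 2000.0, 1500.0      # story stiffnesses (N/m), assumed
m1, m2 = 2.0, 1.5            # floor masses (kg), assumed
K = np.array([[k1 + k2, -k2], [-k2, k2]])
M = np.diag([m1, m2])

# Reduce to a standard symmetric eigenproblem via the Cholesky factor of M.
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
A = Linv @ K @ Linv.T
lam, Q = np.linalg.eigh(A)
Phi = Linv.T @ Q                       # mass-normalized mode shapes

freqs_hz = np.sqrt(lam) / (2 * np.pi)  # natural frequencies
assert np.allclose(Phi.T @ M @ Phi, np.eye(2))  # orthogonality holds
```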
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Jau-Ming, and 陳昭銘. "Study on the Improving of Direct Design Method." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/74659852309760618325.

Full text
Abstract:
Master's thesis
National Taiwan University
Department of Naval Architecture
82
The direct design method is a method for designing hull forms by inputting related form parameters such as the main dimensions, displacement, section area curve, and longitudinal position of the center of buoyancy. A hull form surface is defined by an explicit polynomial, so transverse sectional curves at any position can be computed quickly. By transforming the preceding form parameters with a transformation function, a distortion effect becomes available. Beyond the requirements of merchant ship design, the direct design method can also represent the transverse sectional curves of a high-speed craft by altering the explicit polynomial expression. The result of the direct design method serves as the prototype of the objective hull form. In the region between about station 1/2 and station 9 1/2, the sectional curves generated there are used as frames (master curves). Parametric cubic Bezier curves are adopted as longitudinal curves fitted through the master curves; they are then extended to the stem and stern, and the hull form is created by this hybrid method.
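The parametric cubic Bezier curves used as longitudinal curves can be evaluated with de Casteljau's algorithm. A minimal sketch with invented control points (not data from the thesis):

```python
# De Casteljau evaluation of a cubic Bezier curve, the curve type used
# for the longitudinal curves above. Control points are invented.

def bezier3(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    lerp = lambda a, b, s: tuple(ai + s * (bi - ai) for ai, bi in zip(a, b))
    q0, q1, q2 = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    r0, r1 = lerp(q0, q1, t), lerp(q1, q2, t)
    return lerp(r0, r1, t)

# Endpoints are interpolated: t = 0 gives p0 and t = 1 gives p3.
pt = bezier3((0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0), 0.5)
```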
APA, Harvard, Vancouver, ISO, and other styles
44

Lue, Chung-Chia, and 呂崇嘉. "Applying direct current resistivity method to hydrogeological problems." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/59715410023191407706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Negahdaripour, Shahriar, and Berthold K. P. Horn. "A Direct Method for Locating the Focus of Expansion." 1987. http://hdl.handle.net/1721.1/6457.

Full text
Abstract:
We address the problem of recovering the motion of a monocular observer relative to a rigid scene. We do not make any assumptions about the shapes of the surfaces in the scene, nor do we use estimates of the optical flow or point correspondences. Instead, we exploit the spatial gradient and the time rate of change of brightness over the whole image and explicitly impose the constraint that the surface of an object in the scene must be in front of the camera for it to be imaged.
APA, Harvard, Vancouver, ISO, and other styles
46

Liao, Hsiong-Ming, and 廖雄明. "Direct Stiffness Method for Structural Analysis Using Computer-Assisted Tools." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/44920627338040869265.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Civil Engineering
99
This thesis investigates the direct stiffness method; its content includes an introduction to the principle and derivation of the direct stiffness method, instructions for its computerized steps, and demonstration examples with analysis and comparison. The method uses the stiffness matrix directly as its computational tool. In this study, CAL-90, SSTAN, and STADD.pro were used to carry out the demonstration examples, and the results were compared. The results are summarized as follows: 1. The necessary steps of the direct stiffness method are derived from its principle, and the order of its computerization is summarized. 2. 2-D analyses are computed and compared across the three software packages CAL-90, SSTAN, and STADD.pro. 3. 3-D analyses are implemented and compared across the same three packages. 4. The 2-D analysis of CAL-90 is compared with the 3-D analyses of the SSTAN and STADD.pro packages. 5. The 3-D analysis of CAL-90 is compared with the 3-D analyses of the SSTAN and STADD.pro packages.
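The element-assembly core of the direct stiffness method can be sketched for the simplest case, two axial springs in series; the stiffness and load values below are invented for illustration:

```python
import numpy as np

# Minimal direct stiffness method sketch: assemble the global stiffness
# matrix of two axial springs in series (fixed at node 0, loaded at
# node 2) and solve K u = f for the free degrees of freedom.
# Stiffness and load values are invented.

def assemble(springs, n_nodes):
    """springs: list of (node_i, node_j, k). Returns the global K."""
    K = np.zeros((n_nodes, n_nodes))
    for i, j, k in springs:
        ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element matrix
        for a, p in enumerate((i, j)):
            for b, q in enumerate((i, j)):
                K[p, q] += ke[a, b]                    # scatter into K
    return K

K = assemble([(0, 1, 100.0), (1, 2, 200.0)], n_nodes=3)
free = [1, 2]                 # node 0 is fixed
f = np.array([0.0, 10.0])     # 10 N applied at node 2
u = np.linalg.solve(K[np.ix_(free, free)], f)
```

For springs in series the full 10 N passes through both elements, so u = [10/100, 10/100 + 10/200] = [0.1, 0.15].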
APA, Harvard, Vancouver, ISO, and other styles
47

Song-Chin, Chi, and 季松青. "Determination of Hydraulic Parameter by Direct Current Resistivity Method." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/33429793765955739873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Kan-Ying, and 李侃穎. "Application of Direct and Inverse Calculation Method in Tunnelling." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/65s4ns.

Full text
Abstract:
Master's thesis
Chung Hua University
Master's Program, Department of Civil Engineering
101
The purpose of this research is to use an explicit analysis of the convergence confinement method, implementing the direct calculation method and the inverse calculation method in an Excel spreadsheet, to calculate the elastic modulus and lateral pressure coefficient of the rock mass around the tunnels of the North Link Railway before excavation, and the convergence displacement at the balance point of a cross section after excavation, in order to provide references for future tunnel engineering design. The approaches considered in this research are: (1) obtain the data measured by instruments after excavating the tunnel, obtain predicted values by regression analysis, design the transform curve and its function on the longitudinal section, and establish the simulated confinement loss curve of the forward effect when digging the tunnels; (2) develop the interaction equation between the support characteristic curve and the ground reaction curve under both hydrostatic and non-hydrostatic stress field conditions; (3) calculate the plasticity radius at which the support characteristic curve and the ground reaction curve balance, using Newton's recursive method. The results obtained in this research are: (1) the confinement loss values of the tunnels of the North Link Railway; (2) further validation of the interaction equation between the support characteristic curve and the ground reaction curve under both hydrostatic and non-hydrostatic stress field conditions; (3) the maximum convergence displacement of the tunnel determined by the direct calculation method, and the elastic modulus and lateral pressure coefficient of the rock mass around the tunnels calculated by the inverse calculation method.
Keywords: Non-Hydrostatic Stress Field, Convergence Confinement Method, Confinement Loss, Direct Calculation Method, Inverse Calculation Method, Regression Analysis, Newton's Recursive Method
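Newton's recursive method used above to balance the support characteristic curve (SCC) against the ground reaction curve (GRC) can be sketched generically; the exponential GRC, linear SCC, and all constants below are illustrative assumptions, not the thesis's equations:

```python
import math

# Hedged sketch of Newton iteration for the GRC/SCC equilibrium point.
# Curve shapes and constants are invented for illustration.

def newton(g, dg, u0, tol=1e-10, max_iter=50):
    """Solve g(u) = 0 by Newton iteration starting from u0."""
    u = u0
    for _ in range(max_iter):
        step = g(u) / dg(u)
        u -= step
        if abs(step) < tol:
            return u
    raise RuntimeError("Newton iteration did not converge")

p0, a = 10.0, 50.0           # in-situ pressure (MPa) and decay rate, assumed
ks, u_inst = 400.0, 0.005    # support stiffness and installation displacement

grc = lambda u: p0 * math.exp(-a * u)   # ground reaction curve
scc = lambda u: ks * (u - u_inst)       # support characteristic curve (u > u_inst)
g = lambda u: grc(u) - scc(u)
dg = lambda u: -a * p0 * math.exp(-a * u) - ks

u_eq = newton(g, dg, u0=0.01)  # convergence displacement at the balance point
```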
APA, Harvard, Vancouver, ISO, and other styles
49

Sheng-Xiang, Wang, and 王聖翔. "A direct method for calculating Greeks under some L." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/26824t.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Statistics
102
Empirical evidence has shown that some Levy processes provide a better model fit for market option prices than the Black-Scholes model. Greeks are price sensitivities of financial derivatives and are essential for hedging and risk management. Calculating the Greeks under a Levy process is a challenging task. To overcome this difficulty, this paper proposes a direct method for calculating the Greeks. Briefly speaking, our method identifies conditions for switching the order of integration and differentiation, and uses the differentiation of an indicator function via the Dirac delta function. Explicit examples for calculating deltas, vegas, and gammas of European and Asian options under Merton's model and the variance-gamma process are given. Numerical results confirm that the proposed method outperforms existing methods in terms of unbiasedness, efficiency, and time.
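The interchange of differentiation and expectation that defines such direct methods can be illustrated in the simplest no-jump special case: a pathwise Monte Carlo delta for a European call under plain geometric Brownian motion, checked against the closed form. All parameters are invented:

```python
import math
import numpy as np

# Illustrative pathwise Monte Carlo delta under Black-Scholes (the
# no-jump special case of the Levy models above). The derivative is
# moved inside the expectation:
#   d/dS0 E[e^{-rT} (S_T - K)^+] = E[e^{-rT} 1{S_T > K} S_T / S0].

s0, k, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0
rng = np.random.default_rng(42)
n = 400_000

z = rng.standard_normal(n)
st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
delta_mc = math.exp(-r * t) * np.mean((st > k) * st / s0)

# Closed-form Black-Scholes delta N(d1) for comparison.
d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
delta_bs = 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))
```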
APA, Harvard, Vancouver, ISO, and other styles
50

Ke, Cherng-Jyh, and 柯承志. "A Novel Method for Direct Synthesis of Fluorescence Gold Nanoclusters." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/96067167509427392048.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Biomedical Engineering
96
In this study, a novel method was developed for the synthesis of fluorescent gold nanoclusters. The fluorescent characteristics of these gold nanoclusters were determined from their absorption and photoluminescence spectra using a fluorospectrometer. The results suggest that these fluorescent gold nanoclusters may have the potential to serve as an alternative probe to the traditional bio-toxic CdSe quantum dots in labeling, targeting, and imaging applications in biomedicine.
APA, Harvard, Vancouver, ISO, and other styles