
Dissertations / Theses on the topic 'Refinement method'



Consult the top 50 dissertations / theses for your research on the topic 'Refinement method.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Grinspun, Eitan (advisor: Peter Schröder). "The basis refinement method." Diss., Pasadena, Calif.: California Institute of Technology, 2003. http://resolver.caltech.edu/CaltechETD:etd-05312003-133558.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Grundy, Jim. "A method of program refinement." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Antepara, Zambrano Oscar Luis. "Adaptive mesh refinement method for CFD applications." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/664931.

Full text
Abstract:
The main objective of this thesis is the development of an adaptive mesh refinement (AMR) algorithm for computational fluid dynamics simulations using hexahedral and tetrahedral meshes. This numerical methodology is applied in the context of large-eddy simulations (LES) of turbulent flows and direct numerical simulations (DNS) of interfacial flows, to provide new numerical and physical insight. For the fluid dynamics simulations, the governing equations, the spatial discretization on unstructured grids and the numerical schemes for solving the Navier-Stokes equations are presented. The equations are discretized with a conservative finite-volume scheme on collocated meshes. For the turbulent flow formulation, the spatial discretization preserves the symmetry properties of the continuous differential operators, and the time integration follows a self-adaptive strategy that has been well tested on unstructured grids. Moreover, an LES model consisting of a wall-adapting local eddy-viscosity within a variational multi-scale formulation is used for the applications shown in this thesis. For the two-phase flow formulation, a conservative level-set method is applied for capturing the interface between two fluids and is implemented with a variable-density projection scheme to simulate incompressible two-phase flows on unstructured meshes. The AMR algorithm developed in this thesis is based on a quad/octree data structure and keeps a relation of 1:2 between levels of refinement. In the case of tetrahedral meshes, a geometrical criterion is followed to keep the mesh quality metric at a reasonable level. The parallelization strategy consists mainly of creating mesh elements in each sub-domain and establishing a unique global identification number to avoid duplicate elements. Load balance is ensured at each AMR iteration to maintain the parallel performance of the CFD code. Moreover, a mesh multiplication (MM) algorithm is reported to create large meshes, with different kinds of mesh elements, while preserving the topology of a coarser original mesh. This thesis focuses on the study of turbulent flows and two-phase flows using an AMR framework. The cases studied for LES of turbulent flows are the flow around one and two separated square cylinders, and the flow around a simplified car model. In this context, a physics-based refinement criterion is developed, consisting of the residual velocity calculated from a multi-scale decomposition of the instantaneous velocity. This criterion ensures grid adaptation following the main vortical structures and providing enough mesh resolution in the zones of interest, i.e., flow separation, turbulent wakes, and vortex shedding. The cases studied for the two-phase flows are the DNS of 2D and 3D gravity-driven bubbles, with a particular focus on the wobbling regime. A study of rising bubbles in the wobbling regime and the effect of dimensionless numbers on the dynamic behavior of the bubbles are presented. Moreover, tetrahedral AMR is applied to the numerical simulation of gravity-driven bubbles in complex domains. On this topic, the methodology is validated on bubbles rising in cylindrical channels with different topologies, and the study of these cases contributed new numerical and physical insight into the development of a rising bubble with wall effects.
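As a rough illustration of the cell-flagging and 1:2 level balancing that such an AMR driver performs, here is a minimal Python sketch on a flat quadtree over the unit square. The indicator, the threshold of 0.5 and the number of passes are invented for illustration; the thesis works with parallel hexahedral/tetrahedral meshes and a residual-velocity criterion, none of which is reproduced here.

```python
import numpy as np

# Leaves of a flat quadtree: key = (level, i, j), where (i, j) indexes a cell
# on the 2^level x 2^level grid that covers the unit square at that level.
def cell_center(key):
    level, i, j = key
    h = 1.0 / 2**level
    return (i + 0.5) * h, (j + 0.5) * h

def indicator(key):
    # Hypothetical refinement indicator, peaking near (0.7, 0.3); it stands
    # in for a physics-based criterion such as a residual velocity.
    x, y = cell_center(key)
    return np.exp(-50.0 * ((x - 0.7) ** 2 + (y - 0.3) ** 2))

def refine(leaves, key):
    level, i, j = key
    leaves.discard(key)
    for di in (0, 1):
        for dj in (0, 1):
            leaves.add((level + 1, 2 * i + di, 2 * j + dj))

def neighbours(key):
    level, i, j = key
    n = 2**level
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield (level, i + di, j + dj)

def covering_leaf(leaves, key):
    # Walk up through coarser ancestors until an existing leaf is found.
    level, i, j = key
    while level >= 0:
        if (level, i, j) in leaves:
            return (level, i, j)
        level, i, j = level - 1, i // 2, j // 2
    return None

def balance(leaves):
    # Enforce the 1:2 rule: adjacent leaves may differ by at most one level.
    changed = True
    while changed:
        changed = False
        for key in list(leaves):
            for nb in neighbours(key):
                leaf = covering_leaf(leaves, nb)
                if leaf is not None and key[0] - leaf[0] > 1:
                    refine(leaves, leaf)
                    changed = True

leaves = {(2, i, j) for i in range(4) for j in range(4)}   # coarse 4x4 start
for _ in range(3):                                         # three AMR passes
    for key in list(leaves):
        if indicator(key) > 0.5:
            refine(leaves, key)
    balance(leaves)
print(len(leaves), "leaf cells after refinement")
```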
APA, Harvard, Vancouver, ISO, and other styles
4

Lai, Albert Y. C. "A tool for a formal refinement method." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0019/MQ49738.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Offermans, Nicolas. "Towards adaptive mesh refinement in Nek5000." Licentiate thesis, KTH, Mekanik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217501.

Full text
Abstract:
The development of adaptive mesh refinement capabilities in the field of computational fluid dynamics is an essential tool for enabling the simulation of larger and more complex physical problems. While such techniques have been known for a long time, most simulations do not make use of them because of the lack of a robust implementation. In this work, we present recent progress that has been made to develop adaptive mesh refinement features in Nek5000, a code based on the spectral element method. These developments are driven by the algorithmic challenges posed by future exascale supercomputers. First, we perform a study of the strong scaling of Nek5000 on three petascale machines in order to assess the scalability of the code and identify the current bottlenecks. It is found that the strong scaling limit ranges between 5,000 and 220,000 degrees of freedom per core depending on the machine and the case. The need for synchronized and low-latency communication for efficient computational fluid dynamics simulation is also confirmed. Additionally, we present how Hypre, a library for linear algebra, is used to develop a new and efficient code for performing the setup step required prior to the use of an algebraic multigrid solver for preconditioning the pressure equation in Nek5000. Finally, the main objective of this work is to develop new methods for estimating the error on a numerical solution of the Navier–Stokes equations via the resolution of an adjoint problem. These new estimators are compared to existing ones, which are based on the decay of the spectral coefficients. Then, the estimators are combined with newly implemented capabilities in Nek5000 for automatic grid refinement, and adaptive mesh adaptation is carried out. The applications considered so far are steady and two-dimensional, namely the lid-driven cavity at Re = 7,500 and the flow past a cylinder at Re = 40. The use of adaptive mesh refinement techniques makes mesh generation easier, and it is shown that a similar accuracy as with a static mesh can be reached with a significant reduction in the number of degrees of freedom.
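Since the abstract mentions error estimators based on the decay of spectral coefficients, the following single-element Python sketch shows that idea in its simplest form: expand a function in Legendre polynomials, fit the decay rate of the coefficient magnitudes, and flag the element when the decay is slow or the last coefficient is large. The quadrature order, fitting window and thresholds are arbitrary illustrative choices and are not Nek5000's actual estimator.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_coeffs(f, degree, quad_order=64):
    # Project f onto Legendre polynomials P_0..P_degree on [-1, 1] using
    # Gauss-Legendre quadrature: c_k = (2k+1)/2 * int f(x) P_k(x) dx.
    x, w = legendre.leggauss(quad_order)
    fx = f(x)
    return np.array([(2 * k + 1) / 2.0
                     * np.sum(w * fx * legendre.Legendre.basis(k)(x))
                     for k in range(degree + 1)])

def decay_indicator(coeffs):
    # Fit log|c_k| ~ a - sigma*k over the trailing modes; a small sigma
    # (slow decay) or a large trailing coefficient signals under-resolution.
    k = np.arange(len(coeffs))
    mag = np.maximum(np.abs(coeffs), 1e-16)
    tail = slice(len(coeffs) // 2, None)
    sigma = -np.polyfit(k[tail], np.log(mag[tail]), 1)[0]
    return sigma, mag[-1]

for name, f in [("smooth: exp(x)  ", np.exp),
                ("steep: tanh(20x)", lambda x: np.tanh(20 * x))]:
    sigma, last = decay_indicator(legendre_coeffs(f, degree=11))
    flag = "refine" if (sigma < 1.0 or last > 1e-6) else "keep"
    print(f"{name}  decay rate {sigma:5.2f}  |c_N| {last:.1e}  ->  {flag}")
```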


APA, Harvard, Vancouver, ISO, and other styles
6

Demircioglu, Ersan. "A Novel Refinement Method For Automatic Image Annotation Systems." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613346/index.pdf.

Full text
Abstract:
Image annotation can be defined as the process of assigning a set of content-related words to an image. An automatic image annotation system constructs the relationship between words and low-level visual descriptors, which are extracted from images, and by using these relationships annotates a newly seen image. The high demand for image annotation increases the need for automatic image annotation systems. However, the performance of current annotation methods is far from practical usage. The most common problem of current methods is the gap between semantic words and low-level visual descriptors. Because of the semantic gap, the annotation results of these methods contain irrelevant noisy words. To give more relevant results, refinement methods should be applied to classical image annotation outputs. In this work, we present a novel refinement approach for the image annotation problem. The proposed system attacks the semantic gap problem by using the relationships between the words, which are obtained from the dataset. Establishing these relationships is the most crucial problem of the refinement process. In this study, we suggest a probabilistic and fuzzy approach for modelling the relationship among the words in the vocabulary, which is then employed to generate candidate annotations based on the output of the image annotator. Candidate annotations are represented by a set of relational graphs. Finally, one of the generated candidate annotations is selected as the refined annotation result by using a clique optimization technique applied to the candidate annotation graph.
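To make "refinement through word relationships" concrete, the toy Python fragment below rescores an annotator's candidate words using co-occurrence statistics estimated from a small, made-up training set. The data, scoring rule and weights are purely illustrative; the thesis instead builds a probabilistic/fuzzy word-relationship model and selects the final annotation by clique optimization over candidate graphs.

```python
from collections import Counter
from itertools import combinations

# Hypothetical training annotations (ground-truth word sets per image).
training = [
    {"sky", "clouds", "plane"},
    {"sky", "clouds", "sun"},
    {"sea", "beach", "sky"},
    {"car", "road", "tree"},
    {"road", "tree", "sky"},
]

word_count = Counter(w for ann in training for w in ann)
pair_count = Counter(frozenset(p) for ann in training
                     for p in combinations(sorted(ann), 2))

def cooccur(w1, w2):
    # Conditional probability estimate P(w2 | w1) from the training set.
    if word_count[w1] == 0:
        return 0.0
    return pair_count[frozenset((w1, w2))] / word_count[w1]

def refine(candidates, keep=3):
    # Rescore each candidate word by its annotator confidence plus the
    # average support it receives from the other candidate words.
    scores = {}
    for w, conf in candidates.items():
        support = sum(cooccur(other, w) for other in candidates if other != w)
        scores[w] = conf + support / max(len(candidates) - 1, 1)
    return sorted(scores, key=scores.get, reverse=True)[:keep]

# Raw annotator output: confidences including a noisy, unrelated word ("car").
raw = {"sky": 0.9, "clouds": 0.7, "car": 0.65, "sun": 0.5}
print(refine(raw))   # words supported by related words are promoted
```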
APA, Harvard, Vancouver, ISO, and other styles
7

Lau, Tsan-sun, and 劉燦燊. "Adaptive finite element refinement analysis of shell structures." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31238798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Morgenstern, Philipp [Verfasser]. "Mesh Refinement Strategies for the Adaptive Isogeometric Method / Philipp Morgenstern." Bonn : Universitäts- und Landesbibliothek Bonn, 2017. http://d-nb.info/1140525948/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tahiri, Ahmed. "A compact discretization method for diffusion problems with local refinement." Doctoral thesis, Universite Libre de Bruxelles, 2002. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211417.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Preissig, R. Stephen. "Local p refinement in two dimensional vector finite elements." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/13739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Sinha, Bhaskar. "Surface mesh generation using curvature-based refinement." Master's thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-09252002-141359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

George, David L. "Finite volume methods and adaptive refinement for tsunami propagation and inundation /." Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/6752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Wang, Tongyu 1973. "The fast calculation of magnetic field using the local refinement method /." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80150.

Full text
Abstract:
The speed of the Finite Element Method (FEM) is an obstacle to the fast calculation of magnetic fields. A fast Local Refinement Method (LRM) using first-order FEM is presented for quickly tracking magnetic field changes when small changes are made to the shape of electromagnetic models. This method resolves the potentials in the local mesh, or submesh, extracted from the whole mesh, with a boundary condition calculated from the initial solution on the whole mesh. Instead of being re-meshed in the local area, the extracted submesh is coarsened and reshaped by the LRM to speed up the calculation by sharply decreasing the time used for building the S matrix and solving the matrix equation Ax = b. The new potentials in the submesh are, with an acceptable error, embedded back into the whole problem to update the magnetic fields, which provides designers or users with fast visual feedback on their adjustments.
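A toy one-dimensional analogue of this local re-solve idea is sketched below in Python: solve a screened-Poisson model problem once on the whole grid, change the source locally, and then re-solve only a sub-interval using Dirichlet data taken from the previous global solution, accepting a small error instead of a full re-solve. The reaction term, grid size and source are invented so that perturbations decay away from the change, loosely mimicking a field problem; the thesis itself deals with 2-D/3-D magnetostatic FEM meshes.

```python
import numpy as np

def solve_dirichlet(f, h, u_left, u_right, c=2500.0):
    # -u'' + c*u = f with second-order finite differences on interior points;
    # u_left/u_right are Dirichlet values just outside the first/last point.
    n = len(f)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2 \
        + c * np.eye(n)
    b = f.copy()
    b[0] += u_left / h**2
    b[-1] += u_right / h**2
    return np.linalg.solve(A, b)

N = 200
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
f = np.ones(N)
u_global = solve_dirichlet(f, h, 0.0, 0.0)          # initial whole-mesh solve

# Small local change to the model (here: the source term near x = 0.5).
f_new = f.copy()
f_new[(x > 0.45) & (x < 0.55)] = 5.0

# Local re-solve: extract a sub-interval around the change and use the OLD
# global solution as Dirichlet boundary data instead of re-solving globally.
lo, hi = np.where((x > 0.35) & (x < 0.65))[0][[0, -1]]
u_local = solve_dirichlet(f_new[lo:hi + 1], h,
                          u_global[lo - 1], u_global[hi + 1])

u_updated = u_global.copy()
u_updated[lo:hi + 1] = u_local                      # embed the submesh result

u_full = solve_dirichlet(f_new, h, 0.0, 0.0)        # reference: full re-solve
err = np.max(np.abs(u_updated - u_full))
print(f"local update error {err:.1e} "
      f"(solution magnitude {np.max(np.abs(u_full)):.1e})")
```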
APA, Harvard, Vancouver, ISO, and other styles
14

Vieira, Gabriel da Silva. "Disparity map production: an architectural proposal and a refinement method design." Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/9088.

Full text
Abstract:
Disparity maps are key components of a stereo vision system. Autonomous navigation, 3D reconstruction, and mobility are examples of research areas which use disparity maps as an important element. Although a lot of work has been done in the stereo vision field, it is not easy to build stereo systems with concepts such as reuse and extensible scope. In this study, we explore this gap and present a software architecture that can accommodate different stereo methods through a standard structure. Firstly, we introduce some scenarios that illustrate use cases of disparity maps and present a novel architecture that promotes code reuse. A Disparity Computation Framework (DCF) is presented, and we discuss how its components are structured. Then we developed a prototype which closely follows the proposed architecture, and we prepared some test cases to be performed. Furthermore, we implemented disparity methods for validation purposes and to evaluate our disparity refinement method. This refinement method, named the Segmented Consistency Check (SCC), was designed to increase the robustness of stereo matching algorithms. It consists of a segmentation process, statistical analysis of grouped areas, and a weighted support function to find and fill in unknown disparities. The experimental results show that the DCF can satisfy different scenarios on demand. Besides, they show that the SCC method is an efficient approach that can bring enhancements to disparity maps, such as reducing the disparity error measure.
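For readers unfamiliar with consistency-based refinement of disparity maps, the Python fragment below shows a generic left-right consistency check followed by a crude scan-line fill of the invalidated pixels. The arrays and threshold are toy values, and the fill is only a stand-in: the authors' Segmented Consistency Check additionally uses a segmentation step, statistics over segments and a weighted support function.

```python
import numpy as np

def left_right_check(disp_left, disp_right, max_diff=1):
    # Mark a left-image disparity as invalid when the disparity of its
    # matching pixel in the right image disagrees by more than max_diff.
    h, w = disp_left.shape
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = x - d
            if 0 <= xr < w and abs(disp_right[y, xr] - d) <= max_diff:
                valid[y, x] = True
    return valid

def fill_invalid(disp, valid):
    # Replace invalid disparities with the nearest valid value on the same
    # scan line (a crude stand-in for a segment-aware fill).
    out = disp.astype(float)
    for y in range(disp.shape[0]):
        good = np.where(valid[y])[0]
        if good.size == 0:
            continue
        for x in np.where(~valid[y])[0]:
            out[y, x] = disp[y, good[np.argmin(np.abs(good - x))]]
    return out

# Toy one-row example: the value 9 is an inconsistent (outlier) match.
dl = np.array([[2, 2, 9, 2, 2, 2]])
dr = np.array([[2, 2, 2, 2, 2, 2]])
valid = left_right_check(dl, dr)
print(valid)
print(fill_invalid(dl, valid))
```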
APA, Harvard, Vancouver, ISO, and other styles
15

Müller, Alexandra [Verfasser]. "Dynamic Refinement and Coarsening for the Smoothed Particle Hydrodynamics Method / Alexandra Müller." Aachen : Shaker, 2017. http://d-nb.info/1124366555/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Chavannes, Nicolas Pierre. "Local mesh refinement algorithms for enhanced modeling capabilities in the FDTD method /." Konstanz : Hartung-Gorre, 2002. http://www.loc.gov/catdir/toc/fy0801/2006483066.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Mohammed, Najla Abdullah. "Grid refinement and verification estimates for the RBF construction method of Lyapunov functions." Thesis, University of Sussex, 2016. http://sro.sussex.ac.uk/id/eprint/65711/.

Full text
Abstract:
Lyapunov functions are functions with negative orbital derivative, whose existence guarantees the stability of an equilibrium point of an ODE. Moreover, sub-level sets of a Lyapunov function are subsets of the domain of attraction of the equilibrium. In this thesis, we improve an established numerical method to construct Lyapunov functions using the radial basis function (RBF) collocation method. The RBF collocation method approximates the solution of linear PDEs using scattered collocation points, and one of its applications is the construction of Lyapunov functions. More precisely, we approximate Lyapunov functions, which satisfy equations for their orbital derivative, using the RBF collocation method. Then, it turns out that the RBF approximant itself is a Lyapunov function. Our main contributions to improve this method are, firstly, to combine this construction method with a new grid refinement algorithm based on Voronoi diagrams. Starting with a coarse grid and applying the refinement algorithm, we thus manage to reduce the number of collocation points needed to construct Lyapunov functions. Moreover, we design two modified refinement algorithms to deal with the issue of the early termination of the original refinement algorithm without constructing a Lyapunov function. These algorithms use cluster centres to place points where the Voronoi vertices failed to do so. Secondly, we derive two verification estimates, in terms of the first and second derivatives of the orbital derivative, to verify whether the constructed function, obtained with either a regular grid of collocation points or with one of the refinement algorithms, is a Lyapunov function, i.e., has negative orbital derivative over a given compact set. Finally, the methods are applied to several numerical examples in up to three dimensions.
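To give a feel for "construct a Lyapunov function by collocation on the orbital-derivative equation", here is a deliberately simplified Python sketch: Gaussian RBFs, plain unsymmetric collocation and a toy linear system, with negativity of the orbital derivative checked afterwards on a test grid. The thesis uses Wendland functions with a symmetric generalized-interpolation formulation and Voronoi-based refinement, none of which appears here; the shape parameter, grids and dynamics are invented.

```python
import numpy as np

# Toy dynamics x' = A x with an asymptotically stable origin.
A = np.array([[-1.0, 2.0], [-2.0, -1.0]])
f = lambda x: A @ x

eps = 1.5                                       # Gaussian shape parameter
def grad_phi(x, c):
    # Gradient w.r.t. x of phi(x) = exp(-eps^2 * ||x - c||^2).
    d = x - c
    return -2.0 * eps**2 * d * np.exp(-eps**2 * (d @ d))

# Collocation points on a small grid around the origin (the origin excluded).
pts = np.array([[x1, x2] for x1 in np.linspace(-1, 1, 5)
                         for x2 in np.linspace(-1, 1, 5)
                         if (x1, x2) != (0.0, 0.0)])

# Ansatz V(x) = sum_k c_k phi_k(x); impose the orbital-derivative equation
# <grad V(x_j), f(x_j)> = -||x_j||^2 at every collocation point x_j.
n = len(pts)
M = np.zeros((n, n))
rhs = np.array([-(xj @ xj) for xj in pts])
for j, xj in enumerate(pts):
    for k, xk in enumerate(pts):
        M[j, k] = grad_phi(xj, xk) @ f(xj)
coef = np.linalg.lstsq(M, rhs, rcond=None)[0]

def orbital_derivative(x):
    return sum(c * (grad_phi(x, xk) @ f(x)) for c, xk in zip(coef, pts))

# Verify negativity on a finer test grid away from the origin.
test = [np.array([x1, x2]) for x1 in np.linspace(-0.9, 0.9, 13)
                           for x2 in np.linspace(-0.9, 0.9, 13)
                           if x1**2 + x2**2 > 0.05]
vals = np.array([orbital_derivative(x) for x in test])
print(f"orbital derivative negative at {100 * np.mean(vals < 0):.0f}% "
      f"of test points (worst value {vals.max():.3f})")
# Test points where negativity fails are exactly where a grid-refinement
# algorithm like the one studied in the thesis would add collocation points.
```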
APA, Harvard, Vancouver, ISO, and other styles
18

Clack, Jhules. "Theoretical Analysis for Moving Least Square Method with Second Order Pseudo-Derivatives and Stabilization." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1418910272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wittebol, Laura A. 1973. "Refinement of the nocturnal boundary layer budget method for quantifying agricultural greenhouse gas emissions." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=115843.

Full text
Abstract:
Accompanying materials housed with archival copy.
Measuring greenhouse gas (GHG) emissions directly at the farm scale is most relevant to the agricultural sector and has the potential to eliminate some of the uncertainty arising from scaling up from plot or field studies or down from regional or national levels. The stable nighttime atmosphere acts as a chamber within which sequentially-measured GHG concentration profiles determine the flux of GHGs. With the overall goal of refining the nocturnal boundary layer (NBL) budget method to obtain reliable flux estimates at a scale representative of the typical eastern Canadian farm (approximately 1 km2), fluxes of CO2, N2O, and CH4 were measured at two agricultural farms in Eastern Canada. Field sites in 1998 and 2002 were located on an experimental farm adjacent to a suburb southwest of the city of Ottawa, ON, a relatively flat area with corn, hay, and soy as the dominant crops. The field site in 2003 was located in the rural community of Coteau-du-Lac, QC, about 20 km southwest of the island of Montreal, a fairly flat area bordered by the St. Lawrence River to the south, consisting mainly of corn and hay with a mixture of soy and vegetable crops. A good agreement was obtained between the overall mean NBL budget-measured CO2 flux at both sites, near-in-time windy night eddy covariance data and previously published results. The mean NBL-measured N2O flux from all wind directions and farming management was of the same order of magnitude as, but slightly higher than, previously published baseline N2O emissions from agroecosystems. Methane flux results were judged to be invalid as they were extremely sensitive to wind direction change. Spatial sampling of CO2, N2O, and CH4 around the two sites confirmed that [CH4] distribution was particularly sensitive to the nature of the emission source, field conditions, and wind direction. Optimal NBL conditions for measuring GHG fluxes, present approximately 60% of the time in this study, consisted of a very stable boundary layer in which GHG profiles converged at the top of the layer, allowing a quick determination of the NBL flux integration height. For suboptimal NBL conditions consisting of intermittent turbulence where GHG profiles did not converge, a flux integration method was developed which yielded estimates similar to those obtained during optimal conditions. Eighty percent of the GHG flux in optimal NBL conditions corresponded to a footprint-modelled source area of approximately 2 km upwind, slightly beyond the typical length of a farm in Coteau-du-Lac. A large portion (50%) of the flux came from within 1 km upwind of the measurement site, showing the influence of local sources. 'Top-down' NBL-measured flux values were compared with aggregated field, literature and IPCC flux values for four footprint model-defined areas across both sites, with results indicating that in baseline climatic and farm management conditions, with no apparent intermittent NBL phenomena, the aggregated flux was a good approximation of the NBL-measured flux.
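The core arithmetic of the NBL budget technique is the rate of change of the height-integrated concentration between two profile measurements. The short Python example below works through that calculation with made-up profile numbers; the heights, concentrations and air-density constant are purely illustrative and are not data from the thesis.

```python
import numpy as np

# Two CO2 profiles measured one hour apart in a stable nocturnal boundary layer.
z    = np.array([2.0, 10.0, 25.0, 50.0, 100.0, 150.0])         # heights (m)
c_t1 = np.array([420.0, 415.0, 410.0, 405.0, 400.0, 398.0])    # ppm at 22:00
c_t2 = np.array([430.0, 424.0, 417.0, 409.0, 402.0, 398.0])    # ppm at 23:00
dt   = 3600.0                                                   # s

# Convert ppm (umol CO2 per mol air) to umol m^-3 with the molar density of
# air at roughly 10 degC and 100 kPa (illustrative constant value).
mol_air_per_m3 = 100e3 / (8.314 * 283.15)
dc = (c_t2 - c_t1) * mol_air_per_m3          # umol m^-3 gained at each level

# NBL budget: the surface flux is approximated by the change in column
# storage up to the height where the profiles converge (the top level here),
# divided by the elapsed time. Trapezoidal rule over the profile heights:
storage_change = np.sum(0.5 * (dc[1:] + dc[:-1]) * np.diff(z))  # umol m^-2
flux = storage_change / dt                                      # umol m^-2 s^-1
print(f"NBL budget CO2 flux: {flux:.1f} umol m^-2 s^-1")
```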
APA, Harvard, Vancouver, ISO, and other styles
20

Wan, Ka-ho, and 溫家豪. "Transition finite elements for mesh refinement in plane and plate bending analyses." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29478546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Bouffard, Laura Annie. "Maturing metalinguistically : negotiation of form and the refinement of repair." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82686.

Full text
Abstract:
Research has shown that children attending immersion programs reach a native-like level in comprehension and in reading by the end of the elementary level. However, in writing and speaking, they rarely achieve target-like proficiency. Some conditions seem to favor the production of output. This study presents an investigation of children's ability to notice errors in their second language, French, in an immersion program in Montreal. The study was conducted with forty-three (43) children aged 8-9, and aimed to gather information related to the following research questions:
Can we train 8-year-old second language learners to: (a) notice their errors; (b) self-correct (given certain prompts); (c) use metalinguistic terminology to identify forms; and (d) negotiate form using language as a conscious tool to improve their L2 oral production?
Children were required to participate in two (2) stages: first, video recordings of communicative activities with ungrammatical episodes and provision of corrective feedback were selected; and second, audio recordings of children's attempts to negotiate form were made. The database was collected from these stimulated recall sessions of collaborative discussion. Results show how young learners may benefit from the provision of metalinguistic information, thus facilitating their second language learning development.
APA, Harvard, Vancouver, ISO, and other styles
22

Park, Gi-Ho. "p-Refinement Techniques for Vector Finite Elements in Electromagnetics." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/10602.

Full text
Abstract:
The vector finite element method has gained great attention since it overcomes the deficiencies of scalar basis functions for the vector Helmholtz equation. Most implementations of vector FEM have been non-adaptive, where a mesh of the domain is generated entirely in advance and used with a constant-degree polynomial basis to assign the degrees of freedom. To reduce the dependency on the user's expertise in analyzing problems with complicated boundary structures and material characteristics, and to speed up the FEM tool, the demand for adaptive FEM is high. For efficient adaptive FEM, error estimators play an important role in assigning additional degrees of freedom. In this study, hierarchical vector basis functions and four error estimators for p-refinement are investigated for electromagnetic applications.
APA, Harvard, Vancouver, ISO, and other styles
23

Alizada, Alaskar [Verfasser]. "The eXtended Finite Element Method (XFEM) with Adaptive Mesh Refinement for Fracture Mechanics / Alaskar Alizada." Aachen : Shaker, 2012. http://d-nb.info/1052408818/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Wallace, Deanne M. "Evaluation and refinement of a micrometeorological method for the measurement of mercury fluxes in natural settings." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58386.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Chilton, Ryan Austin. "H-, P- and T-Refinement Strategies for the Finite-Difference-Time-Domain (FDTD) Method Developed via Finite-Element (FE) Principles." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1219064270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Gagnon, Michael Anthony. "An adaptive mixed finite element method using the Lagrange multiplier technique." Worcester, Mass. : Worcester Polytechnic Institute, 2009. http://www.wpi.edu/Pubs/ETD/Available/etd-050409-115850/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: a posteriori error estimate; adaptive; mesh refinement; lagrange multiplier; finite element method. Includes bibliographical references (leaf 26).
APA, Harvard, Vancouver, ISO, and other styles
27

Barnes, Caleb J. "An Implicit High-Order Spectral Difference Method for the Compressible Navier-Stokes Equations Using Adaptive Polynomial Refinement." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1315591802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Cavin, Pauline. "Méthode éléments finis avec raffinement spatial et temporel adaptatif et automatique : "STAR-method" (Space Time Automatic Refinement)." Lyon, INSA, 2006. http://theses.insa-lyon.fr/publication/2006ISAL0034/these.pdf.

Full text
Abstract:
Complex numerical simulations of non-linear dynamic systems require large computational efforts. The developed method, based on finite element techniques, aims to reduce the computing time. The idea is to optimize the spatial and temporal mesh while controlling the solution quality. The proposed method therefore solves the problem on different spatial and temporal grids. The method is named "STAR-method", for Space Time Automatic Refinement. With the STAR-method, an error indicator detects the areas where the spatial and temporal discretisations are insufficient to obtain the required precision. The STAR-method then automatically refines the meshes in these domains. Results show several advantages of the STAR-method. The final spatial and temporal meshes become user independent. The local space-time mesh refinement focuses the computational effort only where it is necessary. With the STAR-method, the number of degrees of freedom and the number of time steps are reduced compared to classical FEM. Finally, the solution precision is controlled during the calculation. At the end of the calculation, the user obtains a solution with constant precision over the entire computational domain, together with the associated spatial and temporal meshes.
APA, Harvard, Vancouver, ISO, and other styles
29

Alexe, Mihai. "Adjoint-based space-time adaptive solution algorithms for sensitivity analysis and inverse problems." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/37515.

Full text
Abstract:
Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations. The size of the discretized problem makes uniformly refined grids computationally prohibitive. Adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems, where parameters for a given model are estimated based on available measurement information. In contrast to forward (regular) simulations, inverse problems have not extensively benefited from adaptive solver technology. Previous research in inverse problems has focused mainly on the continuous approach to calculate sensitivities, and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that make exclusive use of uniform or static meshes avoid complications such as the differentiation of mesh motion equations, or inconsistencies in the sensitivity equations between subdomains with different refinement levels. However, this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method. This dissertation develops a complete framework for fully discrete adjoint sensitivity analysis and inverse problem solutions, in the context of time-dependent, adaptive-mesh, and adaptive-step models. The discrete framework addresses all the necessary ingredients of a state-of-the-art adaptive inverse solution algorithm: adaptive mesh and time step refinement, solution grid transfer operators, a priori and a posteriori error analysis and estimation, and discrete adjoints for sensitivity analysis of flux-limited numerical algorithms.
Ph. D.
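To illustrate what a fully discrete adjoint looks like in the simplest possible setting, the sketch below differentiates a forward-Euler discretization of a scalar ODE with respect to one parameter and checks the result against a finite difference. Nothing here is taken from the dissertation's solvers; the ODE, cost functional and step sizes are illustrative only.

```python
import numpy as np

# Forward model: u' = -p*u discretized with forward Euler; cost J = 0.5*u_N^2.
def forward(p, u0=1.0, dt=0.01, nsteps=200):
    u = np.empty(nsteps + 1)
    u[0] = u0
    for n in range(nsteps):
        u[n + 1] = (1.0 - dt * p) * u[n]
    return u

def gradient_adjoint(p, u, dt=0.01):
    # Discrete adjoint of the recursion u_{n+1} = (1 - dt*p) u_n:
    #   lambda_N = dJ/du_N = u_N,   lambda_n = (1 - dt*p) * lambda_{n+1}
    #   dJ/dp = sum_n (du_{n+1}/dp) * lambda_{n+1} = sum_n (-dt*u_n)*lambda_{n+1}
    lam, dJdp = u[-1], 0.0
    for n in reversed(range(len(u) - 1)):
        dJdp += (-dt * u[n]) * lam
        lam = (1.0 - dt * p) * lam
    return dJdp

p = 0.7
u = forward(p)
g_adj = gradient_adjoint(p, u)

# Verification against a central finite difference.
J = lambda q: 0.5 * forward(q)[-1] ** 2
eps = 1e-6
g_fd = (J(p + eps) - J(p - eps)) / (2 * eps)
print(f"adjoint gradient {g_adj:.10f}   finite difference {g_fd:.10f}")
```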
APA, Harvard, Vancouver, ISO, and other styles
30

Sert, Cuneyt. "Nonconforming formulations with spectral element methods." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969.1/1268.

Full text
Abstract:
A spectral element algorithm for the solution of the incompressible Navier-Stokes and heat transfer equations is developed, with an emphasis on extending the classical conforming Galerkin formulations to nonconforming spectral elements. The new algorithm employs both the Constrained Approximation Method (CAM) and the Mortar Element Method (MEM) for p- and h-type nonconforming elements. Detailed descriptions and formulation steps for both methods, as well as performance comparisons between CAM and MEM, are presented. This study fills an important gap in the literature by providing a detailed explanation of the treatment of p- and h-type nonconforming interfaces. A comparative eigenvalue spectrum analysis of diffusion and convection operators is provided for CAM and MEM. The effects of consistency errors due to the nonconforming formulations on the convergence of steady and time-dependent problems are studied in detail. Incompressible flow solvers that can utilize these nonconforming formulations on both p- and h-type nonconforming grids are developed and validated. Engineering use of the developed solvers is demonstrated by detailed parametric analyses of oscillatory-flow forced convection heat transfer in two-dimensional channels.
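The bookkeeping behind a constrained-approximation treatment of a nonconforming interface is that hanging degrees of freedom are expressed as interpolations of independent ones and eliminated from the assembled system. The small Python example below does this for a single hanging node with linear interpolation and a random symmetric positive-definite matrix; the actual CAM and MEM formulations use high-order spectral element interpolation and mortar projections, which this sketch does not attempt to reproduce.

```python
import numpy as np

# Five global DOFs; DOF 4 is a hanging node at the midpoint of the edge
# between DOFs 1 and 2, so it is constrained to u4 = 0.5*u1 + 0.5*u2.
n_all, independent = 5, [0, 1, 2, 3]
C = np.zeros((n_all, len(independent)))
for col, dof in enumerate(independent):
    C[dof, col] = 1.0                       # independent DOFs map to themselves
C[4, independent.index(1)] = 0.5            # hanging DOF is interpolated
C[4, independent.index(2)] = 0.5            # from its two parent edge DOFs

# A stand-in symmetric positive-definite "stiffness" matrix and load vector,
# assembled as if all five DOFs were unconstrained.
rng = np.random.default_rng(0)
B = rng.standard_normal((n_all, n_all))
K = B @ B.T + n_all * np.eye(n_all)
f = rng.standard_normal(n_all)

# Condense the constraint: with u = C u_ind, solve (C^T K C) u_ind = C^T f.
u_ind = np.linalg.solve(C.T @ K @ C, C.T @ f)
u = C @ u_ind                               # recover all DOFs, incl. hanging one

print("hanging-node value:", u[4], " average of parents:", 0.5 * (u[1] + u[2]))
```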
APA, Harvard, Vancouver, ISO, and other styles
31

Akargun, Yigit Hayri. "Least-squares Finite Element Solution Of Euler Equations With Adaptive Mesh Refinement." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614138/index.pdf.

Full text
Abstract:
The least-squares finite element method (LSFEM) is employed to simulate 2-D and axisymmetric flows governed by the compressible Euler equations. The least-squares formulation brings many advantages over classical Galerkin finite element methods. For non-self-adjoint systems, LSFEM results in symmetric positive-definite matrices which can be solved efficiently by iterative methods. Additionally, with a unified formulation it can work in all flight regimes from subsonic to supersonic. Another advantage is that the method does not require artificial viscosity since it is naturally diffusive, which, however, also makes it difficult to sharply resolve high gradients in the flow field such as shock waves. This problem is dealt with by employing adaptive mesh refinement (AMR) on triangular meshes. LSFEM with the AMR technique is numerically tested on various flow problems, and good agreement with the available data in the literature is observed.
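The central algebraic point of the abstract, that a least-squares formulation turns a non-self-adjoint problem into a symmetric positive-definite one, can already be seen at the discrete level. The sketch below uses a 1-D advection-diffusion operator and plain discrete least squares (normal equations), which is only an analogy for the continuous LSFEM formulation of the thesis; the grid, coefficients and right-hand side are arbitrary.

```python
import numpy as np

# Non-symmetric discrete operator: 1-D advection-diffusion with central
# differences on a uniform grid and homogeneous Dirichlet boundaries.
n, a, nu = 50, 1.0, 0.01
h = 1.0 / (n + 1)
A = (nu / h**2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) \
    + (a / (2 * h)) * (np.eye(n, k=1) - np.eye(n, k=-1))
b = np.ones(n)

print("A symmetric?", np.allclose(A, A.T))           # False: non-self-adjoint

# Least-squares formulation: minimize ||A u - b||^2, i.e. solve the normal
# equations (A^T A) u = A^T b, whose matrix is symmetric positive definite
# and therefore amenable to Cholesky or conjugate-gradient solvers.
N = A.T @ A
print("N symmetric?", np.allclose(N, N.T),
      " smallest eigenvalue positive?", np.linalg.eigvalsh(N).min() > 0)

L = np.linalg.cholesky(N)                             # succeeds only if SPD
u = np.linalg.solve(L.T, np.linalg.solve(L, A.T @ b))
print("residual ||A u - b|| =", np.linalg.norm(A @ u - b))
```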
APA, Harvard, Vancouver, ISO, and other styles
32

Srisukh, Yudhapoom. "Development of hybrid explicit/implicit and adaptive h and p refinement for the finite element time domain method." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1135879014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Alencar, Thiago Leite de. "Physical changes in a Cambisol treated with biofertilizer: quality indicators and refinement of the method of evaluation by Srelative." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=13166.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The knowledge about soil physical changes and soil quality is important for the adequate targeting of management strategies to be adopted when soil is used for cropping. Considering the hypotheses that a) cultivation worsens soil quality, compared to soil under natural vegetation, because it degrades its properties related to the porous geometry; b) biofertilizer application (organic matter) in soil under cultivation promotes an improvement in its physical attributes, compared to the soil under natural vegetation, because it acts as a cementing agent between particles; c) soil physical changes can be assessed through indices and interpreted under the qualitative aspect; and d) the Srelative index obtained using the soil-water retention curve determined as close as possible to the textural porosity is more sensitive to physical changes than the Srelative obtained using a soil-water retention curve determined from air-dried soil, the objectives of this study were: 1) to evaluate the effects of cultivation and biofertilizer application on the physical quality of a Cambisol cultivated with Ficus carica L., irrigated by a drip system; 2) to verify the efficiency of indicators at assessing changes in physical attributes; and 3) to refine the method of obtaining the Srelative index, aiming to increase its sensitivity to soil physical changes. In order to evaluate physical quality, five soil scenarios were analyzed: under fig cultivation without biofertilizer application (control), with application of 20%, 40% and 60% of the biofertilizer through irrigation, and secondary native forest (additional control), down to a depth of 0.3 m, in the layers of 0.0-0.1 m, 0.1-0.2 m and 0.2-0.3 m, with four replicates. In these layers, disturbed and undisturbed soil samples were collected in order to perform physical analyses. A completely randomized design was adopted. For the refinement of Srelative, with the soil-water retention curve containing only textural porosity (reference curve), soil dispersion was performed in water and with the addition of 1 N sodium hydroxide (with and without removing sodium through washing). F tests were applied for the variance analysis and the Dunnett test for mean comparison. Line parallelism and intercept tests were performed for the regressions between soil physical variables and Srelative obtained using air-dried soil, with dispersion in water and addition of 1 N sodium hydroxide (with and without washing). A multivariate analysis was also performed on the dataset.
It was concluded that: 1) the porous network quality is improved, or maintained, when soil is cultivated under the conditions described in this experiment; 2) when cultivated, biofertilizer application improves or, at least, maintains the quality of soil physical attributes in all considered layers, except for the intrinsic air permeability of the soil in the layer of 0.0-0.1 m; 3) regarding the soil under native forest, biofertilizer application improves or, at least, maintains the quality of soil physical attributes in all considered layers, except for the clay flocculation degree in the layer of 0.0-0.1 m; 4) cases where the quality of soil physical attributes was worsened as a result of the applied treatments, although they were not considered critical for plant development, are an indication that the adoption of specific management techniques is needed to avoid soil degradation; 5) most of the selected soil physical quality indicators are efficient at quantifying the changes imposed on the soil structure; and 6) the Srelative index obtained from the method of soil dispersion in water is more sensitive to soil physical changes than the Srelative obtained using air-dried soil.
APA, Harvard, Vancouver, ISO, and other styles
34

Ren, Da Qi. "Analysis and design development of parallel 3-D mesh refinement algorithms for finite element electromagnetics with tetrahedra." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103003.

Full text
Abstract:
Optimal partitioning of three-dimensional (3-D) mesh applications necessitates dynamically determining and optimizing for the most time-inhibiting factors, such as load imbalance and communication volume. One challenge is to create an analytical model where the programmer can focus on optimizing load imbalance or communication volume to reduce execution time. Another challenge is that the best individual performance of a specific mesh refinement demands precise study and the selection of a suitable computation strategy. Very-large-scale finite element method (FEM) applications require sophisticated capabilities for using the underlying parallel computer's resources in the most efficient way. Thus, classifying these requirements in a manner that is convenient for the programmer is crucial.
This thesis contributes a simulation-based approach to the analysis and design of parallel 3-D FEM mesh refinement algorithms that utilizes Petri Nets (PN) as the modeling and simulation tool. PN models are implemented based on detailed software prototypes and system architectures, which imitate the behaviour of the parallel meshing process. Subsequently, estimates for performance measures are derived from discrete event simulations. New communication strategies are contributed in the thesis for parallel mesh refinement that pipeline the computation and communication time by means of a workload prediction approach and a task breaking point approach. To examine the performance of these new designs, PN models are created for modeling and simulating each of them, and their efficiency is justified by the simulation results. Also based on the PN modeling approach, the performance of a Random Polling Dynamic Load Balancing protocol has been examined. Finally, the PN models are validated by an MPI benchmarking program running on a real multiprocessor system. The advantages of the new pipelined communication designs, as well as the benefits of the PN approach for evaluating and developing high-performance parallel mesh refinement algorithms, are demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
35

Krishnan, Sreedevi. "An Adaptively refined Cartesian grid method for moving boundary problems applied to biomedical systems." Diss., University of Iowa, 2006. https://ir.uiowa.edu/etd/87.

Full text
Abstract:
A major drawback in the operation of mechanical heart valve prostheses is thrombus formation in the near-valve region, potentially due to the high shear stresses present in the leakage jet flows through small gaps between leaflets and the valve housing. Detailed flow analysis in this region during the valve closure phase is of interest in understanding the relationship between shear stress and platelet activation. An efficient Cartesian grid method is developed for the simulation of incompressible flows around stationary and moving three-dimensional immersed solid bodies as well as fluid-fluid interfaces. The embedded boundaries are represented using level sets and treated in a sharp manner without the use of source terms to represent boundary effects. The resulting algorithm is implemented in a straightforward manner in three dimensions and retains global second-order accuracy. When dealing with problems of disparate length scales encountered in many applications, it is necessary to resolve the physically important length scales adequately to ensure accuracy of the solution. Fixed grid methods often have the disadvantage of heavy mesh requirements for well-resolved calculations. A quadtree-based adaptive local mesh refinement scheme is developed to complement the sharp-interface Cartesian grid scheme for efficient and optimized calculations. Detailed timing and accuracy data are presented for a variety of benchmark problems involving moving boundaries. The above method is then applied to modeling heart valve closure and predicting thrombus formation. Leaflet motion is calculated dynamically based on the fluid forces acting on it, employing a fluid-structure interaction algorithm. Platelets are modeled and tracked as point particles by a Lagrangian particle tracking method which incorporates the hemodynamic forces on the particles. Leaflet closure dynamics, including rebound, are analyzed and validated against previous studies. Vortex shedding and formation of recirculation regions are observed downstream of the valve, particularly in the gap between the valve and the housing. Particle exposure to high shear and entrapment in recirculation regions with high residence time in the vicinity of the valve are observed, corresponding to regions prone to thrombus formation.
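One ingredient mentioned in the abstract, Lagrangian point-particle tracking with accumulated shear exposure, can be sketched in a few lines for a prescribed analytic velocity field. In the actual work the particles are coupled to the adaptive cut-cell flow solver; the velocity profile, time step and "activation" threshold below are invented for illustration.

```python
import numpy as np

U_MAX, H = 1.0, 1.0      # channel velocity scale (m/s) and half-height (m)

def velocity(p):
    # Prescribed Poiseuille-like profile u = (U_MAX * (1 - (y/H)^2), 0).
    return np.array([U_MAX * (1.0 - (p[1] / H) ** 2), 0.0])

def shear_rate(p):
    # |du/dy| of the profile above; a flow solver would instead interpolate
    # velocity gradients from its (adaptively refined) grid to the particle.
    return 2.0 * U_MAX * abs(p[1]) / H**2

def track(p0, dt=1e-3, nsteps=500):
    # Midpoint (RK2) advection of one point particle, accumulating a simple
    # shear-exposure integral  int |shear| dt  along its path.
    p, exposure = np.array(p0, dtype=float), 0.0
    for _ in range(nsteps):
        pmid = p + 0.5 * dt * velocity(p)
        p = p + dt * velocity(pmid)
        exposure += shear_rate(p) * dt
    return p, exposure

for y0 in (0.1, 0.5, 0.9):
    p_end, exposure = track((0.0, y0))
    print(f"y0={y0:3.1f}: x travelled={p_end[0]:.3f} m, "
          f"shear exposure={exposure:.2f}, "
          f"flagged={exposure > 0.6}")   # hypothetical activation threshold
```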
APA, Harvard, Vancouver, ISO, and other styles
36

Babazadeh, Khameneh Keyvan. "An investigation of aluminum grain refinement process and study of the abilities of ultrasonic detection method in this process." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0006/MQ45599.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Gokhale, Nandan Bhushan. "A dimensionally split Cartesian cut cell method for Computational Fluid Dynamics." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289732.

Full text
Abstract:
We present a novel dimensionally split Cartesian cut cell method to compute inviscid, viscous and turbulent flows around rigid geometries. On a cut cell mesh, the existence of arbitrarily small boundary cells severely restricts the stable time step for an explicit numerical scheme. We solve this `small cell problem' when computing solutions for hyperbolic conservation laws by combining wave speed and geometric information to develop a novel stabilised cut cell flux. The convergence and stability of the developed technique are proved for the one-dimensional linear advection equation, while its multi-dimensional numerical performance is investigated through the computation of solutions to a number of test problems for the linear advection and Euler equations. This work was recently published in the Journal of Computational Physics (Gokhale et al., 2018). Subsequently, we develop the method further to be able to compute solutions for the compressible Navier-Stokes equations. The method is globally second order accurate in the L1 norm, fully conservative, and allows the use of time steps determined by the regular grid spacing. We provide a full description of the three-dimensional implementation of the method and evaluate its numerical performance by computing solutions to a wide range of test problems ranging from the nearly incompressible to the highly compressible flow regimes. This work was recently published in the Journal of Computational Physics (Gokhale et al., 2018). It is the first presentation of a dimensionally split cut cell method for the compressible Navier-Stokes equations in the literature. Finally, we also present an extension of the cut cell method to solve high Reynolds number turbulent automotive flows using a wall-modelled Large Eddy Simulation (WMLES) approach. A full description is provided of the coupling between the (implicit) LES solution and an equilibrium wall function on the cut cell mesh. The combined methodology is used to compute results for the turbulent flow over a square cylinder, and for flow over the SAE Notchback and DrivAer reference automotive geometries. We intend to publish the promising results as part of a future publication, which would be the first assessment of a WMLES Cartesian cut cell approach for computing automotive flows to be presented in the literature.
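A basic geometric ingredient of any Cartesian cut cell method is knowing how much of each Cartesian cell lies in the fluid. The snippet below estimates those volume (area) fractions for a circular solid body by simple sub-sampling; it is only a geometric illustration, and in particular it says nothing about the dimensionally split stabilised flux that is the actual contribution of the thesis.

```python
import numpy as np

def inside_fluid(x, y, cx=0.5, cy=0.5, r=0.3):
    # Fluid = everything outside a circular solid body of radius r.
    return (x - cx) ** 2 + (y - cy) ** 2 >= r ** 2

def volume_fractions(n=16, samples=20):
    # Estimate the fluid area fraction of every cell of an n x n Cartesian
    # grid on the unit square by testing a sub-grid of sample points.
    h = 1.0 / n
    frac = np.zeros((n, n))
    s = (np.arange(samples) + 0.5) / samples        # sample offsets in (0, 1)
    for i in range(n):
        for j in range(n):
            xs = (i + s)[:, None] * h               # sample x-coordinates
            ys = (j + s)[None, :] * h               # sample y-coordinates
            frac[i, j] = np.mean(inside_fluid(xs, ys))
    return frac

frac = volume_fractions()
cut = (frac > 0.0) & (frac < 1.0)
print(f"{cut.sum()} cut cells; smallest fluid fraction = {frac[cut].min():.3f}")
# Arbitrarily small fractions like this are what make explicit updates on cut
# cells unstable at the regular-cell time step -- the 'small cell problem'
# that a stabilised cut cell flux is designed to remove.
```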
APA, Harvard, Vancouver, ISO, and other styles
38

Arjunon, Sivakkumar. "P-version refinement studies in the boundary element method a dissertation presented to the faculty of the Graduate School, Tennessee Technological University /." Click to access online, 2009. http://proquest.umi.com/pqdweb?index=19&sid=2&srchmode=1&vinst=PROD&fmt=6&startpage=-1&clientid=28564&vname=PQD&RQT=309&did=1786737301&scaling=FULL&ts=1250860988&vtype=PQD&rqt=309&TS=1250861000&clientId=28564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Turner, Kate. "Assessing wild plant vulnerability to over-harvesting: refinement of the "rapid vulnerability assessment" method and its application in Huitzilac, Mexico." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18450.

Full text
Abstract:
Many concerns have been voiced about the impacts of non-timber forest product harvesting on forest ecosystems, prompting the development of tools for assessing harvest sustainability. The "rapid vulnerability assessment" (RVA) is one method that predicts vulnerability of plants to over-harvesting. However, there is little consensus on how to perform an RVA or interpret the results. The objective of my research is to analyze and refine the RVA method to enhance its utility. I examine factors affecting plant vulnerability used in various versions of the RVA to create a "short list" of important factors. I then use this short list of factors to perform an RVA for selected wild plant species in Huitzilac, Mexico, to further refine the RVA method. Information for this assessment comes from literature, interviews, observation, plant sampling and a town survey. Based upon these I re-conceptualize the RVA method, placing increased value on several critical factors which directly affect the availability and the rate of loss of the target species. I also suggest a method to aid in interpreting the results. Results from the case study, presented in this re-conceptualized format, indicate that none of the selected species is highly vulnerable to over-harvesting.
APA, Harvard, Vancouver, ISO, and other styles
40

Wittebol, Laura. "Refinement and verification of the nocturnal boundary layer budget method for estimating greenhouse gas emissions from Eastern Canadian agricultural farms." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66706.

Full text
Abstract:
Measuring greenhouse gas (GHG) emissions directly at the farm scale is most relevant to the agricultural sector and has the potential to eliminate some of the uncertainty arising from scaling up from plot or field studies or down from regional or national levels. The stable nighttime atmosphere acts as a chamber within which sequentially measured GHG concentration profiles determine the flux of GHGs. With the overall goal of refining the nocturnal boundary layer (NBL) budget method to obtain reliable flux estimates at a scale representative of the typical eastern Canadian farm (approximately 1 km2), fluxes of CO2, N2O, and CH4 were measured at two agricultural farms in Eastern Canada. Field sites in 1998 and 2002 were located on an experimental farm adjacent to a suburb southwest of the city of Ottawa, ON, a relatively flat area with corn, hay, and soy as the dominant crops. The field site in 2003 was located in the rural community of Coteau-du-Lac, QC, about 20 km southwest of the island of Montreal, a fairly flat area bordered by the St. Lawrence River to the south, consisting mainly of corn and hay with a mixture of soy and vegetable crops. Good agreement was obtained between the overall mean NBL budget-measured CO2 flux at both sites, near-in-time eddy covariance data from windy nights, and previously published results. The mean NBL-measured N2O flux across all wind directions and farming management practices was of the same order of magnitude as, but slightly higher than, previously published baseline N2O emissions from agroecosystems. Methane flux results were judged to be invalid, as they were extremely sensitive to changes in wind direction. Spatial sampling of CO2, N2O, and CH4 around the two sites confirmed that the [CH4] distribution was particularly sensitive to the nature of the emission source, field conditions, and wind direction. Optimal NBL conditions for measuring GHG fluxes, present approximately 60% of the t
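As a hedged illustration of the NBL budget idea described above (a sketch under nominal assumptions about units and profile handling, not the thesis code), the flux follows from the change in column-integrated concentration between two successive nighttime profiles:

    # Nocturnal-boundary-layer budget sketch: integrate two vertical concentration
    # profiles measured dt_s seconds apart and attribute the storage change to the
    # surface flux. Heights, units, and inputs are illustrative assumptions only.
    import numpy as np

    def nbl_budget_flux(z, conc_early, conc_late, dt_s):
        """z in m, concentrations in mol m^-3, dt_s in s -> flux in mol m^-2 s^-1."""
        storage_change = np.trapz(conc_late, z) - np.trapz(conc_early, z)
        return storage_change / dt_s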
APA, Harvard, Vancouver, ISO, and other styles
41

Promwungkwa, Anucha. "Data Structure and Error Estimation for an Adaptive p-Version Finite Element Method in 2-D and 3-D Solids." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30507.

Full text
Abstract:
Automation of finite element analysis based on a fully adaptive p-refinement procedure can reduce the magnitude of discretization error to the desired accuracy with minimum computational cost and computer resources. This study aims to 1) develop an efficient p-refinement procedure with a non-uniform p analysis capability for solving 2-D and 3-D elastostatic mechanics problems, and 2) introduce a stress error estimate. An element-by-element algorithm that decides the appropriate order for each element, where element orders can range from 1 to 8, is described. Global and element data structures that manage the complex data generated during the refinement process are introduced. These data structures are designed to match the concept of object-oriented programming, in which data and functions are managed and organized together. The stress error indicator introduced is found to be more reliable and to converge faster than the energy-norm error indicator known as the residual method. The use of the stress error indicator results in approximately 20% fewer degrees of freedom than the residual method. Agreement between the calculated stress error values and the stress error indicator values confirms to the analyst that the final stresses have converged. The error order of the stress error estimate is postulated to be one order higher than that of the error estimate based on the residual method. The mapping of a curved boundary element in the working coordinate system from a square-shaped element in the natural coordinate system results in a significant improvement in the accuracy of stress results. Numerical examples demonstrate that refinement using non-uniform p analysis is superior to uniform p analysis in the convergence rates of output stresses or related terms. Non-uniform p analysis uses approximately 50% to 80% less computational time than uniform p analysis in solving the selected stress concentration and stress intensity problems. More importantly, the non-uniform p-refinement procedure reduces the number of equations by one half to three quarters. Therefore, a small-scale computer can be used to solve equation systems generated using high-order p-elements. In the calculation of the stress intensity factor of a semi-elliptical surface crack in a finite-thickness plate, non-uniform p analysis used fewer degrees of freedom than a conventional h-type element analysis found in the literature.
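A minimal sketch of an element-by-element order-selection pass of the kind described above; the per-element indicator values and the bookkeeping are placeholder assumptions, not the dissertation's data structures:

    # Raise the polynomial order of each element whose error indicator exceeds a
    # target, keeping orders within the supported range 1..8; the solve/estimate/
    # adapt cycle is repeated until no element changes.
    def p_refine_pass(element_orders, indicator, target, p_max=8):
        """element_orders: dict element-id -> current order;
        indicator: dict element-id -> error value (e.g. a stress-based estimate)."""
        changed = False
        for elem, err in indicator.items():
            if err > target and element_orders[elem] < p_max:
                element_orders[elem] += 1      # non-uniform p: per-element order
                changed = True
        return changed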
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
42

Ozcelikkale, Altug. "Development Of An Incompressible, Laminar Flowsolver Based On Least Squares Spectral Element Methodwith P-type Adaptive Refinement Capabilities." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12612096/index.pdf.

Full text
Abstract:
The aim of this thesis is to develop a flow solver that has the ability to obtain an accurate numerical solution fast and efficiently with minimum user intervention. In this study, a two-dimensional viscous, laminar, incompressible flow solver based on Least-Squares Spectral Element Method (LSSEM) is developed. The LSSEM flow solver can work on hp-type nonconforming grids and can perform p-type adaptive refinement. Several benchmark problems are solved in order to validate the solver and successful results are obtained. In particular, it is demonstrated that p-type adaptive refinement on hp-type non-conforming grids can be used to improve the quality of the solution. Moreover, it is found that mass conservation performance of LSSEM can be enhanced by using p-type adaptive refinement strategies while keeping computational costs reasonable.
APA, Harvard, Vancouver, ISO, and other styles
43

Weir, Kenneth. "A refinement to the semi-quantitative RT-PCR method and defense related, stress resistance and insulin pathway gene expression during Sarcophaga crassipalpis diapause." Diss., Online access via UMI:, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
44

Akdag, Osman. "Incompressible Flow Simulations Using Least Squares Spectral Element Method On Adaptively Refined Triangular Grids." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614944/index.pdf.

Full text
Abstract:
The main purpose of this study is to develop a flow solver that employs triangular grids to solve two-dimensional, viscous, laminar, steady, incompressible flows. The flow solver is based on the Least Squares Spectral Element Method (LSSEM). It has p-type adaptive mesh refinement/coarsening capability and supports p-type nonconforming element interfaces. To validate the developed flow solver, several benchmark problems are studied and successful results are obtained. The performances of two different triangular nodal distributions, namely the Lobatto distribution and the Fekete distribution, are compared in terms of accuracy and implementation complexity. The accuracies provided by triangular and quadrilateral grids of equal computational size are also compared. Adaptive mesh refinement studies are conducted using three different error indicators, including a novel one based on elemental mass loss. The effect of modifying the least-squares functional by multiplying the continuity equation by a weight factor is investigated with regard to mass conservation.
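The elemental mass-loss indicator mentioned above can be pictured with a short hedged sketch: for incompressible flow the net volume flux through an element's boundary should vanish, so its magnitude can flag elements for p-enrichment. The quadrature data layout and names are assumptions for illustration, not the thesis implementation:

    # Net volume flux u·n integrated over an element's boundary quadrature points;
    # a nonzero value signals local mass loss and marks the element for refinement.
    import numpy as np

    def elemental_mass_loss(normals, weights, velocities):
        return abs(sum(w * float(np.dot(u, n))
                       for n, w, u in zip(normals, weights, velocities)))

    def mark_elements(elements, tol):
        """elements: iterable of (normals, weights, velocities) tuples."""
        return [i for i, e in enumerate(elements) if elemental_mass_loss(*e) > tol]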
APA, Harvard, Vancouver, ISO, and other styles
45

Santos, Cássio Morilla dos [UNESP]. "Síntese e caracterização de compostos HoMn1-x(Ni,Co)xO3." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/106644.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
In this work, the synthesis and the structural and magnetic characterization of HoMn1-x(Ni,Co)xO3 compounds with the perovskite structure were carried out. The samples were synthesized by the modified polymeric precursor method. After synthesis and solvent removal, the polymeric resin formed was treated at 350 ºC for 4 h to remove the organic constituents, followed by heat treatments at 500 ºC for 4 h and 900 ºC for 20 h to obtain the crystalline phase. The structural characterization was performed at the D10B-XPD beamline of the Laboratório Nacional de Luz Síncrotron (LNLS), using X-ray wavelengths below the absorption edges of cobalt, manganese and nickel. The formation of the HoNi0.50Mn0.50O3, HoCo0.50O3 and HoNi0.25Co0.25Mn0.50O3 phases was observed by X-ray diffraction. Rietveld refinement of the HoNi0.25Co0.25Mn0.50O3 sample showed that cobalt and nickel have similar occupations at the top and the base of the unit cell, while manganese preferentially occupies the 002 plane. The magnetic response of the samples was studied through magnetization curves as a function of temperature and of the applied magnetic field. The ZFC curves showed a paramagnetic response associated with the magnetic moment of holmium, together with the coexistence of ferromagnetism, antiferromagnetism and ferrimagnetism due to the sublattices formed by the transition metals. The FC curves evidenced the spin-reversal phenomenon, associated with the interaction between the transition-metal sublattices and the rare-earth sublattice, consistent with an antiferromagnetic exchange interaction mechanism.
APA, Harvard, Vancouver, ISO, and other styles
46

Yao, Haiqiong. "Methods and Algorithms for Scalable Verification of Asynchronous Designs." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4263.

Full text
Abstract:
Concurrent systems are getting more complex with the advent of multi-core processors and the support of concurrent programs. However, errors in concurrent systems are too subtle to detect with traditional testing and simulation. Model checking is an effective method to verify concurrent systems by exhaustively searching the complete state space exhibited by a system. However, the main challenge for model checking is state explosion: the state space of a concurrent system grows exponentially in the number of components of the system. The state space explosion problem prevents model checking from being applied to systems of realistic size. After decades of intensive research, a large number of methods have been developed to attack this well-known problem. Compositional verification is one of the promising methods that can scale to large, complex concurrent systems. In compositional verification, the task of verifying an entire system is divided into smaller tasks of verifying each component of the system individually. The correctness of the properties on the entire system can be derived from the results of the local verification of individual components. This method avoids building up the global state space for the entire system, and accordingly alleviates the state space explosion problem. In order to facilitate the application of compositional verification, several issues need to be addressed. The generation of over-approximate and yet accurate environments for components for local verification is a major focus of automated compositional verification. This dissertation addresses this issue by proposing two abstraction refinement methods that iteratively refine the state space of each component with an over-approximate environment. The basic idea of these two abstraction refinement methods is to examine the interface interactions among different components and remove the behaviors that are not allowed on the components' interfaces from their corresponding state spaces. After the extra behaviors introduced by the over-approximate environment are removed by the abstraction refinement methods, the initial coarse environments become more accurate. The difference between these two methods lies in how they identify and remove the illegal behaviors generated by the over-approximate environments. For local properties that can be verified on individual components, compositional reasoning can be scaled to large systems by leveraging the proposed abstraction refinement methods. However, for global properties that cannot be checked locally, the state space of the whole system needs to be constructed. To alleviate the state explosion problem when generating the global state space by composing the local state spaces of the individual components, this dissertation also proposes several state space reduction techniques that simplify the state space of each component and help the compositional minimization method generate a much smaller global state space for the entire system. These state space reduction techniques are sound and complete in that they keep all the behaviors on the interface but do not introduce any extra behaviors; therefore, the verification results derived from the reduced global state space are also valid on the original state space of the entire system.
An automated compositional verification framework integrated with all the abstraction refinement methods and the state space reduction techniques presented in this dissertation has been implemented in an explicit model checker Platu. It has been applied to experiments on several non-trivial asynchronous circuit designs to demonstrate its scalability. The experimental results show that our automated compositional verification framework is effective on these examples that are too complex for the monolithic model checking methods to handle.
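A hedged sketch of the interface-driven abstraction refinement loop described in this abstract; the helper functions are placeholders standing in for the dissertation's algorithms, not reproductions of them:

    # Iteratively prune each component's state space of behaviours that its
    # neighbours' interfaces cannot actually produce, until a fixed point is
    # reached; the remaining over-approximation is then used for local verification.
    def refine_components(components, allowed_interface_behaviours, prune_illegal):
        changed = True
        while changed:                          # fixed point over all components
            changed = False
            for comp in components:
                allowed = allowed_interface_behaviours(comp, components)
                if prune_illegal(comp, allowed):    # True if anything was removed
                    changed = True
        return components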
APA, Harvard, Vancouver, ISO, and other styles
47

Gloger, Oliver [Verfasser]. "Combined Applications of the Level Set Method with Multi-Step Recognition and Refinement Algorithms for Fully Automatic Organ and Tissue Segmentation in MRI Data / Oliver Gloger." Greifswald : Universitätsbibliothek Greifswald, 2012. http://d-nb.info/1022617842/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Hoffmann, Helene [Verfasser], and Ingeborg [Akademischer Betreuer] Levin. "Micro radiocarbon dating of the particulate organic carbon fraction in Alpine glacier ice: method refinement, critical evaluation and dating applications / Helene Margarethe Hoffmann ; Betreuer: Ingeborg Levin." Heidelberg : Universitätsbibliothek Heidelberg, 2016. http://d-nb.info/1180611837/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Carrio, Juan Alfredo Guevara. "Análise estrutural de materiais cerâmicos com estrutura de perovskita." Universidade de São Paulo, 1998. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-10032014-171450/.

Full text
Abstract:
Several compounds with the general formula AMO3 (A = Sr, Ca, Ba; M = Ru, Ti, Hf) and the perovskite structure were synthesized. A structural analysis of the compounds was performed by conventional and synchrotron X-ray powder diffraction and by neutron powder diffraction. Two of the most widely recognized programs for the Rietveld method, DBWS and GSAS, were used for this analysis. All of the analyzed structures were classified according to Glazer's octahedral tilt system and represented graphically with the program Atoms. The structure of SrHfO3 was determined by neutron and X-ray powder diffraction. The dependence of the structure of these compounds on temperature was studied and, in the case of the solid solutions SrTi1-xRuxO3 (0 ≤ x ≤ 1), also on composition. Two structural phase transitions with temperature were found in the compounds SrRuO3 and SrHfO3. For the solid solutions SrTi1-xRuxO3, the correlation of the structure with the electrical properties and the electronic structure was studied.
APA, Harvard, Vancouver, ISO, and other styles
50

Saeed, Usman. "Adaptive numerical techniques for the solution of electromagnetic integral equations." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41173.

Full text
Abstract:
Various error estimation and adaptive refinement techniques for the solution of electromagnetic integral equations were developed. Residual-based error estimators and h-refinement schemes were implemented for the Method of Moments (MoM) solution of electromagnetic integral equations for a number of different problems. Due to the high computational cost associated with the MoM, a cheaper solution technique known as the Locally-Corrected Nyström (LCN) method was explored. Several explicit and implicit techniques for error estimation in the LCN solution of electromagnetic integral equations were proposed and implemented for different geometries to successfully identify high-error regions. A simple p-refinement algorithm was developed and implemented for a number of prototype problems using the proposed estimators. Numerical error was found to be significantly reduced in the high-error regions after refinement. A simple computational cost analysis was also presented for the proposed error estimation schemes. Various cost-accuracy trade-offs and problem-specific limitations of the different error estimation techniques were discussed. Finally, an important problem of slope mismatch between the global error rates of the solution and the residual was identified, and a few methods to compensate for this mismatch using scale factors based on matrix norms were developed.
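As a hedged sketch of residual-driven p-refinement in the spirit described above (the names, and the step of re-evaluating the residual on a finer testing scheme, are illustrative assumptions rather than the dissertation's implementation):

    # Per-element norms of the residual of Z x = b evaluated on a finer testing
    # scheme than the one used to solve (the residual of the solved system itself
    # would be ~0); elements with large residual norms get a higher order next pass.
    import numpy as np

    def residual_indicator(Z_fine, x_on_fine, b_fine, element_rows):
        """element_rows: dict element-id -> row indices of that element's fine tests."""
        r = Z_fine @ x_on_fine - b_fine
        return {elem: np.linalg.norm(r[rows]) for elem, rows in element_rows.items()}

    def p_refine(orders, indicator, tol, p_max=6):
        return {elem: min(p_max, p + 1) if indicator[elem] > tol else p
                for elem, p in orders.items()}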
APA, Harvard, Vancouver, ISO, and other styles
