To see the other types of publications on this topic, follow the link: Latin hypercube sampling technique.

Dissertations / Theses on the topic 'Latin hypercube sampling technique'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 24 dissertations / theses for your research on the topic 'Latin hypercube sampling technique.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

OTTONELLO, ANDREA. "Application of Uncertainty Quantification techniques to CFD simulation of twin entry radial turbines." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1046507.

Full text
Abstract:
The main topic of the thesis is the application of uncertainty quantification (UQ) techniques to the numerical (CFD) simulation of twin entry radial turbines used in automotive turbocharging. The detailed study of this type of turbomachinery is addressed in chapter 3, aimed at understanding the main parameters which characterize and influence the fluid dynamic performance of twin scroll turbines. Chapter 4 deals with the development of an in-house platform for UQ analysis built on the 'Dakota' open-source toolset. The platform was first tested on a case of industrial interest, i.e. a supersonic de Laval nozzle (chapter 5); the analysis highlighted the practical use of uncertainty quantification techniques in predicting the performance of a nozzle operating at off-design conditions, with fluid dynamic complexity due to strong non-linearity. The experience gained with the UQ approach facilitated the identification of suitable methods for applying uncertainty propagation to the CFD simulation of twin entry radial turbines (chapter 6).
In this case, different uncertainty quantification techniques were investigated and put into practice in order to acquire in-depth experience of the current state of the art. The comparison of the results from the different approaches, and the discussion of the pros and cons of each technique, led to interesting conclusions, which are proposed as guidelines for future uncertainty quantification applications to the CFD simulation of radial turbines. The integration of UQ models and methodologies, today used only by some academic research centers, with well-established commercial CFD solvers made it possible to achieve the final goal of the doctoral thesis: to demonstrate to industry the high potential of UQ techniques in improving, through probability distributions, the prediction of the performance of a component subject to different sources of uncertainty. The purpose of the research activity is therefore to provide designers with performance data and associated margins of uncertainty that allow simulation and real application to be better correlated. Due to confidentiality agreements, geometrical parameters concerning the studied twin entry radial turbine are provided in dimensionless form, confidential data on graph axes are omitted, and contour legends as well as any dimensional references have been obscured.
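The workflow this abstract describes, propagating input uncertainty through a solver to obtain an output probability distribution, can be sketched in a few lines. The model below is a hypothetical algebraic stand-in for the CFD solver; the function name, input distributions, and constants are all illustrative assumptions, not taken from the thesis or from Dakota:

```python
import random
import statistics

def nozzle_model(inlet_pressure, inlet_temperature):
    # Hypothetical algebraic stand-in for the expensive CFD solver;
    # NOT a formula from the thesis
    return 0.8 * inlet_pressure * (inlet_temperature / 300.0) ** 0.5

rng = random.Random(7)
# Assumed input uncertainties: Gaussian inlet pressure, uniform temperature
samples = [
    nozzle_model(rng.gauss(2.0e5, 5.0e3), rng.uniform(290.0, 310.0))
    for _ in range(2000)
]

# The predicted performance is now a distribution, not a single number
mean = statistics.mean(samples)
spread = statistics.stdev(samples)
```

In a real study the plain Monte Carlo loop would be replaced by a stratified design such as Latin hypercube sampling, and the stand-in function by calls to the CFD code.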
APA, Harvard, Vancouver, ISO, and other styles
2

Green, Robert C. II. "Novel Computational Methods for the Reliability Evaluation of Composite Power Systems using Computational Intelligence and High Performance Computing Techniques." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1338894641.

Full text
3

Batiston, Evandro Lino. "Simulação Sequencial Gaussiana usando Latin Hypercube Sampling : estudo de caso minério de ferro Carajás." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/28838.

Full text
Abstract:
Assessing geological uncertainty is of paramount importance in mining industry risk analysis. Sequential Gaussian Simulation (SGS) is widely used for building such models, especially when mapping grade uncertainty. SGS is commonly used for mapping the uncertainty space of a random variable (RV) Z(u), and the number of realizations L needed to adequately characterize this space can be large. Two algorithms for random drawing from the conditional cumulative distribution function (ccdf) were implemented in combination with SGS: the classical algorithm, based on simple Monte Carlo drawing and known as Simple Random Sampling (SRS), and the alternative method, Latin Hypercube Sampling (LHS). The present dissertation compares the efficiency of these two algorithms in characterizing the uncertainty space of some transfer functions employed in the mineral industry. Through a case study, the number of realizations necessary to adequately characterize the variability of these response functions was checked as a mechanism for comparison. The dataset comes from an iron ore mine in the Carajás Mineral Province. It was observed that the LHS method is more efficient in characterizing the uncertainty space of the RV Z(u), by stratifying the ccdf according to each realization. This characteristic of LHS requires fewer realizations to properly build the uncertainty model. These benefits facilitate the incorporation of simulations into planning routines, using smaller and easier-to-manipulate uncertainty models.
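The drawing step being compared here is easy to illustrate in one dimension: both algorithms push uniform numbers through the inverse of the target distribution, but LHS takes exactly one draw from each of L equal-probability strata, so the whole distribution is covered by construction. A minimal sketch, using a standard normal in place of a local ccdf (an assumption for illustration only):

```python
import random
import statistics

def srs_draws(inv_cdf, n, rng):
    # Simple Random Sampling: n independent uniform draws through the inverse cdf
    return [inv_cdf(rng.random()) for _ in range(n)]

def lhs_draws(inv_cdf, n, rng):
    # Latin hypercube sampling in 1-D: one draw from each of the n
    # equal-probability strata, shuffled so realization order is random
    strata = list(range(n))
    rng.shuffle(strata)
    return [inv_cdf((k + rng.random()) / n) for k in strata]

# Standard-normal inverse cdf stands in for the local ccdf (Python >= 3.8)
inv = statistics.NormalDist().inv_cdf

rng = random.Random(42)
srs = srs_draws(inv, 20, rng)
lhs = lhs_draws(inv, 20, rng)
# With only 20 realizations, the LHS sample mean is typically far closer to 0
print(statistics.mean(srs), statistics.mean(lhs))
```

The stratification is what lets LHS reproduce the target distribution with fewer realizations than SRS, which is the effect the case study measures.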
4

Arnesson, Jennifer. "Numerical Modeling of Radioactive Release Implemented for Loss of Coolant Accident, Evaluated using Latin Hypercube Sampling." Thesis, KTH, Reaktorfysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169622.

Full text
5

Lim, Nay Kim. "Optimization of TIG weld geometry using a Kriging surrogate model and Latin Hypercube sampling for data generation." Thesis, California State University, Long Beach, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1527976.

Full text
Abstract:
The purpose of this study was to use a systematic approach to model the welding parameters for tungsten inert gas (TIG) welding of two thin pieces of titanium sheet metal. The experiment was carried out using mechanized welding equipment to perform each trial consistently. The Latin hypercube sampling method was applied to four input variables for data generation, and the Kriging interpolation technique was used to develop surrogate models that map the four input variables to the three output parameters of interest. A total of fifty data points were generated. To make use of a minimal number of weld samples, Kriging models were created using five sets of data at a time, and each surrogate model was tested against an actual data output. Once the models had been generated and verified, an attempt was made to optimize the output variables individually and as a multi-objective problem using genetic algorithm (GA) optimization.
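The data-generation step, a Latin hypercube design over the input variables, can be sketched as follows. The 50-point, 4-variable shape matches the abstract, but the code is an illustrative stand-in rather than the study's actual tooling, and scaling the unit-cube columns to physical weld-parameter ranges is omitted:

```python
import random

def lhs_design(n, d, rng):
    # n-point Latin hypercube design in [0, 1]^d: each column is an
    # independent random permutation of the n strata, jittered within each
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(k + rng.random()) / n for k in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]

rng = random.Random(0)
# 50 design points over 4 normalized weld inputs (e.g. current, speed, ...)
design = lhs_design(50, 4, rng)
```

Each column of the design hits every one of the 50 strata exactly once, which is what makes the 50 runs space-filling enough to train a Kriging surrogate.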
6

Packham, Natalie [Verfasser]. "Credit dynamics in a first-passage time model with jumps and Latin hypercube sampling with dependence / Natalie Packham." Frankfurt am Main : Frankfurt School of Finance & Management gGmbH, 2008. http://d-nb.info/1011759780/34.

Full text
7

Brungard, Colby W. "Alternative Sampling and Analysis Methods for Digital Soil Mapping in Southwestern Utah." DigitalCommons@USU, 2009. http://digitalcommons.usu.edu/etd/472.

Full text
Abstract:
Digital soil mapping (DSM) relies on quantitative relationships between easily measured environmental covariates and field and laboratory data. We applied innovative sampling and inference techniques to predict the distribution of soil attributes, taxonomic classes, and dominant vegetation across a 30,000-ha complex Great Basin landscape in southwestern Utah. This arid rangeland was characterized by rugged topography, diverse vegetation, and intricate geology. Environmental covariates calculated from digital elevation models (DEM) and spectral satellite data were used to represent factors controlling soil development and distribution. We investigated optimal sample size and sampled the environmental covariates using conditioned Latin Hypercube Sampling (cLHS). We demonstrated that cLHS, a type of stratified random sampling, closely approximated the full range of variability of environmental covariates in feature and geographic space with small sample sizes. Site and soil data were collected at 300 locations identified by cLHS. Random forests was used to generate spatial predictions and associated probabilities of site and soil characteristics. Balanced random forests and balanced and weighted random forests were investigated for their use in producing an overall soil map. Overall and class errors (referred to as out-of-bag [OOB] error) were within acceptable levels. Quantitative covariate importance was useful in determining what factors were important for soil distribution. Random forest spatial predictions were evaluated based on the conceptual framework developed during field sampling.
8

Huh, Jungwon, Quang Tran, Achintya Haldar, Innjoon Park, and Jin-Hee Ahn. "Seismic Vulnerability Assessment of a Shallow Two-Story Underground RC Box Structure." MDPI AG, 2017. http://hdl.handle.net/10150/625742.

Full text
Abstract:
Tunnels, culverts, and subway stations are the main parts of an integrated infrastructure system. Most of them are constructed by the cut-and-cover method at shallow depths (mainly lower than 30 m) of soil deposits, where large-scale seismic ground deformation can occur with lower stiffness and strength of the soil. Therefore, the transverse racking deformation (one of the major seismic ground deformations) due to soil shear deformations should be included in the seismic design of underground structures using cost- and time-efficient methods that can achieve robustness of design and are easily understood by engineers. This paper aims to develop a simplified but comprehensive approach to vulnerability assessment in the form of fragility curves for a shallow two-story reinforced concrete underground box structure constructed in highly weathered soil. In addition, a comparison of the results of earthquakes per peak ground acceleration (PGA) is conducted to determine the effective and appropriate number for cost- and time-benefit analysis. The ground response acceleration method for buried structures (GRAMBS) is used to analyze the behavior of the structure subjected to transverse seismic loading under quasi-static conditions. Furthermore, the damage states that indicate the exceedance level of the structural strength capacity are described by the results of nonlinear static analyses (so-called pushover analyses). The Latin hypercube sampling technique is employed to consider the uncertainties associated with the material properties and concrete cover owing to the variation in construction conditions. Finally, a large number of artificial ground shakings satisfying the design spectrum are generated in order to develop the seismic fragility curves based on the defined damage states.
It is worth noting that the number of ground motions per PGA, which is equal to or larger than 20, is a reasonable value to perform a structural analysis that produces satisfactory fragility curves.
9

Taheri, Mehdi. "Machine Learning from Computer Simulations with Applications in Rail Vehicle Dynamics and System Identification." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81417.

Full text
Abstract:
The application of stochastic modeling for learning the behavior of multibody dynamics models is investigated. The stochastic modeling technique is also known as Kriging or the random function approach. Post-processing data from a simulation run is used to train the stochastic model that estimates the relationship between model inputs, such as the suspension relative displacement and velocity, and the output, for example, the sum of suspension forces. The computational efficiency of Multibody Dynamics (MBD) models can be improved by replacing their computationally intensive subsystems with stochastic predictions. The stochastic modeling technique is able to learn the behavior of a physical system and integrate that behavior into MBD models, resulting in improved real-time simulations and reduced computational effort in models with repeated substructures (for example, modeling a train with a large number of rail vehicles). Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, various sampling plans are investigated, and a space-filling Latin hypercube sampling plan based on the traveling salesman problem (TSP) is suggested for efficiently representing the entire parameter space. The simulation results confirm the expected increase in modeling efficiency, although further research is needed to improve the accuracy of the predictions. The prediction accuracy is expected to improve through a sampling strategy that considers the discrete nature of the training data and uses infill criteria that consider the shape of the output function and detect sample spaces with high prediction errors. It is recommended that future efforts consider quantifying the computational efficiency of the proposed learning approach by overcoming the inefficiencies associated with transferring data between multiple software packages, which proved to be a limiting factor in this study.
These limitations can be overcome by using the user subroutine functionality of SIMPACK and adding the stochastic modeling technique to its force library.
10

Grabaskas, David. "Efficient Approaches to the Treatment of Uncertainty in Satisfying Regulatory Limits." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345464067.

Full text
11

Thew, Johnny D. "Mathematical modelling of optimal bFGF delivery to chronic wounds." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121153/1/Johnny_Thew_Thesis.pdf.

Full text
Abstract:
This project investigated the impact from controlled growth factor release rates on chronic wound healing, with the aim of re-establishing concentrations associated with acute wound healing while minimising the amount of growth factor required. Theoretical predictions were designed using differential equation-based models simulating the wound environment and optimal control theory to measure the cost of the treatment. Outcomes identified key parameters associated with growth factor proteolysis, contributing to future clinical and experimental trials, and demonstrated optimal release rates by which growth factors concentrations can be re-established in chronic wounds.
12

Alhasan, Zakaraya. "Pravděpodobnostní řešení porušení ochranné hráze v důsledku přelití." Doctoral thesis, Vysoké učení technické v Brně. Fakulta stavební, 2017. http://www.nusl.cz/ntk/nusl-355636.

Full text
Abstract:
This doctoral thesis deals with the reliability analysis of flood protection dikes by estimating the probability of dike failure. Based on theoretical knowledge, experimental and statistical research, mathematical models, and a field survey, the study extends present knowledge concerning the reliability analysis of dikes vulnerable to breaching due to overtopping. It contains the results of a probabilistic solution of the breaching of a left-bank dike of the River Dyje at a location adjacent to the village of Ladná near the town of Břeclav in the Czech Republic. Within this work, a mathematical model describing the overtopping and erosion processes was proposed. The dike overtopping is simulated using simple surface hydraulics equations. For modelling the dike erosion, which commences once the erosion resistance of the dike surface is exceeded, simple transport equations were used, with erosion parameters calibrated against data from past real embankment failures. In the analysis of the model, uncertainty in the input parameters was determined, and a sensitivity analysis was subsequently carried out using the screening method. To achieve the probabilistic solution, selected input parameters were treated as random variables with different probability distributions, and the Latin Hypercube Sampling (LHS) method was used to generate the sets of random values for these variables. Four typical phases of dike breaching due to overtopping were distinguished. The final results of this study take the form of probabilities for those typical dike breach phases.
13

Šomodíková, Martina. "Nelineární analýza zatížitelnosti železobetonového mostu." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2012. http://www.nusl.cz/ntk/nusl-225475.

Full text
Abstract:
The subject of this master's thesis is the determination of bridge load-bearing capacity and a fully probabilistic approach to reliability assessment. It includes a nonlinear analysis of the load-bearing capacity of a specific bridge in compliance with current standards, together with its stochastic and sensitivity analysis. In connection with the durability limit states of reinforced concrete structures, the influence of carbonation and reinforcement corrosion on the structure's reliability is also addressed.
14

Michal, Ondřej. "Stanovení hodnot materiálových parametrů s využitím experimentů různých konfigurací." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2012. http://www.nusl.cz/ntk/nusl-225579.

Full text
Abstract:
The work deals with inverse analysis based on an artificial neural network. This identification algorithm enables the parameters of the applied material model to be determined correctly when creating a numerical model of a structure, so that the results of the computer simulation correspond with experiments. It appears to be a suitable approach, especially for complicated problems and complex models with many material parameters.
15

Hortová, Michaela. "Využití softwarové podpory pro ekonomické hodnocení investičního projektu." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2016. http://www.nusl.cz/ntk/nusl-240057.

Full text
Abstract:
This thesis deals with an economic evaluation case study of an Ekofarm construction project using the applications Crystal Ball and Pertmaster Risk Project. The thesis presents the fundamental characteristics of the investment project and methods for its evaluation. The basic features of both applications for probabilistic risk analysis, performed with the Latin Hypercube Sampling simulation method, are introduced. The case study is described in detail, including the breeding system and the method of financing. This is linked to the calculation of economic fundamentals and the creation of the project cash flow. The result is the probabilistic analysis output by the tested software tools, and its evaluation.
16

Martinásková, Magdalena. "Porovnání účinnosti návrhů experimentů pro statistickou analýzu úloh s náhodnými vstupy." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2014. http://www.nusl.cz/ntk/nusl-226781.

Full text
Abstract:
The thesis presents methods and criteria for the creation and optimization of designs of computer experiments. Using the core of the program Freet, optimized designs were created by combining these methods and criteria. The suitability of the designs for statistical analysis of tasks with random input variables was then assessed by comparing the results obtained for six selected functions with the exact (analytically obtained) solutions. Basic theory, definitions of the evaluated functions, a description of the optimization settings, and a discussion of the obtained results, including recommendations related to identified weaknesses of certain designs, are presented. The thesis also contains a description of an application that was created to display the results.
17

Slowik, Ondřej. "Pravděpodobnostní optimalizace konstrukcí." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2014. http://www.nusl.cz/ntk/nusl-226801.

Full text
Abstract:
This thesis introduces the reader to the importance of optimization and the probabilistic assessment of structures for civil engineering problems. Chapter 2 further investigates the combination of previously proposed optimization techniques with probabilistic assessment in the form of optimization constraints. Academic software has been developed to demonstrate the effectiveness of the suggested methods and for their statistical testing. Chapter 3 summarizes the results of testing the previously described optimization method (called Aimed Multilevel Sampling), including a comparison with other optimization techniques. In the final part of the thesis, the described procedures are demonstrated on selected optimization and reliability problems. The methods described in the text represent an engineering approach to optimization problems and aim to introduce a simple and transparent optimization algorithm that could serve practical engineering purposes.
18

Wahlström, Dennis. "Probabilistic Multidisciplinary Design Optimization on a high-pressure sandwich wall in a rocket engine application." Thesis, Umeå universitet, Institutionen för fysik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-138480.

Full text
Abstract:
Better performance has always been sought in the space industry. Advanced technologies are developed to accomplish the goals of space exploration and space missions, to find answers and widen knowledge. In this thesis the aim is to understand and reduce the mass of a space nozzle used in the upper stage of a space mission with an expander-cycle engine. The study is carried out by creating a design of experiments using Latin Hypercube Sampling (LHS), with consideration given to the number of designs and the simulation expense. Surrogate-model-based optimization with two different Multidisciplinary Design Optimization (MDO) approaches, Analytical Target Cascading (ATC) and Multidisciplinary Feasible (MDF), is used for comparison and to strengthen the conclusions. In the optimization, three different limitations are investigated: the design-space limit, the industrial limit, and the industrial limit with tolerance. The optimized results show an incompatibility between the two optimization approaches, ATC and MDF, which were expected to be similar but agree less well for two of the limitations, the design-space limit and the industrial limit. The ATC formalism in this case is dictated by the main objective, where the children/subproblems only seek a solution that satisfies the main objective and its constraint. For MDF, the main objective is described as a single function and solved subject to all the constraints; the problem is not divided into subproblems as in ATC. The solution of surrogate-model-based optimization is influenced by the accuracy of the model, and this is investigated with another DoE.
A full-factorial DoE is created and studied in a region near the optimal solution. In that region, the results prove quite accurate for almost all the surrogate models, except for maximum temperature, damage, and strain at the hottest region, with the largest common impact on the inner-wall thickness of the space nozzle. Results for the new structure of the space nozzle show a mass improvement of ≈ 50%, ≈ 15%, and ≈ -4% for the three limitations (design-space limit, industrial limit, and industrial limit with tolerance) relative to a reference value, and manufacturing costs ≈ 10%, ≈ 35%, and ≈ 25% cheaper according to the defined producibility model.
19

Garg, Vikram Vinod 1985. "Coupled flow systems, adjoint techniques and uncertainty quantification." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-08-6034.

Full text
Abstract:
Coupled systems are ubiquitous in modern engineering and science. Such systems can encompass fluid dynamics, structural mechanics, chemical species transport and electrostatic effects among other components, all of which can be coupled in many different ways. In addition, such models are usually multiscale, making their numerical simulation challenging, and necessitating the use of adaptive modeling techniques. The multiscale, multiphysics models of electroosmotic flow (EOF) constitute a particularly challenging coupled flow system. A special feature of such models is that the coupling between the electric physics and hydrodynamics is via the boundary. Numerical simulations of coupled systems are typically targeted towards specific Quantities of Interest (QoIs). Adjoint-based approaches offer the possibility of QoI-targeted adaptive mesh refinement and efficient parameter sensitivity analysis. The formulation of appropriate adjoint problems for EOF models is particularly challenging, due to the coupling of physics via the boundary as opposed to the interior of the domain. The well-posedness of the adjoint problem for such models is also non-trivial. One contribution of this dissertation is the derivation of an appropriate adjoint problem for slip EOF models, and the development of penalty-based, adjoint-consistent variational formulations of these models. We demonstrate the use of these formulations in the simulation of EOF flows in straight and T-shaped microchannels, in conjunction with goal-oriented mesh refinement and adjoint sensitivity analysis. Complex computational models may exhibit uncertain behavior due to various reasons, ranging from uncertainty in experimentally measured model parameters to imperfections in device geometry. The last decade has seen a growing interest in the field of Uncertainty Quantification (UQ), which seeks to determine the effect of input uncertainties on the system QoIs.
Monte Carlo methods remain a popular computational approach for UQ due to their ease of use and "embarrassingly parallel" nature. However, a major drawback of such methods is their slow convergence rate. The second contribution of this work is the introduction of a new Monte Carlo method which utilizes local sensitivity information to build accurate surrogate models. This new method, called the Local Sensitivity Derivative Enhanced Monte Carlo (LSDEMC) method, can converge at a faster rate than plain Monte Carlo, especially for problems with a low to moderate number of uncertain parameters. Adjoint-based sensitivity analysis methods enable the computation of sensitivity derivatives at virtually no extra cost after the forward solve. Thus, the LSDEMC method, in conjunction with adjoint sensitivity derivative techniques, can offer a robust and efficient alternative for the UQ of complex systems. The efficiency of Monte Carlo methods can be further enhanced by using stratified sampling schemes such as Latin Hypercube Sampling (LHS). However, the non-incremental nature of LHS has been identified as one of the main obstacles in its application to certain classes of complex physical systems. Current incremental LHS strategies restrict the user to at least doubling the size of an existing LHS set to retain the convergence properties of LHS. The third contribution of this research is the development of a new Hierarchical LHS algorithm that creates designs which can be used to perform LHS studies in a more flexibly incremental setting, taking a step towards adaptive LHS methods.
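The "at least doubling" restriction on incremental LHS can be made concrete: given an n-point design, one may add n points, one in each empty half-stratum of every column, so that the combined 2n points again form a Latin hypercube. The sketch below shows only that classical doubling step, not the dissertation's Hierarchical LHS algorithm:

```python
import random

def lhs_design(n, d, rng):
    # Plain n-point Latin hypercube in [0, 1]^d
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(k + rng.random()) / n for k in perm])
    return [tuple(c[i] for c in cols) for i in range(n)]

def double_lhs(design, rng):
    # Add n new points so that all 2n points still form a Latin hypercube:
    # per column, each old point occupies one of 2n substrata; the n empty
    # substrata receive the new points in random order.
    n, d = len(design), len(design[0])
    new_cols = []
    for j in range(d):
        occupied = {int(p[j] * 2 * n) for p in design}
        empty = [k for k in range(2 * n) if k not in occupied]
        rng.shuffle(empty)
        new_cols.append([(k + rng.random()) / (2 * n) for k in empty])
    return design + [tuple(c[i] for c in new_cols) for i in range(n)]

rng = random.Random(3)
base = lhs_design(8, 2, rng)
refined = double_lhs(base, rng)  # 16 points, still an LHS in every column
```

Because each refinement must at least double the design, a more flexibly incremental scheme (the hierarchical approach the dissertation develops) requires a different construction.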
APA, Harvard, Vancouver, ISO, and other styles
20

Huang, Yu-Long, and 黃裕龍. "Conditioned Latin hypercube sampling in heavy metal sampling and spatial distribution simulation." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/57227730891712490488.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Bioenvironmental Systems Engineering, 98. Conditioned Latin hypercube sampling is a sampling method that uses a heuristic algorithm to select, from the existing data, a subset that matches the stratified structure of Latin hypercube sampling. Latin hypercube sampling (LHS) is a stratified random sampling approach that preserves the original distribution of the sampled variables. This research aims to resample the heavy metals in the soils of Chang-Hua County using the conditioned Latin hypercube sampling technique, with the expectation of reducing the number of samples, and thus the cost of laboratory analysis, for copper (Cu), chromium (Cr), nickel (Ni), and zinc (Zn), while preserving their original statistical distributions. Conditioned Latin hypercube sampling (cLHS), however, takes no account of the spatial arrangement of the sampling sites, so incorporating spatial data (referred to here as spatial cLHS) may bring the sample closer to the original spatial allocation. The sampling efficiencies of LHS, cLHS, and spatial cLHS were then examined, and the spatial distribution and uncertainty of each technique, including the original data without sampling, were evaluated by sequential indicator simulation (SIS). The results showed that spatial cLHS better reproduced the distribution and spatial allocation of the original data, and the SIS results showed that data sampled by cLHS could only preserve the high-risk pollution areas, whereas those from spatial cLHS also lowered the uncertainty.
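The heuristic search that cLHS performs can be illustrated with a toy annealing scheme in the spirit of Minasny and McBratney's algorithm. This is a simplified sketch, not the thesis's implementation: the objective uses only the marginal stratum counts and ignores the correlation and categorical terms of the full method, and all names are invented here.

```python
import numpy as np

def clhs(X, n, iters=2000, t0=1.0, cooling=0.999, rng=None):
    """Minimal conditioned Latin hypercube sampling sketch.

    Selects n rows of X (N x d ancillary data) so that, for every
    variable, each of the n equal-probability strata holds as close to
    exactly one selected point as possible, via random-swap annealing.
    """
    rng = np.random.default_rng(rng)
    N, d = X.shape
    # n quantile strata per variable; stratum index of every candidate row
    edges = np.quantile(X, np.linspace(0, 1, n + 1), axis=0)
    strata = np.stack([np.clip(np.searchsorted(edges[1:-1, j], X[:, j]),
                               0, n - 1) for j in range(d)], axis=1)

    def objective(idx):
        # total deviation of per-stratum counts from the ideal of one each
        return sum(np.abs(np.bincount(strata[idx, j], minlength=n) - 1).sum()
                   for j in range(d))

    idx = rng.choice(N, n, replace=False)
    obj = objective(idx)
    best, best_obj, t = idx.copy(), obj, t0
    for _ in range(iters):
        cand = idx.copy()
        # swap one selected site for one currently unselected site
        cand[rng.integers(n)] = rng.choice(np.setdiff1d(np.arange(N), cand))
        c_obj = objective(cand)
        # annealing acceptance: always take improvements, sometimes worse moves
        if c_obj < obj or rng.random() < np.exp((obj - c_obj) / t):
            idx, obj = cand, c_obj
            if obj < best_obj:
                best, best_obj = idx.copy(), obj
        t *= cooling
    return best

rng_demo = np.random.default_rng(1)
X = rng_demo.normal(size=(200, 3))
idx = clhs(X, 20, iters=500, rng=1)
```

The returned indices point at existing sites, which is the key difference from plain LHS: cLHS can only pick locations that can actually be sampled.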
APA, Harvard, Vancouver, ISO, and other styles
21

Chen, Yen-Yu, and 陳彥佑. "Applying conditioned Latin hypercube sampling combined with stratifications in initial soil sampling of heavy metals." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/12852683575950842950.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Bioenvironmental Systems Engineering, 100. Soil pollution by heavy metals is one of the most important environmental issues, and it is necessary to monitor and remediate contaminated areas due to the serious impact of heavy metal pollution on the environment and on public health. An efficient sampling strategy is needed to determine the true spatial distribution of soil pollutants, delineate the contaminated area, and reduce follow-up sampling points so as to decrease remediation costs. Conditioned Latin Hypercube Sampling (cLHS) is a sampling method using a search algorithm based on heuristic rules combined with an annealing schedule, and it has been demonstrated that cLHS can accurately reproduce the original distribution of the environmental covariates. In this study, the original 946 sampling data for Cr, Cu, Ni, and Zn and correlated ancillary information in Chang-Hua County are used, and cLHS with two stratifications is applied to resample the original data using selected ancillary variables. The statistical and spatial errors of the resampled data are calculated to discuss the influence of different sampling strategies and sample sizes on reproducibility. After showing that a representative sample can be selected using ancillary data, 1,000 realizations of sequential indicator simulation (SIS) are carried out with the original data, and the mean absolute error (MAE) is used to evaluate the efficiency of sampling strategies and sample sizes with ancillary information for the initial sampling. The results reveal that the error of the mean is not obviously affected by changes in sample size, whereas the error of the spatial variance decreases as the sample size increases. In conclusion, cLHS combined with irrigation stratification can efficiently preserve the statistical characteristics and spatial structure of the original data.
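One simple way to score how well a resampled subset reproduces the original distribution, in the spirit of the statistical error measures described above, is a quantile-wise mean absolute error. This is an illustrative metric, not necessarily the exact one used in the thesis, and the data here are synthetic.

```python
import numpy as np

def quantile_mae(original, sample, qs=np.linspace(0.05, 0.95, 19)):
    """Mean absolute error between corresponding quantiles of the full
    data set and of a resampled subset: a crude score of how faithfully
    a sampling design reproduces the original distribution."""
    return float(np.mean(np.abs(np.quantile(original, qs)
                                - np.quantile(sample, qs))))

rng = np.random.default_rng(0)
original = rng.lognormal(size=1000)            # skewed, like heavy-metal data
subset = rng.choice(original, 100, replace=False)
err = quantile_mae(original, subset)
```

A stratified design such as cLHS should, on average, drive this score down faster with sample size than simple random selection of sites.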
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Bo-Tsen, and 汪柏岑. "The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/64985352814819577362.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Resources Engineering, 103. Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Because of the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables; a hydrogeological property is assumed to follow a multivariate distribution with spatial correlation. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be constructed, so statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional LULHS simulations were developed. The simulation efficiency and spatial correlation of LULHS are compared with three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
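The LULHS idea of combining LHS marginals with an LU (here Cholesky) factorization of the spatial covariance can be sketched for the unconditional case as follows. This is an illustrative NumPy/SciPy version with an assumed exponential covariance model, not the thesis's code; all names and parameters are invented.

```python
import numpy as np
from scipy.stats import norm

def lu_random_field(coords, corr_len, n_real, rng=None):
    """Unconditional Gaussian random field via LU (Cholesky) decomposition.

    Builds an assumed exponential covariance between grid points, factors
    C = L L^T, and correlates Latin-hypercube-stratified standard normals
    as L @ w -- a toy version of the LULHS construction described above.
    """
    rng = np.random.default_rng(rng)
    m = len(coords)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = np.exp(-dists / corr_len)                    # exponential covariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(m))    # jitter for stability
    # LHS marginals: one uniform draw per stratum, per grid point,
    # mapped to standard normals through the inverse normal CDF
    u = (rng.permuted(np.tile(np.arange(n_real), (m, 1)), axis=1)
         + rng.random((m, n_real))) / n_real
    return L @ norm.ppf(u)                           # columns = realizations

coords = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
field = lu_random_field(coords, corr_len=2.0, n_real=50, rng=0)
```

Because each point's marginal is stratified across the ensemble, summary statistics converge with fewer realizations than with plain Gaussian draws, which is the efficiency gain the abstract reports.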
APA, Harvard, Vancouver, ISO, and other styles
23

Jaffari, Javid. "Statistical Yield Analysis and Design for Nanometer VLSI." Thesis, 2010. http://hdl.handle.net/10012/5361.

Full text
Abstract:
Process variability is the pivotal factor impacting the design of high yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly impact the performance and power consumption of the fabricated devices, severely impacting the manufacturing yield. However, the large number of transistors on a single chip adds even more challenges to the analysis of variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in the various design phases are key to predicting the yield before entering such an expensive fabrication process. In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. Variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation. At the circuit level, the variability analysis of three crucial sub-blocks of today's systems-on-chip, namely digital circuits, memory cells, and analog blocks, is targeted. Accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of digital circuits, the extremely high yield requirement for memory cells, and the time-consuming analog circuit simulation are major concerns in the development of any statistical analysis technique. In this thesis, several sampling-based methods are proposed for these three types of circuits to significantly improve the run-time of the traditional Monte Carlo method, without compromising accuracy.
The proposed sampling-based yield analysis methods benefit from the most appealing feature of the MC method, namely the capability to consider any complex circuit model. However, through the use and engineering of advanced variance-reduction and sampling methods, ultra-fast yield estimation solutions are provided for different types of VLSI circuits. Such methods include control variates, importance sampling, correlation-controlled Latin Hypercube Sampling, and Quasi-Monte Carlo. At the device level, a methodology is proposed which introduces a variation-aware design perspective for designing MOS devices in aggressively scaled geometries. The method introduces a yield measure at the device level which targets the saturation and leakage currents of an MOS transistor, and a statistical method is developed to optimize the advanced doping profiles and geometry features of a device to achieve maximum device-level yield. Finally, a statistical thermal analysis framework is proposed that accounts for process and thermal variations simultaneously at the micro-architectural level. The analyzer is developed based on the fact that process variations lead to uncertain leakage power sources, so that the thermal profile itself has a probabilistic nature. Therefore, through a co-process-thermal-leakage analysis, a more reliable full-chip statistical leakage power yield is calculated.
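Of the variance-reduction techniques named above, importance sampling is the easiest to illustrate on a toy rare-event yield problem: estimating P[X > limit] for a standard normal by sampling from a shifted proposal and reweighting by the likelihood ratio. This is a generic textbook sketch, not the dissertation's engineered estimators; the shift value and sample size are arbitrary choices.

```python
import numpy as np

def failure_prob_is(limit, shift, n, rng=None):
    """Importance-sampling estimate of P[X > limit] for X ~ N(0, 1).

    Samples from the shifted proposal N(shift, 1), which places most
    draws near the failure region, and reweights each draw by the
    likelihood ratio of the target to the proposal.
    """
    rng = np.random.default_rng(rng)
    y = rng.normal(shift, 1.0, n)                 # draws from the proposal
    # likelihood ratio phi(y) / phi(y - shift) of target over proposal
    w = np.exp(-0.5 * y**2 + 0.5 * (y - shift)**2)
    return np.mean((y > limit) * w)

# P[X > 4] is about 3.2e-5 -- far too rare for 20k plain MC samples,
# but well within reach of the reweighted estimator
p = failure_prob_is(4.0, 4.0, 20000, rng=0)
```

With the proposal centered on the failure threshold, roughly half the draws land in the failure region, so the estimator's relative error is small even for probabilities that plain Monte Carlo at the same budget would likely estimate as zero.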
APA, Harvard, Vancouver, ISO, and other styles
24

(8300103), Shams R. Rahmani. "Digital Soil Mapping of the Purdue Agronomy Center for Research and Education." Thesis, 2020.

Find full text
Abstract:
This research work concentrates on developing digital soil maps to support field-based plant phenotyping research. We developed maps of soil organic matter content (OM), cation exchange capacity (CEC), natural soil drainage class, and tile drainage lines using topographic indices and aerial imagery. Various prediction models (universal kriging, cubist, random forest, C5.0, artificial neural network, and multinomial logistic regression) were used to estimate the soil properties of interest.
APA, Harvard, Vancouver, ISO, and other styles