
Dissertations / Theses on the topic 'Sensitivity calibration'



Consult the top 50 dissertations / theses for your research on the topic 'Sensitivity calibration.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Monari, Filippo. "Sensitivity analysis and Bayesian calibration of building energy models." Thesis, University of Strathclyde, 2016. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=26897.

Full text
Abstract:
The current state of the art of Building Energy Simulation (BES) lacks a rigorous framework for the analysis, calibration and diagnosis of BES models. This research takes this deficiency as an opportunity to propose a strongly mathematically based methodology serving these purposes, providing: a better consideration of the modelling uncertainties, means to reduce BES model complexity without oversimplification, and methods to test and select different modelling hypotheses depending on field observations. Global Sensitivity Analysis (GSA), Gaussian Process Regression (GPR) in a quasi-Bayesian setup and Markov Chain Monte Carlo (MCMC) methods are the foundations upon which the proposed framework is built. It couples deterministic BES models and stochastic black-box models, so that the physical and probabilistic representations of real phenomena complement each other. It comprises four phases: Uncertainty Analysis, Sensitivity Analysis, Calibration and Model Selection. The framework was tested on a series of trials of increasing difficulty. Relatively simple preliminary experiments were used to develop the methodology and investigate strengths and weaknesses. They showed its capabilities in treating measurement uncertainties and model deficiencies, but also that these aspects influence the estimation of model parameters. More detailed experiments were used to fully test the efficacy of the method in analysing complex BES models. Novel techniques, based on Bootstrap and Smoothing with Roughness Penalty, were introduced for the determination of the uncertainties of multidimensional model inputs. The framework proved effective in adequately simplifying BES models, in precisely identifying parameters, causes of discrepancies and improvements, and in providing clear information about which model was the most suitable for describing the observed processes. This research delivers a powerful tool for the analysis, diagnosis and calibration of BES models, which substantially improves current practice and can already be applied to solve many practical problems, such as the investigation of energy conservation measures, model predictive control and fault detection.
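As an illustration of the kind of Bayesian calibration described above (not code from the thesis), the following minimal sketch calibrates a single parameter of a toy building-energy model against noisy observations with a random-walk Metropolis sampler; the toy model, prior bounds and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_bes_model(u_value, outdoor_temp):
    """Hypothetical steady-state heating load (kW) for a given envelope U-value."""
    return 0.12 * u_value * (20.0 - outdoor_temp)

# Synthetic "observations" for the example only.
t_out = np.linspace(-5, 15, 30)
obs = toy_bes_model(1.4, t_out) + rng.normal(0, 0.1, t_out.size)

def log_posterior(u_value, sigma=0.1):
    if not 0.1 < u_value < 5.0:          # flat prior on [0.1, 5.0]
        return -np.inf
    resid = obs - toy_bes_model(u_value, t_out)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis over the single uncertain parameter.
samples, u = [], 1.0
lp = log_posterior(u)
for _ in range(5000):
    u_prop = u + rng.normal(0, 0.05)
    lp_prop = log_posterior(u_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        u, lp = u_prop, lp_prop
    samples.append(u)

print("posterior mean U-value:", np.mean(samples[1000:]))
```

In a real BES application the toy model would be replaced by the simulator (or an emulator of it), and the posterior samples would feed the uncertainty and model selection phases.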
2

Barham, R. G. "Free-field reciprocity calibration of laboratory standard microphones." Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294981.

Full text
3

Jonfelt, Clara. "An evaluation of an MBBR anammox model - sensitivity analysis and calibration." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-312511.

Full text
Abstract:
This master thesis is about mathematical modelling of the anammox process with a moving bed biofilm reactor (MBBR) for a reject water application. Specifically, the aim of my research was to find out whether the model proposed by Erik Lindblom in Lindblom et al. (2016) is a good model for this purpose and worth continued research and optimization. The code for the model, implemented in Matlab/Simulink, was provided, although it did not initially function in the given condition; some modifications were needed to make it work properly. In order to confirm that the code was working and used in a correct way, some results in Lindblom et al. (2016) were reproduced. Before starting the evaluation of the model, some much-needed optimizations of the code were carried out, substantially reducing the run time. A sensitivity analysis was done, and the five most sensitive parameters were picked out to be used in the calibration. The calibration improved the total fit of the model to the available measurements, although one of the model outputs could not be calibrated satisfactorily. In short, I found that although there are still problems left to solve before the model can be said to accurately model the anammox process with MBBR, it appears promising. Most importantly, more measurement data are needed in order to make a proper validation and to do a better calibration.
4

Fadikar, Arindam. "Stochastic Computer Model Calibration and Uncertainty Quantification." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91985.

Full text
Abstract:
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn based on observations from the corresponding computer simulation model. These computer models are calibrated against limited ground-truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in that repeated executions of a stochastic simulation yield different outcomes. This additional uncertainty in the simulation model must be handled accordingly in any calibration setup. A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where a marginal Gaussianity assumption is not appropriate, it does not suffice to emulate only the mean and/or variance. We present two different approaches addressing the non-Gaussian behaviour of an emulator: (1) incorporating quantile regression in a GP for multivariate output, and (2) approximating the output with a finite mixture of Gaussians. These emulators are also used to calibrate and make forward predictions in the context of an agent-based disease model of the 2014 Ebola epidemic outbreak in West Africa. The third approach employs a sequential scheme which periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation.
Doctor of Philosophy
Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific only to the scientific model. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as the inverse problem, i.e. inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e. multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed that address the calibration and prediction of a stochastic disease simulation model which simulates contagion of an infectious disease through human-to-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City, USA.
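For readers unfamiliar with Gaussian process emulation, here is a minimal illustrative sketch (not the dissertation's code) that fits a scikit-learn GP to the mean of a few replicates of a toy stochastic simulator and then uses it as a cheap surrogate; the simulator, design and kernel choices are assumptions made only for this example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def stochastic_sim(x, n_rep=20):
    """Hypothetical stochastic simulator: noisy response to a scalar input."""
    return np.sin(3 * x) + 0.5 * x + rng.normal(0, 0.2, n_rep)

# Run the (expensive) simulator at a small design and summarise each run.
design = np.linspace(0, 2, 8)
means = np.array([stochastic_sim(x).mean() for x in design])

# GP emulator of the mean response; the WhiteKernel absorbs replicate noise.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(design.reshape(-1, 1), means)

x_new = np.linspace(0, 2, 100).reshape(-1, 1)
pred_mean, pred_sd = gp.predict(x_new, return_std=True)
print(pred_mean[:3], pred_sd[:3])
```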
5

Thoman, Glen W. "Continuous analysis methods in stormwater management practice, sensitivity, calibration and model development." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq29398.pdf.

Full text
6

Waterfield, James. "Optical calibration system for SNO+ and sensitivity to neutrinoless double-beta decay." Thesis, University of Sussex, 2017. http://sro.sussex.ac.uk/id/eprint/67570/.

Full text
Abstract:
The SNO+ experiment is primarily looking for neutrinoless double-beta decay, an unobserved, lepton number violating radioactive decay. This is achieved by loading liquid scintillator with tellurium, whose isotope 130Te decays via double-beta decay with a Q-value of 2527 keV. An optical calibration system, located outside the scintillator, has been developed to help meet the radiopurity requirements of the experiment. This thesis describes the hardware component of the optical calibration system, which calibrates the timing and charge response of the photomultiplier tube array of SNO+. A set of quality assurance tests showed that the system was at the required standard for installation. Data taken with SNO+ and the optical calibration system showed that the system was stable enough for photomultiplier tube calibration, identified resolvable issues with the SNO+ data acquisition system and allowed measurement of single photoelectron spectra. Data quality checks have been developed to ensure data are of calibration standard. The sensitivity of SNO+ to neutrinoless double-beta decay with nearly 800 kg of 130Te and five years of data taking is investigated with a comprehensive evaluation of systematic uncertainties. Two new methods for acquiring a greater sensitivity to neutrinoless double-beta decay were developed: a one-dimensional fit in event energy and a multidimensional fit in event energy and position. A simple event counting analysis, developed previously by the collaboration, was shown to be sensitive to systematic uncertainties. A fit in an extended energy range was shown to constrain the systematics and achieve a half-life sensitivity of 9.30×10^25 yr, corresponding to a 5.6% improvement over the counting analysis which neglected systematic uncertainties. The multidimensional analysis with systematics included achieved a 20% improvement over the counting analysis, with a half-life sensitivity of 1.06×10^26 yr, corresponding to an effective Majorana mass between 52 and 125 meV.
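For context, the half-life sensitivities quoted above follow from the standard counting relation for a rare decay observed over a live time t much shorter than the half-life (a generic textbook expression, not a formula taken from the thesis):

$$ N_{\mathrm{sig}} \;\approx\; \ln 2 \,\varepsilon \,\frac{m\,N_A}{W}\,\frac{t}{T_{1/2}}, $$

where m is the deployed mass of the candidate isotope (here roughly 800 kg of 130Te), W its molar mass, N_A Avogadro's number, ε the signal detection efficiency and t the live time; inverting this for T_{1/2} at the smallest signal count distinguishable from background yields the quoted half-life sensitivities.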
7

Dangwal, Chitra. "Electrochemical Model Calibration Process based on Sensitivity Analysis for Lithium-ion batteries." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587816603247479.

Full text
8

Zhang, Jianbo. "Readout Circuits for a Z-axis Hall Sensor with Sensitivity Drift Calibration." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175785.

Full text
Abstract:
Hall effect magnetic sensors have gradually gained dominance in the market of magnetic sensors during the past decades. The compatibility of Hall sensors with conventional CMOS technologies makes monolithic Hall sensor microsystems possible and economical. An attractive application is the contactless current sensor, which uses Hall sensors to measure the magnetic field generated by the electrical current. However, Hall sensors exhibit several non-idealities, i.e., offset, noise and sensitivity drift, which limit their precision. Therefore, effective techniques to reduce these imperfections are desired. This thesis presents the design of a new readout scheme for a Hall magnetic sensor with low offset, low noise and low sensitivity drift. The Hall sensor is realized as an N-well Hall plate and modeled in Verilog-A for the purpose of co-simulation with the interface circuits. The self-calibrated system is composed of two identical Hall plates, preamplifiers and a first-order ΣΔ modulator, and can be fully integrated monolithically. A four-phase spinning current technique and a chopper stabilization technique have been employed to reduce the offset and 1/f noise of the Hall plates and the OTA, respectively. Integrated coils are used to generate the reference magnetic field for calibration. The preamplifiers amplify the signal and separate the Hall voltage and reference voltage. The ΣΔ modulator reduces the thermal drift by using the Hall voltage as the modulator input and the reference voltage as the DAC output. This new calibration technique also compensates the thermal drifts of the biasing current and readout circuits. The overall system is implemented in an NXP 140 nm CMOS process with a 1.8 V supply. The Virtuoso/Spectre simulation results show residual drifts lower than 10 ppm/°C, which are 3-5 times lower than the state of the art. The input magnetic field and temperature ranges are ±100 mT and -40 °C to 120 °C, respectively.
9

Pullins, Clayton Anthony. "High Temperature Heat Flux Measurement: Sensor Design, Calibration, and Applications." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/27789.

Full text
Abstract:
This effort is focused on the design, calibration, and implementation of a high temperature heat flux sensor for thermal systems research and testing. The High Temperature Heat Flux Sensor (HTHFS) was designed to survive in the harsh thermal environments typically encountered in hypersonic flight, combustion and propulsion research, and large-scale fire testing. The sensor is capable of continuous use at temperatures up to 1000 °C. Two methods for steady-state calibration of the HTHFS at elevated temperatures have been developed as a result of this research. The first method employs a water-cooled heat flux sensor as a reference standard for the calibration. The second method utilizes a blackbody radiant source and a NIST calibrated optical pyrometer as the calibration standard. The HTHFS calibration results obtained from both methods compare favorably with the theoretical sensitivity versus temperature model. Implementation of the HTHFS in several types of transient thermal testing scenarios is also demonstrated herein. A new data processing technique is used to interpret the measurements made by the HTHFS. The Hybrid Heat Flux (HHF) method accounts for the heat flow through the sensor and the heat storage in the sensor, and thus renders the HTHFS virtually insensitive to the material on which it is mounted. The calibrated output of the HTHFS versus temperature ensures accuracy in the measurements made by the sensor at high operating temperatures.
Ph. D.
10

Rivers, Thane Damian. "Development of an automated scanning monochromator for sensitivity calibration of the MUSTANG instrument." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23866.

Full text
11

Heredia, Guzman Maria Belen. "Contributions to the calibration and global sensitivity analysis of snow avalanche numerical models." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALU028.

Full text
Abstract:
A snow avalanche is a natural hazard defined as a snow mass in fast motion. Since the thirties, scientists have been designing snow avalanche models to describe this phenomenon. However, these models depend on some poorly known input parameters that cannot be measured. To better understand model input parameters and model outputs, the aims of this thesis are (i) to propose a framework to calibrate input parameters and (ii) to develop methods to rank input parameters according to their importance in the model, taking into account the functional nature of the outputs. For these two purposes, we develop statistical methods based on Bayesian inference and global sensitivity analyses. All the developments are illustrated on test cases and real snow avalanche data. First, we propose a Bayesian inference method to retrieve the input parameter distribution from avalanche velocity time series collected on experimental test sites. Our results show that it is important to include the error structure (in our case the autocorrelation) in the statistical modeling in order to avoid bias in the estimation of friction parameters. Second, to identify important input parameters, we develop two methods based on variance-based measures. For the first method, we suppose that we have a given data sample and we want to estimate sensitivity measures with this sample. To this end, we develop a nonparametric estimation procedure based on the Nadaraya-Watson kernel smoother to estimate aggregated Sobol' indices. For the second method, we consider the setting where the sample is obtained from acceptance/rejection rules corresponding to physical constraints. The set of input parameters becomes dependent due to the acceptance-rejection sampling, so we propose to estimate aggregated Shapley effects (an extension of Shapley effects to multivariate or functional outputs). We also propose an algorithm to construct bootstrap confidence intervals. For the snow avalanche model application, we consider different uncertainty scenarios to model the input parameters. Under our scenarios, the avalanche release position and volume are the most crucial inputs. Our contributions should help avalanche scientists to (i) account for the error structure in model calibration and (ii) rank input parameters according to their importance in the models using statistical methods.
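To make the "given-data" estimation idea concrete, the sketch below (illustrative only, not the thesis code) estimates first-order Sobol' indices S_i = Var(E[Y|X_i]) / Var(Y) from an existing sample by smoothing Y against each input with a Nadaraya-Watson kernel estimator; the test function, sample size and bandwidth are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Given-data sample from a toy model (Ishigami-like, invented for the example).
n = 2000
X = rng.uniform(-np.pi, np.pi, size=(n, 3))
Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

def first_order_sobol(xi, y, bandwidth=0.3):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) with a Nadaraya-Watson smoother."""
    # Conditional mean E[Y | X_i = xi_j] evaluated at each sample point.
    diffs = (xi[:, None] - xi[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs ** 2)
    cond_mean = weights @ y / weights.sum(axis=1)
    return cond_mean.var() / y.var()

for i in range(3):
    print(f"S_{i+1} ≈ {first_order_sobol(X[:, i], Y):.2f}")
```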
12

Griffiths, Michael Lee. "Multivariate calibration for ICP-AES." Thesis, University of Plymouth, 2001. http://hdl.handle.net/10026.1/1942.

Full text
Abstract:
The analysis of metals is now a major application area for ICP-AES; however, the technique suffers from both spectral and non-spectral interferences. This thesis details the application of univariate and multivariate calibration methods for the prediction of Pt, Pd, and Rh in acid-digested and of Au, Ag and Pd in fusion-digested autocatalyst samples. Of all the univariate calibration methods investigated, matrix matching proved the most accurate method, with relative root mean square errors (RRMSEs) for Pt, Pd and Rh of 2.4, 3.7, and 2.4 % for a series of synthetic test solutions, and 12.0, 2.4, and 8.0 % for autocatalyst samples. In comparison, the multivariate calibration method (PLS1) yielded average relative errors for Pt, Pd, and Rh of 5.8, 3.0, and 3.5 % in the test solutions, and 32.0, 7.5, and 75.0 % in the autocatalyst samples. A variable selection procedure has been developed enabling multivariate models to be built using large parts of the atomic emission spectrum. The first stage identified and removed wavelengths whose PLS regression coefficients were equal to zero. The second stage ranked the remaining wavelengths according to their PLS regression coefficient and estimated standard error ratio. The algorithms were applied to the emission spectra for the determination of Pt, Pd and Rh in a synthetic matrix. For independent test samples, variable selection gave RRMSEs of 5.3, 2.5 and 1.7 % for Pt, Pd and Rh respectively, compared with 8.3, 7.0 and 3.1 % when using integrated atomic emission lines. Variable selection was then applied for the prediction of Au, Ag and Pd in independent test fusion digests. This resulted in RRMSEs of 74.2, 8.8 and 12.2 % for Au, Ag and Pd respectively, which were comparable to those obtained using a more traditional univariate calibration approach. A preliminary study has shown that calibration drift can be corrected using Piecewise Direct Standardisation (PDS). The application of PDS to synthetic test samples analysed 10 days apart resulted in RRMSEs of 4.14, 3.03 and 1.88 %, compared to 73.04, 44.39 and 28.06 % without correction, for Pt, Pd, and Rh respectively.
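As a generic illustration of the multivariate calibration workflow (not the thesis's data or code), the sketch below fits a scikit-learn PLS model to simulated emission spectra and reports the relative root mean square error quoted above; the matrix sizes, pure-component spectra and noise level are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Simulated spectra: 120 samples x 500 wavelengths, responses for 3 analytes.
concentrations = rng.uniform(0, 10, size=(120, 3))
pure_spectra = rng.gamma(2.0, 1.0, size=(3, 500))
spectra = concentrations @ pure_spectra + rng.normal(0, 0.5, size=(120, 500))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, concentrations, random_state=0)

pls = PLSRegression(n_components=5)
pls.fit(X_tr, y_tr)
pred = pls.predict(X_te)

# Relative root mean square error (%) per analyte, as quoted in the abstract.
rrmse = 100 * np.sqrt(((pred - y_te) ** 2).mean(axis=0)) / y_te.mean(axis=0)
print("RRMSE (%):", np.round(rrmse, 1))
```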
13

Wan, Benny C. K. "Auto-calibration of SWMM runoff using sensitivity-based genetic algorithms (Storm Water Management Model)." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ60204.pdf.

Full text
14

CAVALCANTI, FABIO. "Desenvolvimento de um laser pulsado com emissão em 1053 nm para utilização na técnica de 'Cavity Ring-Down Spectroscopy'." Repositório Institucional do IPEN, 2014. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11790.

Full text
Dissertation (Master's degree in Nuclear Technology)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
15

Turley, Carole. "Calibration Procedure for a Microscopic Traffic Simulation Model." Diss., 2007. http://contentdm.lib.byu.edu/ETD/image/etd1747.pdf.

Full text
16

Sneesby, Ethan Paul. "Evaluation of a Water Budget Model for Created Wetland Design and Comparative Natural Wetland Hydroperiods." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/88836.

Full text
Abstract:
Wetland impacts in the Mid-Atlantic USA are frequently mitigated via wetland creation in former uplands. Regulatory approval requires a site-specific water budget that predicts the annual water level regime (hydroperiod). However, many studies of created wetlands indicate that post-construction hydroperiods frequently are not similar to impacted wetland systems. My primary objective was to evaluate a water budget model, Wetbud (Basic model), through comparison of model output to on-site water level data for two created forested wetlands in Northern Virginia. Initial sensitivity analyses indicated that watershed curve number and outlet height had the most leverage on model output. Addition of a maximum depth of water level drawdown greatly improved model accuracy. I used Nash-Sutcliffe efficiency (NSE) and root mean squared error (RMSE) to evaluate goodness of fit of model output against site monitoring data. The Basic model reproduced the overall seasonal hydroperiod well once fully parameterized, despite NSE values ranging from -0.67 to 0.41 in calibration and from -4.82 to -0.26 during validation. For RMSE, values ranged from 5.9 cm to 12.7 cm during calibration and from 8.2 cm to 18.5 cm during validation. My second objective was to select a group of "design target hydroperiods" for common Mid-Atlantic USA wetland types. From > 90 sites evaluated, I chose four mineral flats, three riverine wetlands, and one depressional wetland that met all selection criteria. Taken together, improved wetland water budget modeling procedures (like Wetbud) combined with the use of appropriate target hydroperiod information should improve the success of wetland creation efforts.
Master of Science
Wetlands in the USA are defined by the combined occurrence of wetland hydrology, hydric soils, and hydrophytic vegetation. Wetlands serve to retain floodwater, sediments and nutrients within their landscape. They may serve as a source of local groundwater recharge and are home to many endangered species of plants and animals. Wetland ecosystems are frequently impacted by human activities including road-building and development. These impacts can range from the destruction of a wetland to increased nutrient contributions from storm- or wastewater. One commonly utilized option to mitigate wetland impacts is via wetland creation in former upland areas. Regulatory approval requires a site-specific water budget that predicts the average monthly water levels (hydroperiod). A hydroperiod is simply a depiction of how the elevation of water changes over time. However, many studies of created wetlands indicate that post-construction hydroperiods frequently are not representative of the impacted wetland systems. Many software packages, called models, seek to predict the hydroperiod for different wetland systems. Improving and vetting these models help to improve our understanding of how these systems function. My primary objective was to evaluate a water budget model, Wetbud (Basic model), through comparison of model output to onsite water level data for two created forested wetlands in Northern Virginia. Initial analyses indicated that watershed curve number (CN) and outlet height had the most influence on model output. Addition of a maximum depth of water level drawdown below the ground surface greatly improved model accuracy. I used statistical analyses to compare model output to site monitoring data. The Basic model reproduced the overall seasonal hydroperiod well once inputs were set to optimum values (calibration). Statistical results for the calibration varied between excellent and acceptable for our selected measure of accuracy, the root mean squared error. My second objective was to select a grouping of “design target hydroperiods” for common Mid-Atlantic USA wetland types. From > 90 sites evaluated, I chose four mineral flats, three riverine wetlands, and one depressional wetland that met all selection criteria. Taken together, improved wetland water budget modeling procedures (like Wetbud) combined with the use of appropriate target hydroperiod information should improve the success of wetland creation efforts.
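For reference, the two goodness-of-fit statistics quoted in the technical abstract above can be computed as in the following sketch; this is a generic illustration rather than the thesis's code, and the sample water-level series is invented.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean squared error in the units of the water level series (e.g. cm)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# Hypothetical weekly water levels (cm relative to the ground surface).
observed  = [-12.0, -8.5, -3.0, 0.5, 1.0, -2.5, -6.0]
simulated = [-10.0, -9.0, -4.5, 1.5, 0.0, -1.0, -7.5]
print(nash_sutcliffe(observed, simulated), rmse(observed, simulated))
```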
17

Putnam, Jacob Breece. "Development, Calibration, and Validation of a Finite Element Model of the THOR Crash Test Dummy for Aerospace and Spaceflight Crash Safety Analysis." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/50522.

Full text
Abstract:
Anthropometric test devices (ATDs), commonly referred to as crash test dummies, are tools used to conduct aerospace and spaceflight safety evaluations. Finite element (FE) analysis provides an effective complement to these evaluations. In this work a FE model of the Test Device for Human Occupant Restraint (THOR) dummy was developed, calibrated, and validated for use in aerospace and spaceflight impact analysis. A previously developed THOR FE model was first evaluated under spinal loading. The FE model was then updated to reflect recent updates made to the THOR dummy. A novel calibration methodology was developed to improve both kinematic and kinetic responses of the updated model in various THOR dummy certification tests. The updated THOR FE model was then calibrated and validated under spaceflight loading conditions and used to assess THOR dummy biofidelity. Results demonstrate that the FE model performs well under spinal loading and predicts injury criteria values close to those recorded in testing. Material parameter optimization of the updated model was shown to greatly improve its response. The validated THOR FE model indicated good dummy biofidelity relative to human volunteer data under spinal loading, but limited biofidelity under frontal loading. The calibration methodology developed in this work is proven to be an effective tool for improving dummy model response. The results obtained with the dummy model developed in this study support its use in future aerospace and spaceflight impact simulations. In addition, the biofidelity analysis suggests future improvements to the THOR dummy for spaceflight and aerospace analysis.
Master of Science
18

Minunno, Francesco. "On the use of the bayesian approach for the calibration, evaluation and comparison of process-based forest models." Doctoral thesis, ISA/UL, 2014. http://hdl.handle.net/10400.5/7350.

Full text
Abstract:
Doctorate in Forest Engineering and Natural Resources - Instituto Superior de Agronomia
Forest ecosystems have been experiencing fast and abrupt changes in environmental conditions that can increase their vulnerability to extreme events such as drought, heat waves, storms and fire. Process-based models can draw inferences about future environmental dynamics, but the reliability and robustness of vegetation models are conditional on their structure and their parametrisation. The main objective of the PhD was to implement and apply modern computational techniques, mainly based on Bayesian statistics, in the context of forest modelling. A variety of case studies was presented, spanning from growth prediction models to soil respiration models and process-based models. The great potential of the Bayesian method for reducing uncertainty in parameters and outputs and for model evaluation was shown. Furthermore, a new methodology based on a combination of a Bayesian framework and a global sensitivity analysis was developed, with the aim of identifying strengths and weaknesses of process-based models and of testing modifications in model structure. Finally, part of the PhD research focused on reducing the computational load to take full advantage of Bayesian statistics. It was shown how parameter screening impacts model performance, and a new methodology for parameter screening, based on canonical correlation analysis, was presented.
19

Yucel, Omer Burak. "Calibration Of The Finite Element Model Of A Long Span Cantilever Through Truss Bridge Using Artificial Neural Networks." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609850/index.pdf.

Full text
Abstract:
In recent years, Artificial Neural Networks (ANN) have become widely popular tools in various disciplines of engineering, including civil engineering. In this thesis, a multi-layer perceptron network with back-propagation is utilized in the calibration of the finite element model of a long-span cantilever through-truss, the Commodore Barry Bridge (CBB). The essence of calibration lies in comparing and correlating the structural response of an analytical model with experimental results as closely as possible. Since CBB is a very large structure with complex structural mechanisms, the formulation of mathematical expressions representing the relation between the dynamics of the structure and the structural parameters is very complicated. Furthermore, when the errors in the structural model and the noise in the experimental data are taken into account, a calibration study becomes more tedious. At this point, ANNs are useful tools since they have the capability of learning from noisy data and the ability to approximate functions. In this study, sensitivity analyses are first conducted to observe how the dynamic properties of the bridge vary with changes in its structural parameters. In the second part, the inverse relation between the sensitive structural parameters and the modal frequencies of CBB is approximated by training a neural network. This successfully trained network is then fed with experimental frequencies to acquire the as-is structural parameters, and model updating is achieved accordingly.
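A minimal sketch of the inverse-mapping idea (illustrative only, not the thesis's network or bridge model): train a small multi-layer perceptron on simulated parameter/frequency pairs so that it maps modal frequencies back to structural parameters, then query it with "measured" frequencies. The toy frequency model, parameter ranges and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def modal_frequencies(params):
    """Hypothetical forward model: first 3 natural frequencies from 2 stiffness parameters."""
    k1, k2 = params[..., 0], params[..., 1]
    return np.stack([0.5 * np.sqrt(k1), 0.8 * np.sqrt(k1 + k2), 1.1 * np.sqrt(k2)], axis=-1)

# Training set from the forward (finite-element-like) model, with measurement noise.
params = rng.uniform(1.0, 4.0, size=(500, 2))
freqs = modal_frequencies(params) + rng.normal(0, 0.01, (500, 3))

# Inverse mapping: modal frequencies -> structural parameters.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(freqs, params)

measured = modal_frequencies(np.array([2.5, 3.0]))
print("estimated parameters:", net.predict(measured.reshape(1, -1)))
```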
20

Goodman, Corey William. "Cost effective, computer-aided analytical performance evaluation of chromosomal microarrays for clinical laboratories." Thesis, University of Iowa, 2012. https://ir.uiowa.edu/etd/3301.

Full text
Abstract:
Many disorders found in humans are caused by abnormalities in DNA. Genetic testing of DNA provides a way for clinicians to identify disease-causing mutations in patients. Once patients with potentially disease-causing mutations are identified, they can be enrolled in treatment or preventative programs to improve the patients' long-term quality of life. Array-based comparative genomic hybridization (aCGH) provides a high-resolution, genome-wide method for detecting chromosomal abnormalities. Using computer software, chromosome abnormalities, or copy number variations (CNVs), can be identified from aCGH data. The development of a software tool to analyze the performance of CGH microarrays is of great benefit to clinical laboratories. Calibration of parameters used in aCGH software tools can maximize the performance of these arrays in a clinical setting. According to the American College of Medical Genetics, the validation of a clinical chromosomal microarray platform should be performed by testing a large number (200-300) of well-characterized cases, each with unique CNVs located throughout the genome. Because of the Clinical Laboratory Improvement Amendment of 1988 and the lack of an FDA-approved whole-genome chromosomal microarray platform, the ultimate responsibility for validating the performance characteristics of this technology falls to the clinical laboratory performing the testing. To facilitate this task, we have established a computational analytical validation procedure for CGH microarrays that is comprehensive, efficient, and low cost. This validation uses a higher-resolution microarray to validate a lower-resolution microarray with a receiver operating characteristic (ROC)-based analysis. From the results we are able to estimate an optimal log2 threshold range for determining the presence or absence (calling) of CNVs.
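As a generic illustration of the ROC-based threshold selection described above (not the study's data or pipeline), the sketch below treats calls from a hypothetical higher-resolution array as truth and picks the log2-ratio threshold that maximises Youden's J statistic; all numbers are invented.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)

# Hypothetical probe-level data: log2 ratios for regions with (1) and without (0) a CNV.
truth = rng.integers(0, 2, 1000)                      # labels from the high-resolution array
log2_ratio = np.where(truth == 1,
                      rng.normal(0.6, 0.25, 1000),    # duplicated regions
                      rng.normal(0.0, 0.25, 1000))    # normal copy number

fpr, tpr, thresholds = roc_curve(truth, log2_ratio)

# Pick the log2 threshold maximising Youden's J = TPR - FPR.
best = np.argmax(tpr - fpr)
print(f"optimal log2 threshold ≈ {thresholds[best]:.2f} "
      f"(TPR={tpr[best]:.2f}, FPR={fpr[best]:.2f})")
```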
21

Ben, Touhami Haythem. "Calibration Bayésienne d'un modèle d'étude d'écosystème prairial : outils et applications à l'échelle de l'Europe." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22444/document.

Full text
Abstract:
Grasslands cover 45% of the agricultural area in France and 40% in Europe. Grassland ecosystems have a central role in the climate change context, not only because they are impacted by climate change but also because grasslands contribute to greenhouse gas emissions. The aim of this thesis was to contribute to the assessment of uncertainties in the outputs of grassland simulation models, which are used in impact studies, with a focus on model parameterization. In particular, we used the Bayesian statistical method, based on Bayes' theorem, to calibrate the parameters of a reference model, and thus improve performance by reducing the uncertainty in the parameters and, consequently, in the outputs provided by the model. Our approach is essentially based on the use of the grassland ecosystem model PaSim (Pasture Simulation model), already applied in a variety of international projects to simulate the impact of climate change on grassland systems. The originality of this thesis was to adapt the Bayesian method to a complex ecosystem model such as PaSim (applied in the context of an altered climate and across the European territory) and to show its potential benefits in reducing uncertainty and improving the quality of model outputs. This was obtained by combining statistical methods (Bayesian techniques and sensitivity analysis with the method of Morris) and computing tools (coupling of R code with PaSim and use of cluster computing resources). We first produced a new parameterization for grassland sites under drought conditions, and then a common parameterization for European grasslands. We have also provided a generic software tool for calibration that can be reused with other models and sites. Finally, we evaluated the performance of the calibrated model through the Bayesian technique against data from validation sites. The results have confirmed the efficiency of this technique for reducing uncertainty and improving the reliability of simulation outputs.
22

MORCELLI, CLAUDIA P. R. "Determinacao de iridio em baixas concentracoes (sub ng g-1) em materiais geologicos por ativacao neutronica." Repositório Institucional do IPEN, 1999. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10741.

Full text
Dissertation (Master's degree)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
23

Milathianakis, Emmanouil. "Modelling and future performance assessment of Duvbacken wastewater treatment plant." Thesis, KTH, Mark- och vattenteknik (flyttat 20130630), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210704.

Full text
Abstract:
Duvbacken wastewater treatment plant in Gävle, Sweden, currently designed for 100,000 person equivalents (P.E.), is seeking a new permit for 120,000 P.E. due to the expected increase of the population in the community. Moreover, the recipient of the plant's effluent water was characterized as eutrophic in 2009. The plant's emissions are regulated with respect to seven-day biological oxygen demand (BOD7) and total phosphorus (Ptot). Yet, there was no available computer model to simulate the plant operations and investigate the emissions under the requested permit, and it was uncertain whether the available data would be sufficient for the development of a new model. A model of the plant was eventually developed in the BioWin® software under a number of assumptions and simplifications. A sensitivity analysis was conducted and used in the reverse way compared with other studies: it was conducted for the uncalibrated model in order to indicate its sensitive parameters. The substrate half-saturation constant for ordinary heterotrophic organisms (KS) and the phosphorus/acetate release ratio for polyphosphate accumulating organisms (YP/acetic) were finally used for model calibration. Subsequently, the model validation confirmed the correctness of the calibrated model and the ability to develop a basic model under data deficiency. The new model was used to investigate a loading scenario corresponding to 120,000 P.E., for which plant emissions that meet the current permits were predicted. Some of the suggestions proposed were the installation of disc filters in order to further reduce the effluent phosphorus, and BOD precipitation in cases of high influent concentrations. In case a nitrogen (N) permit is applied, the installation of membrane bioreactors and full-scale chemical P removal was proposed as an alternative that would require a smaller footprint expansion of the plant.
24

Filipík, Adam. "KALIBRACE ULTRAZVUKOVÉHO PRŮZVUČNÉHO SYSTÉMU VÝPOČETNÍ TOMOGRAFIE." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-233451.

Full text
Abstract:
This dissertation focuses on a medical imaging modality – ultrasound computed tomography (USCT) – and on algorithms that improve image quality, in particular the calibration of the USCT device. USCT is a new modality combining ultrasound signal transmission with the principles of tomographic image reconstruction developed for other tomographic systems. In principle, quantitative 3D image volumes with high resolution and contrast can be produced. USCT is primarily intended for breast cancer diagnosis. The author collaborated on a project of the Institute for Data Processing and Electronics, Forschungszentrum Karlsruhe, where the USCT system is being developed. One of the fundamental problems of the Karlsruhe USCT prototype was the absence of calibration. Thousands of ultrasound transducers differ in sensitivity, directivity and frequency response, and these parameters also vary over time. A further, much more serious problem lay in the positional deviations of the individual transducers. All of these aspects affect the final quality of the reconstructed images. The author chose the calibration problem as the main topic of the dissertation. The dissertation describes new methods in the areas of attenuation image reconstruction, transducer sensitivity calibration and, above all, geometric calibration of transducer positions. These methods were implemented and tested on real data from the Karlsruhe USCT prototype.
25

Garambois, Pierre-André. "Etude régionale des crues éclair de l'arc méditerranéen français. Elaboration de méthodologies de transfert à des bassins versants non jaugés." Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0102/document.

Full text
Abstract:
Climate and orography in the Mediterranean region tend to promote intense rainfall, particularly in autumn. Storms often hit steep catchments. The quickness of the resulting floods leaves only a very short time for forecasts. Peak flow intensity depends on the great variability of rainfall and catchment characteristics. As a matter of fact, observation networks are not adapted to these small space-time scales, and event severity often affects data reliability when data exist; hence the notion of the ungauged catchment emerges. Regionalization in hydrology seeks to determine hydrological variables at locations where these data are lacking. This work contributes to laying the bases of a methodology for transposing parameterizations of a flash-flood-dedicated distributed hydrologic model from gauged catchments to ungauged ones, over a large study area. The MARINE distributed hydrologic model is used [Roux et al., 2011]; its originality lies in the automatically differentiated adjoint model, able to perform calibrations and spatio-temporal sensitivity analyses, in order to improve understanding of flash flood generating mechanisms and to support real-time data assimilation for hydrometeorological forecasts. The MARINE sensitivity analysis addresses the question of physical process understanding. A large panel of hydrologic behaviours is explored. General catchment behaviours are highlighted for the study area [Garambois et al., 2012a]. Selected flood events and a multiple-event calibration technique help to extract catchment parameter sets. These parameterizations are tested on validation events. A variance decomposition method leads to a temporal sensitivity analysis of the model parameters. It enables a better understanding of the dynamics of the physical processes involved in flash flood formation [Garambois et al., 2012c]. Parameterizations are then transferred from gauged catchments to ungauged ones using hydrologic similarity, with a view to developing real-time flood forecasting.
26

Verma, Meghna. "Modeling Host Immune Responses in Infectious Diseases." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/96019.

Full text
Abstract:
Infectious diseases caused by bacteria, fungi, viruses and parasites have affected humans historically. Infectious diseases remain a major cause of premature death and a public health concern globally, with increased mortality and significant economic burden. Unvaccinated individuals and people with suppressed and compromised immune systems are at higher risk of suffering from infectious diseases. In spite of significant advancements in infectious diseases research, the control or treatment process faces challenges. The mucosal immune system plays a crucial role in safeguarding the body from harmful pathogens, while being constantly exposed to the environment. To develop treatment options for infectious diseases, it is vital to understand the immune responses that occur during infection. The two infectious diseases presented here are: i) Helicobacter pylori infection and ii) human immunodeficiency virus (HIV) and human papillomavirus (HPV) co-infection. H. pylori is a bacterium that colonizes the stomach and causes gastric cancer in 1-2% of colonized individuals, but is beneficial for protection against allergies and gastroesophageal diseases. An estimated 85% of H. pylori colonized individuals show no detrimental effects. HIV is a virus that causes AIDS, one of the deadliest and most persistent epidemics. HIV-infected patients are at an increased risk of co-infection with HPV, and report an increased incidence of oral cancer. The goal of this thesis is to elucidate the host immune responses in infectious diseases via the use of computational and mathematical models. First, the thesis reviews the need for computational and mathematical models to study the immune responses in the course of infectious diseases. Second, it presents a novel sensitivity analysis method that identifies important parameters in a hybrid (agent-based/equation-based) model of H. pylori infection. Third, it introduces a novel model representing the HIV/HPV coinfection and compares the simulation results with a clinical study. Fourth, it discusses the need for advanced modeling technologies to achieve a personalized, systems-wide approach and the challenges that can be encountered in the process. Taken together, the work in this dissertation presents modeling approaches that could lead to the identification of host immune factors in infectious diseases in a predictive and more resource-efficient manner.
Doctor of Philosophy
Infectious diseases caused by bacteria, fungi, viruses and parasites have affected humans historically. Infectious diseases remain a major cause of premature death and a public health concern globally, with increased mortality and significant economic burden. These infections can occur either via air, travel to at-risk places, direct person-to-person contact with an infected individual, or through water or the fecal route. Unvaccinated individuals and individuals with suppressed and compromised immune systems, such as HIV carriers, are at higher risk of getting infectious diseases. In spite of significant advancements in infectious diseases research, the control and treatment of these diseases face numerous challenges. The mucosal immune system plays a crucial role in safeguarding the body from harmful pathogens, while being exposed to the environment, mainly food antigens. To develop treatment options for infectious diseases, it is vital to understand the immune responses that occur during infection. In this work, we focus on the gut immune system, which acts like an ecosystem comprising trillions of interacting cells and molecules, including members of the microbiome. The goal of this dissertation is to develop computational models that can simulate host immune responses in two infectious diseases: i) Helicobacter pylori infection and ii) human immunodeficiency virus (HIV)-human papilloma virus (HPV) co-infection. First, it reviews the various mathematical techniques and systems biology based methods. Second, it introduces a "hybrid" model that combines different mathematical and statistical approaches to study H. pylori infection. Third, it highlights the development of a novel HIV/HPV coinfection model and compares the results with a clinical trial study. Fourth, it discusses the challenges that can be encountered in adapting machine learning based computational technologies. Taken together, the work in this dissertation presents modeling approaches that could lead to the identification of host immune factors in infectious diseases in a predictive and more resource-efficient way.
27

Ciric, Catalina. "Conception et développement d'un nouveau modèle d'écosystème aquatique adapté pour décrire la dynamique des espèces dans des mésocosmes lotiques." Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10131.

Full text
Abstract:
Extrapolations from single-species effect data are usually used for assessing the potential effects of a chemical on ecosystems. However, such extrapolation fails to account for the interactions that inevitably exist among the species that coexist within the ecosystem. The use of ecosystem models is an alternative, because it allows between-species interactions to be considered and contaminant effects on populations of non-target species (indirect effects) to be predicted. The aim of this PhD project was to develop a new compartment ecological model for an aquatic ecosystem. The compartments were defined based upon the trophic structure of flow-through mesocosms. The ecological processes were modeled by carefully chosen mathematical functions. A sensitivity analysis was conducted on the parameters of the model in order to identify the non-influential ones. Once identified, these parameters were set to fixed values, whereas the other parameters were calibrated in order to maximize the fit of model outputs to experimental data from the mesocosms.
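The screen-then-calibrate workflow described in this abstract (rank parameters by influence, fix the non-influential ones, fit the rest to mesocosm data) can be illustrated with a minimal Python sketch. The toy algae/grazer model, parameter names, perturbation size and the crude one-at-a-time influence measure below are assumptions made for the example, not the thesis model or the sensitivity method actually used in the work.

```python
# Sketch only: screen parameters with a one-at-a-time influence measure, fix the
# non-influential ones, then calibrate the rest against observations.
# The toy algae/grazer model and all values are invented for illustration.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def ecosystem(y, t, r, K, g, m):
    algae, grazers = y
    dA = r * algae * (1 - algae / K) - g * algae * grazers
    dG = 0.5 * g * algae * grazers - m * grazers
    return [dA, dG]

t = np.linspace(0, 60, 61)
nominal = {"r": 0.8, "K": 50.0, "g": 0.02, "m": 0.1}

def run(params):
    return odeint(ecosystem, [5.0, 1.0], t, args=tuple(params.values()))[:, 0]

# 1) screening: relative change of the algae trajectory for a +20% parameter change
base = run(nominal)
influence = {}
for name in nominal:
    perturbed = dict(nominal)
    perturbed[name] *= 1.2
    influence[name] = np.mean(np.abs(run(perturbed) - base)) / np.mean(base)
ranked = sorted(influence, key=influence.get, reverse=True)
to_calibrate = ranked[:2]                     # keep the most influential, fix the rest

# 2) calibrate only the influential parameters against (synthetic) observations
obs = run({"r": 1.0, "K": 45.0, "g": 0.02, "m": 0.1}) + np.random.normal(0, 0.5, t.size)
def residuals(x):
    p = dict(nominal)
    for name, v in zip(to_calibrate, x):
        p[name] = v
    return run(p) - obs

fit = least_squares(residuals, [nominal[n] for n in to_calibrate])
print("influence ranking:", ranked, "fitted values:", fit.x)
```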
APA, Harvard, Vancouver, ISO, and other styles
28

Artiges, Nils. "De l'instrumentation au contrôle optimal prédictif pour la performance énergétique du bâtiment." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT003/document.

Full text
Abstract:
Face aux forts besoins de réduction de la consommation énergétique et de l’impact environnemental, le bâtiment d’aujourd’hui vise la performance en s’appuyant sur des sources d’énergie de plus en plus diversifiées (énergies renouvelables), une enveloppe mieux conçue (isolation) et des systèmes de gestion plus avancés. Plus la conception vise la basse consommation, plus les interactions entre ses composants sont complexes et peu intuitives. Seule une régulation plus intégrée permettrait de prendre en compte cette complexité et d’optimiser le fonctionnement pour atteindre la basse consommation sans sacrifier le confort. Les techniques de commande prédictive, fondées sur l’utilisation de modèles dynamiques et de techniques d’optimisation, promettent une réduction des consommations et de l’inconfort. Elles permettent en effet d’anticiper l’évolution des sources et des besoins intermittents tout en tirant parti de l’inertie thermique du bâtiment, de ses systèmes et autres éléments de stockage. Cependant, dans le cas du bâtiment, l’obtention d’un modèle dynamique suffisamment précis présente des difficultés du fait d’incertitudes importantes sur les paramètres du modèle et les sollicitations du système. Les avancées récentes dans le domaine de l’instrumentation domotique constituent une opportunité prometteuse pour la réduction de ces incertitudes, mais la conception d’un tel système pour une telle application n’est pas triviale. De fait, il devient nécessaire de pouvoir considérer les problématiques de monitoring énergétique, d’instrumentation, de commande prédictive et de modélisation de façon conjointe. Cette thèse vise à identifier les liens entre commande prédictive et instrumentation dans le bâtiment, en proposant puis exploitant une méthode générique de modélisation du bâtiment, de simulation thermique et de résolution de problèmes d’optimisation. Cette méthodologie met en oeuvre une modélisation thermique multizone du bâtiment, et des algorithmes d’optimisation reposant sur un modèle adjoint et les outils du contrôle optimal. Elle a été concrétisée dans un outil de calcul permettant de mettre en place une stratégie de commande prédictive comportant des phases de commande optimale, d’estimation d’état et de calibration. En premier lieu, nous étudions la formulation et la résolution d’un problème de commande optimale. Nous abordons les différences entre un tel contrôle et une stratégie de régulation classique, entre autres sur la prise en compte d’indices de performance et de contraintes. Nous présentons ensuite une méthode d’estimation d’état basée sur l’identification de gains thermiques internes inconnus. Cette méthode d’estimation est couplée au calcul de commande optimale pour former une stratégie de commande prédictive. Les valeurs des paramètres d’un modèle de bâtiment sont souvent très incertaines. La calibration paramétrique du modèle est incontournable pour réduire les erreurs de prédiction et garantir la performance d’une commande optimale. Nous appliquons alors notre méthodologie à une technique de calibration basée sur des mesures de températures in situ. Nous ouvrons ensuite sur des méthodes permettant d’orienter le choix des capteurs à utiliser (nombre, positionnement) et des paramètres à calibrer en exploitant les gradients calculés par la méthode adjointe. La stratégie de commande prédictive a été mise en oeuvre sur un bâtiment expérimental près de Chambéry. Dans le cadre de cette étude, l’intégralité du bâtiment a été modélisée, et les différentes étapes de notre commande prédictive ont été ensuite déployées de manière séquentielle. Cette mise en oeuvre permet d’étudier les enjeux et les difficultés liées à l’implémentation d’une commande prédictive sur un bâtiment réel. Cette thèse est issue d’une collaboration entre le CEA Leti, l’IFSTTAR de Nantes et le G2ELab, et s’inscrit dans le cadre du projet ANR PRECCISION.
More efficient energy management of buildings through the use of Model Predictive Control (MPC) techniques is a key issue to reduce the environmental impact of buildings. Building energy performance is currently improved by using renewable energy sources, a better design of the building envelope (insulation) and the use of advanced management systems. The more the design aims for high performance, the more interactions and coupling effects between the building, its environment and the conditions of use are important and unintuitive. Only a more integrated regulation would take this complexity into account, and could help to optimize the consumption without compromising the comfort. Model Predictive Control techniques, based on the use of dynamic models and optimization methods, promise a reduction of consumption and discomfort. They can generate energy savings by anticipating the evolution of renewable sources and intermittent needs, while taking advantage of the building thermal inertia and other storage items. However, in the case of buildings, obtaining a good dynamic model is tough, due to important uncertainties on model parameters and system solicitations. Recent advances in the field of wireless sensor networks are fostering the deployment of sensors in buildings, and offer a promising opportunity to reduce these errors. Nevertheless, designing a sensor network dedicated to MPC is not obvious, and energy monitoring, instrumentation, modeling and predictive control matters must be considered jointly. This thesis aims at establishing the links between MPC and instrumentation needs in buildings. We propose a generic method for building modeling, thermal simulation and optimization. This methodology involves a multi-zone thermal model of the building, and efficient optimization algorithms using an adjoint model and tools from optimal control theory. It was implemented in a specific toolbox to develop a predictive control strategy with optimal control phases, state estimation phases and model calibration. At first, we study the formulation and resolution of an optimal control problem. We discuss the differences between such a control and a conventional regulation strategy, through performance indicators. Then, we present a state estimation method based on the identification of unknown internal gains. This estimation method is subsequently coupled with the optimal control method to form a predictive control strategy. As the parameter values of a building model are often very uncertain, parametric model calibration is essential to reduce prediction errors and to ensure the MPC performance. Consequently, we apply our methodology to a calibration technique based on in situ temperature measurements. We also discuss how our approach can lead to selection techniques in order to choose calibrated parameters and sensors for MPC purposes. Eventually, the predictive control strategy was implemented on an experimental building, at CEA INES, near Chambéry. The entire building was modeled, and the different steps of the control strategy were applied sequentially through an online supervisor. This experiment gave us useful feedback on our methodology in a real case. This thesis is the result of a collaboration between CEA Leti, IFSTTAR Nantes and G2ELab, and is part of the ANR PRECCISION project.
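As a rough illustration of the optimal-control step discussed in this abstract, the sketch below optimizes a 24-hour heating schedule for a single-zone resistance-capacitance model with a generic optimizer. The R and C values, weather profile, comfort weighting, bounds, and the use of a black-box scipy solver instead of the adjoint-based method developed in the thesis are all assumptions made for the example.

```python
# Minimal sketch (assumed values, not the thesis model): single-zone R-C thermal
# model with a finite-horizon heating schedule that trades energy against comfort.
import numpy as np
from scipy.optimize import minimize

dt, horizon = 3600.0, 24                     # 1 h steps over a 24 h horizon
R, C = 5e-3, 1e7                             # envelope resistance (K/W) and capacity (J/K)
T_out = 5.0 + 3.0 * np.sin(np.linspace(0, 2 * np.pi, horizon))   # outdoor temperature (degC)

def simulate(heat_w, T0=18.0):
    """Indoor temperature trajectory for a given hourly heating power profile."""
    T, Tk = np.empty(horizon), T0
    for k in range(horizon):
        Tk = Tk + dt / C * ((T_out[k] - Tk) / R + heat_w[k])
        T[k] = Tk
    return T

def cost(heat_w, setpoint=20.0, comfort_weight=50.0):
    T = simulate(heat_w)
    energy_kwh = heat_w.sum() * dt / 3.6e6
    discomfort = np.sum(np.maximum(setpoint - T, 0.0) ** 2)
    return energy_kwh + comfort_weight * discomfort

res = minimize(cost, x0=np.full(horizon, 1000.0),
               bounds=[(0.0, 5000.0)] * horizon, method="L-BFGS-B")
print("optimal hourly heating power (W):", np.round(res.x))
```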
APA, Harvard, Vancouver, ISO, and other styles
29

Chouquet, Julie. "Development of a method for building life cycle analysis at an early design phase Implementation in a tool - Sensitivity and uncertainty of such a method in comparison to detailed LCA software = Calibration of new flavor tagging algorithms using Bs oscillations /." [S.l. : s.n.], 2007. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000009290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Hirtzel, Joanne. "Exploration prospective des mobilités résidentielles dans une agglomération urbaine au moyen d'un modèle de simulation multi-agents (MOBISIM)." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA1005/document.

Full text
Abstract:
Proposer une offre en logements adaptée aux différents besoins et préférences des ménages représente un enjeu important pour les acteurs publics de l’aménagement. Ces besoins et préférences dépendent des caractéristiques des ménages et des changements qu’ils peuvent connaître dans leur cycle de vie (mise en couple, naissance, séparation…). Les facteurs participant aux choix résidentiels sont nombreux (attributs du logement, caractéristiques de l’environnement résidentiel) et interviennent différemment selon les types de ménages. Les dynamiques résidentielles impliquent ainsi une grande variété d’éléments, en interaction les uns avec les autres, et les relations de cause à effet sont difficiles à identifier. Par conséquent, il n’est pas possible de prévoir le comportement résidentiel des ménages pas plus que leurs évolutions possibles sans outil adapté. Pour étudier les dynamiques résidentielles intra-urbaines, nous utilisons dans cette thèse un modèle de simulation des mobilités résidentielles (Mobisim-MR) intégré dans une plateforme de simulation LUTI individu-centrée : Mobisim. Mobisim-MR permet de déterminer, pour chaque année de simulation, les ménages qui déménagent et leur nouvelle localisation résidentielle. En amont de Mobisim-MR, un modèle de simulation des évolutions démographiques (Mobisim-Démo) a été créé au sein de la plateforme Mobisim. Il permet de reproduire de manière dynamique et individu-centrée l’évolution des ménages dans leur cycle de vie. Une partie de la thèse est dédiée au paramétrage de ces deux modèles, étape préalable nécessaire à la simulation de scénarios. Un autre volet de la thèse concerne l’exploration du comportement du modèle Mobisim-MR pour évaluer la stabilité des résultats de simulation et leur cohérence (analyse de sensibilité). L’utilisation de modèles individu-centrés est relativement récente en géographie, d’où l’absence de protocole standard pour l’exploration de tels modèles. Un protocole spécifique a été conçu pour explorer le comportement de Mobisim-MR. Ce protocole tient compte de la nature des paramètres du modèle, des contraintes techniques de simulation et de l’objectif pour lequel le modèle a été conçu. Le dernier volet de la thèse consiste en des analyses thématiques visant à étudier l’impact de deux scénarios de politiques de construction de logements sur l’agglomération du Grand Besançon. Ces analyses montrent la capacité de Mobisim-MR à répondre à des questions concrètes d’aménagement et à apporter des éléments de discussion aux acteurs publics en charge des politiques de logement.
To ensure that housing supply is suitable to households’ needs and preferences represents a major planning concern. These needs and preferences depend on the households’ characteristics and on their lifecycle changes (union, birth, divorce…). Residential choice factors are numerous (housing and residential environment characteristics) and their role is often different according to the types of households. Residential dynamics involve a great variety of elements, in interaction with each other, and the causal relationships are difficult to identify. Thus, it is not possible to predict the households’ residential behaviour, nor their possible evolutions, without a suitable tool. To study intra-urban residential dynamics, we use a residential mobility simulation model (Mobisim-MR), integrated in an agent-based LUTI simulation platform: Mobisim. For each simulated year, Mobisim-MR allows for determination of households which move and their new residential location. Prior to Mobisim-MR, we created a demographic microsimulation model (Mobisim-Démo) within the Mobisim platform. It allows reproducing households lifecycle evolutions in a dynamic and agent-based way. A part of the thesis is dedicated to the calibration of both models, a required stage preliminary to scenarios simulation. Another part of the thesis concerns the exploration of Mobisim-MR model behaviour, in order to assess the simulation results’ stability and their consistency (sensitivity analysis). Agent-based models use is quite recent in geography, explaining the lack of standard protocol to explore such models. A specific protocol has been designed to explore the behaviour of Mobisim-MR. This protocol takes into consideration the parameters characteristics, simulation technical constraints, and the initial design for which the model has been built.The last part of the thesis consists of thematic analyses aimed at studying the impact of two housing construction planning scenarios in the urban region of Besançon (named le Grand Besançon). These analyses highlight the ability of Mobisim-MR to answer concrete planning questions and to initiate discussion among urban planners
APA, Harvard, Vancouver, ISO, and other styles
31

Xenakis, Georgios. "Assessment of carbon sequestration and timber production of Scots pine across Scotland using the process-based model 3-PGN." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/2038.

Full text
Abstract:
Forests are a valuable resource for humans providing a range of products and services such as construction timber, paper and fuel wood, recreation, as well as living quarters for indigenous populations and habitats for many animal and bird species. Most recent international political agreements such as the Kyoto Protocol emphasise the role of forests as a major sink for atmospheric carbon dioxide mitigation. However, forest areas are rapidly decreasing worldwide. Thus, it is vital that efficient strategies and tools are developed to encourage sustainable ecosystem management. These tools must be based on known ecological principles (such as tree physiological and soil nutrient cycle processes), capable of supplying fast and accurate temporal and spatial predictions of the effects of management on both timber production and carbon sequestration. This thesis had two main objectives. The first was to investigate the environmental factors affecting growth and carbon sequestration of Scots pine (Pinus sylvestris L.) across Scotland, by developing a knowledge base through a statistical analysis of old and novel field datasets. Furthermore, the process-based ecosystem model 3-PGN was developed by coupling the existing models 3-PG and ICBM. 3-PGN was calibrated using a Bayesian approach based on Markov chain Monte Carlo simulations and was validated for plantation stands. Sensitivity and uncertainty analyses provided an understanding of the internal feedbacks of the model. Further simulations gave a detailed eco-physiological interpretation of the environmental factors affecting Scots pine growth and provided an assessment of carbon sequestration under the scenario of sustainable, normal production and of the environmental effects upon it. Finally, the study investigated the spatial and temporal patterns of timber production and carbon sequestration by using the spatial version of the model and applying advanced spatial analysis techniques. The second objective was to help close the gap between environmental research and forest management, by setting a strategic framework for a process-based tool for sustainable ecosystem management. The thesis demonstrated the procedures for a site classification scheme based on modelling results and a yield table validation procedure, which can provide a way forward in supporting policies for forest management and ensuring their continued existence in the face of the present and future challenges.
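A toy illustration of the Bayesian calibration idea mentioned above (Markov chain Monte Carlo sampling of model parameters against observations) is sketched below. The asymptotic growth curve, flat priors, noise level and synthetic data are all invented; the thesis calibrates the full 3-PGN process model, not this two-parameter toy.

```python
# Hedged sketch of MCMC calibration (Metropolis sampler) on a toy growth curve.
import numpy as np

rng = np.random.default_rng(0)
ages = np.arange(5, 55, 5)
obs_biomass = 120.0 * (1 - np.exp(-0.06 * ages)) ** 1.6 + rng.normal(0, 4, ages.size)

def model(theta):                       # theta = (asymptote, rate)
    A, k = theta
    return A * (1 - np.exp(-k * ages)) ** 1.6

def log_post(theta, sigma=4.0):
    A, k = theta
    if not (10 < A < 500 and 0.001 < k < 0.5):      # flat priors with hard bounds
        return -np.inf
    resid = obs_biomass - model(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([100.0, 0.05])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [2.0, 0.002])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:         # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                      # discard burn-in
print("posterior mean (asymptote, rate):", chain.mean(axis=0))
```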
APA, Harvard, Vancouver, ISO, and other styles
32

Moretti, Paul. "Performances, modélisation et limites d'un procédé à lit fluidisé associant culture libre et fixée (IFAS) pour le traitement du carbone et de l'azote des eaux résiduaires." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10214/document.

Full text
Abstract:
Motivées par des normes de rejets en azote toujours plus sévères et par les besoins d'extension de certaines stations d'épuration, les agglomérations sont à la recherche de nouvelles technologies de traitement plus compactes et plus performantes. Dans ce sens, le procédé hybride, à lit fluidisé placé dans un réacteur de type boues activées (IFAS), est une nouvelle technologie de traitement du carbone et de l'azote très attractive. L'objectif de cette thèse est d'optimiser le dimensionnement du procédé IFAS en configuration trois bassins (anoxie/aérobie BA/aérobie IFAS) et d'apporter des recommandations sur la conduite du procédé (charge massique appliquée, température.). Pour cela, une double démarche expérimentale et numérique a été mise en place. Un pilote de 3 m3 alimenté en eau usée brute a été conçu, instrumenté et étudié pendant 2 ans au cours de 7 périodes stabilisées (entre 0,15 et 0,30 kgDBO5/kgMVSLM/j, température entre 10 et 22°C, et le séquençage de l'aération dans les bassins). La concentration en MES dans la liqueur mixte a été maintenue à 2,3 gMES/L et la concentration en oxygène entre 2 à 6 mgO2/L. Les capacités de nitrification du biofilm et de la liqueur mixte (NPRmax) ont été mesurées tous les 15 jours. Les performances d'élimination de l'azote (nitrification et dénitrification) et du carbone observées sont restées supérieur à 90% d'élimination pour une charge massique maximale de 0,30 kgDBO5/kgMVSLM/j entre 16 à 24°C. Le biofilm dispose d'une capacité de nitrification maximale de 0,90 gN/m2/j et tributaire des concentrations en oxygène dans la liqueur mixte (contraintes diffusionnelle). Le biofilm contribue en moyenne à hauteur de 60% du flux total nitrifié dans le réacteur IFAS pour des âges de boues < 5 jours à 16°C. La diminution du MLSRT en dessous de 4 jours a permis de limiter le développement des bactéries autotrophes dans la liqueur mixte (minimum 10% du flux total nitrifié par la liqueur mixte) mais pas de les supprimer totalement (apport de nitrifiante par détachement de biofilm)
Motivated by increasingly demanding discharge consents and by the need to improve overall treatment capacity, water authorities are continually examining better-performing and more compact wastewater treatment technologies. Thanks to its compactness and its capacity to treat both organic matter and nitrogen at an affordable cost, the IFAS process represents an attractive option for improving the performance of retrofitted activated sludge plants. The main objective of this thesis is to optimize the IFAS process with regard to key operating parameters such as dimensioning and F/M ratio, by combining experimental and mathematical modelling approaches. A 3 m3 pilot IFAS fed with raw wastewater was operated at the experimental hall of La Feyssine wastewater treatment plant, Villeurbanne, for a period of 2 years. The IFAS process was separated into 3 tanks to treat organic matter and total nitrogen separately (anoxic/aerobic suspended-growth/aerobic IFAS). The experimental study was divided into 7 periods, each with different steady-state operating conditions. The feasibility of nitrification at steady F/M ratios (between 0,15 and 0,30 kgBOD5/kgMLVSS/d), at constant temperatures (between 10 and 22°C) and at different oxygen supply rates was investigated. TSS in the mixed liquor was maintained at 2,3 gMLTSS/L and the oxygen concentration between 2 and 6 mgO2/L. Biofilm mass and the combined nitrification capacity of biofilm and mixed liquor (NPRmax) were measured on a weekly basis. The removal performance was up to 90% for nitrogen and carbon treatment with a maximal F/M ratio of 0,30 kgBOD5/kgMLVSS/d between 16°C and 24°C. The biofilm was able to nitrify 0,90 gN/m2/d (NPRmax) depending on the oxygen concentration in the mixed liquor (diffusional limitation). Under the operating conditions tested in this study, the biofilm was responsible for 40 to 70% of NOx-N production in the IFAS reactor during nitrification. Decreasing the MLSRT to less than 4 days limits the growth of autotrophic bacteria in the mixed liquor but does not halt it completely.
APA, Harvard, Vancouver, ISO, and other styles
33

Hansson, Klas. "Water and Heat Transport in Road Structures : Development of Mechanistic Models." Doctoral thesis, Uppsala University, Department of Earth Sciences, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4822.

Full text
Abstract:

The coupled transport of water and heat, involving freezing and thawing, in the road structure and its immediate environment is important to consider for optimal design and maintenance of roads and when assessing solute transport, of e.g. de-icing salt, from roads. The objective of this study was to develop mechanistic models, and measurement techniques, suitable to describe and understand water flow and heat flux in road structures exposed to a cold climate.

Freezing and thawing was accounted for by implementing new routines in two numerical models (HYDRUS1D/2D). The sensitivity of the model output to changes in parameter values and operational hydrological data was investigated by uncertainty and sensitivity analyses. The effect of rainfall event characteristics and asphalt fractures on the subsurface flow pattern was investigated by scenario modelling. The performance of water content reflectometers (WCR), measuring water content, was evaluated using measurements in two road structure materials. A numerical model was used to simulate WCR sensor response. The freezing/thawing routines were stable and provided results in agreement with laboratory measurements. Frost depth, thawing period, and freezing-induced water redistribution in a model road was greatly affected by groundwater level and type of subgrade. The simulated subsurface flow patterns corresponded well with published field observations. A new method was successful in enabling the application of time domain reflectometer (TDR) calibration equations to WCR output. The observed distortion in sampling volume for one of the road materials could be explained by the WCR sensor numerical model. Soil physical, hydrological, and hydraulic modules proved successful in simulating the coupled transport of water and heat in and on the road structure. It was demonstrated in this thesis that numerical models can improve the interpretation and explanation of measurements. The HYDRUS model was an accurate and pedagogical tool, clearly useful in road design and management.
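The abstract mentions applying TDR calibration equations to WCR output. A hedged sketch of that chain is shown below: the probe-specific conversion from WCR period to apparent permittivity is a made-up placeholder (the thesis derives its own method and coefficients), while the final step uses the widely published Topp et al. (1980) TDR equation.

```python
# Illustration only: hypothetical WCR-period -> permittivity conversion, followed
# by the standard Topp et al. (1980) calibration for volumetric water content.
def wcr_period_to_permittivity(period_us, a=-8.0, b=25.0):
    """Hypothetical linear probe-specific conversion; coefficients are placeholders."""
    return a + b * period_us

def topp_water_content(ka):
    """Topp et al. (1980): volumetric water content from apparent permittivity Ka."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

for period in (0.7, 0.9, 1.1):          # example WCR output periods in microseconds
    ka = wcr_period_to_permittivity(period)
    print(f"period {period:.2f} us -> Ka {ka:.1f} -> theta {topp_water_content(ka):.3f}")
```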

APA, Harvard, Vancouver, ISO, and other styles
34

Sund, Björn. "Economic evaluation, value of life, stated preference methodology and determinants of risks." Doctoral thesis, Örebro universitet, Handelshögskolan vid Örebro universitet, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-12557.

Full text
Abstract:
The first paper examines the value of a statistical life (VSL) for out-of-hospital cardiac arrest (OHCA) victims. We found VSL values to be higher for OHCA victims than for people who die in road traffic accidents and a lower-bound estimate of VSL for OHCA would be in the range of 20 to 30 million Swedish crowns (SEK). The second paper concerns hypothetical bias in contingent valuation (CV) studies. We investigate the link between the determinants and empirical treatment of uncertainty through certainty calibration and find that the higher the confidence of the respondents the more we can trust that stated WTP is correlated to actual WTP. The third paper investigates the performance of two communication aids (a flexible community analogy and an array of dots) in valuing mortality risk reductions for OHCA. The results do not support the prediction of expected utility theory, i.e. that WTP for a mortality risk reduction increases with the amount of risk reduction (weak scope sensitivity), for any of the communication aids. The fourth paper presents a cost-benefit analysis to evaluate the effects of dual dispatch defibrillation by ambulance and fire services in the County of Stockholm. The intervention had positive economic effects, yielding a benefit-cost ratio of 36, a cost per quality-adjusted life-year (QALY) of € 13 000 and the cost per saved life was € 60 000. The fifth paper explores how different response times from OHCA to defibrillation affect patients’ survival rates by using geographic information systems (GIS). The model predicted a baseline survival rate of 3.9% and reducing the ambulance response time by 1 minute increased survival to 4.6%. The sixth paper analyzes demographic determinants of incident experience and risk perception, and the relationship between the two, for eight different risk domains. Males and highly educated respondents perceive their risks lower than what is expected compared to actual incident experience.
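For readers unfamiliar with the arithmetic behind figures such as a benefit-cost ratio, the short sketch below strings the relevant quantities together for a dual-dispatch style intervention. Every input value is an illustrative placeholder, not data taken from these papers.

```python
# Back-of-envelope cost-benefit arithmetic with invented placeholder numbers.
vsl_sek = 25e6                     # assumed value of a statistical life, SEK
baseline_survival = 0.039          # assumed survival share without the intervention
improved_survival = 0.046          # assumed survival share with faster defibrillation
ohca_cases_per_year = 1000         # assumed number of treated cardiac arrest cases
programme_cost_sek = 5e6           # assumed annual cost of dual dispatch

lives_saved = (improved_survival - baseline_survival) * ohca_cases_per_year
benefit_sek = lives_saved * vsl_sek
print("lives saved per year:", lives_saved)
print("benefit-cost ratio:", benefit_sek / programme_cost_sek)
```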
APA, Harvard, Vancouver, ISO, and other styles
35

Figueiredo, Almeida Sofia José. "Synchronisation d'oscillateurs biologiques : modélisation, analyse et couplage du cycle cellulaire et de l’horloge circadienne." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4239/document.

Full text
Abstract:
Le cycle de division cellulaire et l'horloge circadienne sont deux processus fondamentaux de la régulation cellulaire qui génèrent une expression rythmique des gènes et des protéines. Dans les cellules mammifères, les mécanismes qui sous-tendent les interactions entre le cycle cellulaire et l'horloge restent très mal connus. Dans cette thèse, nous étudions ces deux oscillateurs biologiques, à la fois individuellement et en tant que système couplé, pour comprendre et reproduire leurs principales propriétés dynamiques, détecter les composants essentiels du cycle cellulaire et de l'horloge, et identifier les mécanismes de couplage. Chaque oscillateur biologique est modélisé par un système d'équations différentielles ordinaires non linéaires et ses paramètres sont calibrés par rapport à des données expérimentales: le modèle du cycle cellulaire se base sur les modifications post-traductionnelles du complexe Cdk1-CycB et mène à un oscillateur de relaxation dont la dynamique et la période sont contrôlés par les facteurs de croissance; le modèle de l'horloge circadienne reproduit l'oscillation antiphasique BMAL1/PER:CRY et l'adaptation de la durée des états d'activation et répression par rapport à deux signaux d’entrée hormonaux déphasés. Pour analyser les interactions entre les deux oscillateurs nous étudions la synchronisation des deux rythmes pour des régimes de couplage uni- ou bi-directionnels. Les simulations numériques reproduisent les ratios entre les périodes de l'horloge et du cycle cellulaire, tels que 1:1, 3:2 et 5:4. Notre étude suggère des mécanismes pour le ralentissement du cycle cellulaire avec des implications pour la conception de nouvelles chronothérapies
The cell division cycle and the circadian clock are two fundamental processes of cellular control that generate cyclic patterns of gene activation and protein expression, which tend to be synchronous in healthy cells. In mammalian cells, the mechanisms that govern the interactions between cell cycle and clock are still not well identified. In this thesis we analyze these two biological oscillators, both separately and as a coupled system, to understand and reproduce their main dynamical properties, uncover essential cell cycle and clock components, and identify coupling mechanisms. Each biological oscillator is first modeled by a system of non-linear ordinary differential equations and its parameters calibrated against experimental data: the cell cycle model is based on post-translational modifications of the mitosis promoting factor and results in a relaxation oscillator whose dynamics and period are controlled by growth factor; the circadian clock model is transcription-based, recovers antiphasic BMAL1/PER:CRY oscillation and relates clock phases to metabolic states. This model shows how the relative duration of activating and repressing molecular clock states is adjusted in response to two out-of-phase hormonal inputs. Finally, we explore the interactions between the two oscillators by investigating the control of synchronization under uni- or bi-directional coupling schemes. Simulations of experimental protocols replicate the oscillators’ period-lock response and recover observed clock to cell cycle period ratios such as 1:1, 3:2 and 5:4. Our analysis suggests mechanisms for slowing down the cell cycle with implications for the design of new chronotherapies
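Period-locking between two coupled rhythms, the central phenomenon studied in this thesis, can be caricatured with two phase oscillators. The sketch below is not the thesis's mechanistic ODE models; the natural periods and coupling strength are arbitrary choices that happen to produce 1:1 locking, and the printout simply reports the observed period ratio.

```python
# Caricature of oscillator coupling: a 24 h "clock" phase forcing a 21 h "cell cycle" phase.
import numpy as np

dt, n_steps = 0.01, 200000          # time step in hours, total 2000 simulated hours
w_clock = 2 * np.pi / 24.0          # circadian oscillator, 24 h natural period
w_cycle = 2 * np.pi / 21.0          # cell cycle, 21 h natural period
coupling = 0.05                     # clock -> cell cycle coupling strength (assumed)

phi_clock, phi_cycle = 0.0, 0.0
cycle_completions = 0
for _ in range(n_steps):
    phi_clock += w_clock * dt
    phi_cycle += (w_cycle + coupling * np.sin(phi_clock - phi_cycle)) * dt
    if phi_cycle >= 2 * np.pi:      # count completed cell cycles
        phi_cycle -= 2 * np.pi
        cycle_completions += 1

clock_cycles = n_steps * dt / 24.0
print("cell cycles per clock cycle ~", cycle_completions / clock_cycles)
```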
APA, Harvard, Vancouver, ISO, and other styles
36

Rusina, Michal. "Stanovení vlastností ultrazvukových sond." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221368.

Full text
Abstract:
This master's thesis deals with the measurement of the properties of ultrasound probes. Ultrasound probes and their parameters significantly affect the quality of the final image. Parameter values of the probes may change with use, because probes may be damaged and the final image may no longer be correct. For these reasons, measuring probe parameters is very important. This thesis describes and implements measurements of the spatial resolution, the focal zone, the sensitivity of the probe and the length of the dead zone. Two ultrasonic phantoms were used for the measurements. In the practical part, a program called Mereni_parametru was created, which allows the values of the four parameters to be determined from captured images of the phantom. Measured values for five ultrasonic probes are then listed and described. Results for two of these probes are compared with the parameters given by the manufacturers.
APA, Harvard, Vancouver, ISO, and other styles
37

Lepine, Paul. "Recalage stochastique robuste d'un modèle d'aube de turbine composite à matrice céramique." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD051/document.

Full text
Abstract:
Les travaux de la présente thèse portent sur le recalage de modèles dynamiques d’aubes de turbine composites à matrice céramique. Ils s’inscrivent dans le cadre de la quantification d’incertitudes pour la validation de modèles et ont pour objectif de fournir des outils d’aide à la décision pour les ingénieurs des bureaux d’études. En effet, la dispersion importante observée lors des campagnes expérimentales invalide l’utilisation des méthodes de recalage déterministe. Après un état de l’art sur la relation entre les incertitudes et la physique, l’approche de Vérification & Validation a été introduite comme approche permettant d’assurer la crédibilité des modèles numériques. Puis, deux méthodes de recalages stochastiques, permettant de déterminer la distribution statistique des paramètres, ont été comparées sur un cas académique. La prise en compte des incertitudes n’élude pas les potentielles compensations entre paramètres. Par conséquent, des indicateurs ont été développés afin de détecter la présence de ces phénomènes perturbateurs. Ensuite, la théorie info-gap a été employée en tant que moyen de modéliser ces méconnaissances. Une méthode de recalage stochastique robuste a ainsi été proposée, assurant un compromis entre la fidélité du modèle aux essais et la robustesse aux méconnaissances. Ces outils ont par la suite été appliqués sur un modèle éléments
This work is focused on the stochastic updating of a ceramic matrix composite turbine blade model. It is part of the uncertainty quantification framework for model validation. The aim is to enhance the existing tools used by industrial decision makers. Indeed, considerable dispersion was measured during the experimental campaigns, preventing the use of deterministic approaches. The first part of this thesis is dedicated to the relationship between mechanical science and uncertainty. Thus, Verification and Validation was introduced as the processes by which credibility in numerical models is established. Then two stochastic updating techniques, able to handle statistical distributions, were compared through an academic example. Nevertheless, taking uncertainties into account does not remove potential compensating effects between parameters. Therefore, criteria were developed in order to detect these disturbing phenomena. Info-gap theory was employed as a means to model this lack of knowledge. Paired with the stochastic updating method, a robust stochastic approach has been proposed. Results demonstrate a trade-off relationship between the model's fidelity and robustness. The developed tools were applied to a ceramic matrix composite turbine blade finite element model.
APA, Harvard, Vancouver, ISO, and other styles
38

Nayyerloo, Mostafa. "Real-time Structural Health Monitoring of Nonlinear Hysteretic Structures." Thesis, University of Canterbury. Department of Mechanical Engineering, 2011. http://hdl.handle.net/10092/6581.

Full text
Abstract:
The great social and economic impact of earthquakes has made necessary the development of novel structural health monitoring (SHM) solutions for increasing the level of structural safety and assessment. SHM is the process of comparing the current state of a structure’s condition relative to a healthy baseline state to detect the existence, location, and degree of likely damage during or after a damaging input, such as an earthquake. Many SHM algorithms have been proposed in the literature. However, a large majority of these algorithms cannot be implemented in real time. Therefore, their results would not be available during or immediately after a major event for urgent post-event response and decision making. Further, these off-line techniques are not capable of providing the input information required for structural control systems for damage mitigation. The small number of real-time SHM (RT-SHM) methods proposed in the past, resolve these issues. However, these approaches have significant computational complexity and typically do not manage nonlinear cases directly associated with relevant damage metrics. Finally, many available SHM methods require full structural response measurement, including velocities and displacements, which are typically difficult to measure. All these issues make implementation of many existing SHM algorithms very difficult if not impossible. This thesis proposes simpler, more suitable algorithms utilising a nonlinear Bouc-Wen hysteretic baseline model for RT-SHM of a large class of nonlinear hysteretic structures. The RT-SHM algorithms are devised so that they can accommodate different levels of the availability of design data or measured structural responses, and therefore, are applicable to both existing and new structures. The second focus of the thesis is on developing a high-speed, high-resolution, seismic structural displacement measurement sensor to enable these methods and many other SHM approaches by using line-scan cameras as a low-cost and powerful means of measuring structural displacements at high sampling rates and high resolution. Overall, the results presented are thus significant steps towards developing smart, damage-free structures and providing more reliable information for post-event decision making.
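The Bouc-Wen hysteretic baseline model named in this abstract can be written down compactly; the sketch below integrates a single-degree-of-freedom version under a sinusoidal ground acceleration. The parameter values and excitation are illustrative only, and the real-time identification layer developed in the thesis is not shown.

```python
# Minimal single-degree-of-freedom Bouc-Wen oscillator under base excitation.
# All parameter values and the forcing are assumed for illustration.
import numpy as np
from scipy.integrate import odeint

m, c, k = 1.0e3, 2.0e2, 1.0e5                 # mass (kg), damping (N s/m), stiffness (N/m)
alpha, A, beta, gamma, n = 0.3, 1.0, 0.5, 0.5, 1.0   # Bouc-Wen hysteresis parameters

def ground_accel(t):
    return 3.0 * np.sin(2 * np.pi * 1.5 * t)  # assumed sinusoidal ground acceleration, m/s^2

def bouc_wen(state, t):
    x, v, z = state                            # relative displacement, velocity, hysteretic variable
    dz = A * v - beta * abs(v) * abs(z) ** (n - 1) * z - gamma * v * abs(z) ** n
    restoring = alpha * k * x + (1 - alpha) * k * z
    a = (-m * ground_accel(t) - c * v - restoring) / m
    return [v, a, dz]

t = np.linspace(0, 20, 4001)
x, v, z = odeint(bouc_wen, [0.0, 0.0, 0.0], t).T
print("peak relative displacement (m):", np.abs(x).max())
```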
APA, Harvard, Vancouver, ISO, and other styles
39

Vo, Ngoc Duong. "Modélisation hydrologique déterministe pour l'évaluation des risques d'inondation et le changement du climat en grand bassin versant. Application au bassin versant de Vu Gia Thu Bon, Viet Nam." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4056/document.

Full text
Abstract:
Le changement climatique dû à l'augmentation des émissions de gaz à effet de serre est considéré comme l'un des principaux défis pour les êtres humains dans 21ème siècle. Il conduira à des changements dans les précipitations, l'humidité atmosphérique, augmentation de l'évaporation et probablement augmenter la fréquence des événements extrêmes. Les conséquences de ces phénomènes auront une influence sur de nombreux aspects de la société humaine. Donc, il y a une nécessité d'avoir une estimation robuste et précise de la variation des facteurs naturels dus au changement climatique, au moins dans les événements de cycle et d'inondation hydrologiques pour fournir une base solide pour atténuer les impacts du changement climatique et s'adapter à ces défis. Le but de cette étude est de présenter une méthodologie pour évaluer les impacts de différents scénarios de changement climatique sur une zone inondable du bassin de la rivière côtière dans la région centrale du Viet Nam - bassin versant de Vu Gia Thu Bon. Les simulations hydrologiques sont basées sur un modèle hydrologique déterministe validé qui intègre la géologie, les sols, la topographie, les systèmes fluviaux et les variables climatiques. Le climat de la journée présente, sur la période de 1991-2010 a été raisonnablement simulée par le modèle hydrologique. Climat futur (2091-2100) information a été obtenue à partir d'une réduction d'échelle dynamique des modèles climatiques mondiaux. L'étude analyse également les changements dans la dynamique des inondations de la région de l'étude, le changement hydrologique et les incertitudes du changement climatique simulation
Climate change due to the increase of greenhouse gas emissions is considered to be one of the major challenges to mankind in the 21st century. It will lead to changes in precipitation, atmospheric moisture, increase in evaporation and probably a higher frequency of extreme events. The consequences of these phenomena will have an influence on many aspects of human society. Particularly at river deltas, coastal regions and developing countries, the impacts of climate change to socio-economic development become more serious. So there is a need for a robust and accurate estimation of the variation of natural factors due to climate change, at least in the hydrological cycle and flooding events to provide a strong basis for mitigating the impacts of climate change and to adapt to these challenges. The aim of this study is to present a methodology to assess the impacts of different climate change scenarios on a flood prone area of a coastal river basin in the central region of Viet Nam – Vu Gia Thu Bon catchment. The hydrological simulations are based on a validated deterministic hydrological model which integrates geology, soil, topography, river systems and climate variables. The present day climate, over the period of 1991-2010 was reasonably simulated by the hydrological model. Future climate (2091-2100) information was obtained from a dynamical downscaling of the global climate models. The study also analyzes the changes in the flood dynamics of the study region, the hydrological shift and the uncertainties of climate change simulation
APA, Harvard, Vancouver, ISO, and other styles
40

Luttmann, Michel. "Ellipsométrie spectroscopique à angle variable : applications à l'étude des propriétés optiques de semi-conducteurs II-VI et à la caractérisation de couches à gradient d'indice." Université Joseph Fourier (Grenoble), 1994. http://www.theses.fr/1994GRE10232.

Full text
Abstract:
This work contributes to the characterisation of thin films by variable-angle spectroscopic ellipsometry. We present the various modifications made to the original ellipsometer and describe the calibration procedures used. The reduction of systematic and random errors is also addressed. An original study was carried out on the partial derivatives of the ellipsometric angles psi and delta with respect to the different sample parameters (thicknesses, indices, angle of incidence). This led us to introduce the concept of relative integral sensitivity (RIS), which proved very useful for locating the most informative angular regions and for comparing the sensitivities of the ellipsometric measurement to the various sample parameters. The value of spectroscopic measurements at several angles of incidence is discussed. Two main applications are treated in this dissertation. The first concerns the measurement of the refractive indices of wide-gap II-VI semiconductors. The study covers bulk CdMnTe substrates and epitaxial CdMgTe layers. An index law describing the behaviour of the dielectric function of CdMgTe over the whole spectral range is proposed. In the transparent region, two Sellmeier laws giving the indices of CdMnTe and CdMgTe for any manganese or magnesium concentration were established. The second application concerns the characterisation of graded-index layers. A method for analysing layers with an a priori arbitrary index profile is proposed. It was validated on inhomogeneous GaAlAs and silicon oxynitride layers. Ellipsometry proved to be a technique well suited to this type of characterisation, since fourth-degree polynomial index profiles could be identified in silicon oxynitride layers with strong index gradients.
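Since the abstract refers to Sellmeier laws for the transparent region, a generic two-term Sellmeier dispersion relation is sketched below. The coefficients are placeholders chosen to give roughly CdTe-like index values, not the fitted coefficients reported in the thesis.

```python
# Generic two-term Sellmeier relation; B and C coefficients are placeholders.
import numpy as np

def sellmeier_index(wavelength_um, B=(4.68, 1.53), C=(0.11, 1400.0)):
    lam2 = wavelength_um ** 2
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return np.sqrt(n2)

for lam in (0.8, 1.5, 3.0):            # wavelengths in micrometres
    print(f"lambda = {lam} um  ->  n = {sellmeier_index(lam):.3f}")
```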
APA, Harvard, Vancouver, ISO, and other styles
41

Friberg, Annika. "Interaktionskvalitet - hur mäts det?" Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20810.

Full text
Abstract:
Den tekniska utvecklingen har lett till att massiva mängder av information sänds, i höga hastigheter. Detta flöde måste vi lära oss att hantera. För att maximera nyttan av de nya teknikerna och undkomma de problem som detta enorma informationsflöde bär med sig, bör interaktionskvalitet studeras. Vi måste anpassa gränssnitt efter användaren eftersom denne inte har möjlighet att anpassa sig till, och sortera i för stora informationsmängder. Vi måste utveckla system som gör människan mer effektiv vid användande av gränssnitt. För att anpassa gränssnitten efter användarens behov och begränsningar krävs kunskaper om den mänskliga kognitionen. När kognitiv belastning studeras är det viktigt att en så flexibel, lättillgänglig och icke-påträngande teknik som möjligt används för att få objektiva mätresultat, samtidigt som pålitligheten är av största vikt. För att kunna designa gränssnitt med hög interaktionskvalitet krävs en teknik att utvärdera dessa. Målet med uppsatsen är att fastställa en mätmetod väl lämpad för mätning av interaktionskvalitet. För mätning av interaktionskvalitet rekommenderas en kombinering av subjektiva och fysiologiska mätmetoder, detta innefattar en kombination av Functional near-infrared spectroscopy; en fysiologisk mätmetod som mäter hjärnaktiviteten med hjälp av ljuskällor och detektorer som fästs på frontalloben, Electrodermal activity; en fysiologisk mätmetod som mäter hjärnaktiviteten med hjälp av elektroder som fästs över skalpen och NASA task load index; en subjektiv, multidimensionell mätmetod som bygger på kortsortering och mäter uppfattad kognitiv belastning i en sammanhängande skala. Mätning med hjälp av dessa metoder kan resultera i en ökad interaktionskvalitet i interaktiva, fysiska och digitala gränssnitt. En uppskattning av interaktionskvalitet kan bidra till att fel vid interaktion minimeras, vilket innebär en förbättring av användares upplevelse vid interaktion.
Technical developments have led to the broadcasting of massive amounts of information, at high velocities. We must learn to handle this flow. To maximize the benefits of new technologies and avoid the problems that this immense information flow brings, interaction quality should be studied. We must adjust interfaces to the user because the user does not have the ability to adapt and sort overly large amounts of information. We must develop systems that make the human more efficient when using interfaces. To adjust the interfaces to the user needs and limitations, knowledge about human cognitive processes is required. When cognitive workload is studied it is important that a flexible, easily accessed and non assertive technique is used to get unbiased results. At the same time reliability is of great importance. To design interfaces with high interaction quality, a technique to evaluate these is required. The aim of this paper is to establish a method that is well suited for measurement of interaction quality. When measuring interaction quality, a combination of subjective and physiological methods is recommended. This comprises a combination of Functional near-infrared spectroscopy; a physiological measurement which measures brain activity using light sources and detectors placed on the frontal lobe, Electrodermal activity; a physiological measurement which measures brain activity using electrodes placed over the scalp and NASA task load index; a subjective, multidimensional measurement based on card sorting and measures the individual perceived cognitive workload on a continuum scale. Measuring with these methods can result in an increase in interaction quality in interactive, physical and digital interfaces. An estimation of interaction quality can contribute to eliminate interaction errors, thus improving the user's interaction experience.
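One of the recommended measures, the NASA task load index, reduces to a simple weighted average once the six ratings and the 15 pairwise-comparison weights are collected. The sketch below uses made-up values for a single respondent.

```python
# Weighted NASA-TLX score from example (invented) ratings and pairwise tallies.
ratings = {            # 0-100 rating per dimension
    "mental": 70, "physical": 20, "temporal": 55,
    "performance": 40, "effort": 65, "frustration": 35,
}
tally = {              # times each dimension was chosen in the 15 pairwise comparisons
    "mental": 5, "physical": 0, "temporal": 3,
    "performance": 2, "effort": 4, "frustration": 1,
}
assert sum(tally.values()) == 15
overall = sum(ratings[d] * tally[d] for d in ratings) / 15.0
print("weighted NASA-TLX score:", round(overall, 1))
```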
APA, Harvard, Vancouver, ISO, and other styles
42

Yang, Ying. "A Study of Predicted Energy Savings and Sensitivity Analysis." Thesis, 2013. http://hdl.handle.net/1969.1/151215.

Full text
Abstract:
The sensitivity of the important inputs and the savings prediction function reliability for the WinAM 4.3 software is studied in this research. WinAM was developed by the Continuous Commissioning (CC) group in the Energy Systems Laboratory at Texas A&M University. For the sensitivity analysis task, fourteen inputs are studied by adjusting one input at a time within ± 30% compared with its baseline. The Single Duct Variable Air Volume (SDVAV) system with and without the economizer has been applied to the square zone model. Mean Bias Error (MBE) and Influence Coefficient (IC) have been selected as the statistical methods to analyze the outputs that are obtained from WinAM 4.3. For the saving prediction reliability analysis task, eleven Continuous Commissioning projects have been selected. After reviewing each project, seven of the eleven have been chosen. The measured energy consumption data for the seven projects is compared with the simulated energy consumption data that has been obtained from WinAM 4.3. Normalization Mean Bias Error (NMBE) and Coefficient of Variation of the Root Mean Squared Error (CV (RMSE)) statistical methods have been used to analyze the results from real measured data and simulated data. Highly sensitive parameters for each energy resource of the system with the economizer and the system without the economizer have been generated in the sensitivity analysis task. The main result of the savings prediction reliability analysis is that calibration improves the model’s quality. It also improves the predicted energy savings results compared with the results generated from the uncalibrated model.
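The statistics named in this abstract have compact definitions; the helpers below follow the common ASHRAE Guideline 14 style formulations, which may differ in detail (sign conventions, normalization) from the exact forms used in the thesis, and the example numbers are invented.

```python
# Influence coefficient, NMBE and CV(RMSE) helpers with made-up example data.
import numpy as np

def influence_coefficient(out_base, out_pert, in_base, in_pert):
    """IC: relative change in output per relative change in input."""
    return ((out_pert - out_base) / out_base) / ((in_pert - in_base) / in_base)

def nmbe(measured, simulated):
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sum(measured - simulated) / (measured.size * measured.mean())

def cv_rmse(measured, simulated):
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return 100.0 * rmse / measured.mean()

meas = [820, 790, 860, 900, 780, 810]      # e.g. monthly kWh, invented numbers
sim  = [800, 805, 840, 880, 800, 790]
print(influence_coefficient(100.0, 92.0, 1.0, 1.3))   # +30% input, -8% output
print(nmbe(meas, sim), cv_rmse(meas, sim))
```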
APA, Harvard, Vancouver, ISO, and other styles
43

Kung, Tzu-Wen, and 龔子文. "Techniques Analysis for Probe Calibration and Sensitivity Improvement in IC-EMI Detection." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/20308353089068113149.

Full text
Abstract:
Master's thesis
Dayeh University
Department of Telecommunications Engineering, In-service Master's Program
95
Electromagnetic interference has become more severe due to the rapid development of digital technology, and it is regulated by every developed nation in the world. In order to comply with EMI requirements, the concept of EMI must be introduced from the beginning of product design. Manufacturers of electronic components must also take EMC requirements into consideration in component design. Though the presence of EMI in an IC is not clear cut because of its small dimensions, ICs are expected to be a major EMI source in electronic component systems. Further, the fast switching of digital signals is one of the main causes of emission. In view of the extensive use of ICs in modern products, the electromagnetic emission generated from ICs will increase exponentially. Therefore, the design of components and semi-finished goods must incorporate solutions for EMC. It is generally recognized internationally that EMC management processes and history for products can be categorized into 3 stages: the 1st stage covers systems (products such as information equipment and home appliances), the 2nd stage covers module certification (components such as network cards and optical disk drives), and the 3rd stage covers single electronic components, SoC, SiP and IC. Hence, the International Electrotechnical Commission (IEC) has issued the series of standard guidelines IEC 61967 for the monitoring and measurement of conducted and radiated emission from ICs. This work designed, analyzed and calibrated a magnetic probe based on Part 6 (Magnetic Probe Method) of IEC 61967. The designed probe was used to detect the signals from the IC's input/output pins, power input and the RF current of the grounding pin. This information is then used to predict the EMI characteristics of the electronic components, so that shielding and suppression of the IC's radiated emission can be incorporated in the initial stage of the IC design.
APA, Harvard, Vancouver, ISO, and other styles
44

Shih, Yu-Yang, and 施宇陽. "Design of High Sensitivity near-field probe and Analysis of probe Calibration." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/73759780342956080650.

Full text
Abstract:
Master's thesis
Feng Chia University
Industrial R&D Master's Program
98
Many highly susceptible components operating at low voltage or with high sensitivity may be affected by EMI noise and suffer degraded performance. As a consequence, EMI phenomena from ICs have become an issue for the semiconductor industry, including IC design, IC packaging, and manufacturing foundries. In order to achieve intra-system electromagnetic compatibility through optimal circuit layout and component placement, the EMI characteristics of critical integrated circuits must be taken into account during the system design stage. In this thesis, we have designed a high-sensitivity magnetic field probe with good spatial resolution using a semi-rigid coaxial coil configuration. In order to produce a more accurate measurement of the noise source location and the corresponding frequency band, a multi-tier coil is implemented to increase the captured magnetic flux. A balun and matching circuit are introduced to provide better impedance matching over the band of interest. The performance of the designed probe has been tested and compared with existing commercially available products.
APA, Harvard, Vancouver, ISO, and other styles
45

Lee, Kuo-Wei, and 李國威. "Design, Calibration and Tests of a High-Sensitivity Extended-Range Bonner Cylinder Spectrometer." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/35374092700436211360.

Full text
Abstract:
Doctoral dissertation
National Tsing Hua University
Institute of Nuclear Engineering and Science
104
Bonner spheres are widely used to determine the energy spectrum of a neutron field. With the introduction of a high-Z metal in the moderator as a neutron multiplier, the effective energy range of Bonner spheres can be extended to GeV neutrons. Because of the small active volume of the central probe, the detection efficiency of Bonner spheres is limited and impractical in certain field measurement applications. In this study, a set of Bonner cylinders was fabricated based on a high-efficiency cylindrical 3He proportional counter. The cylindrical arrangement substantially improved the detection efficiency of the spectrometer system, but inevitably introduced an angular dependence in the detector responses. Using a series of calculations and measurements, this study presents a systematic comparison between Bonner spheres and cylinders in terms of their response functions, detection efficiencies, angular dependences and spectrum unfolding. In addition, neutron dosemeters used for radiation protection purposes are commonly calibrated with 252Cf neutron sources and then used in a variety of workplaces. In this study, the effect of the neutron spectrum on the accuracy of dose measurements was investigated. A set of neutron spectra representing various neutron environments was selected to study the dose responses of a series of Bonner spheres, including standard and extended-range spheres. By comparing 252Cf-calibrated dose responses with reference values based on fluence-to-dose conversion coefficients, this study presents recommendations for neutron field characterization and appropriate correction factors for the responses of conventional neutron dosemeters used in environments with high-energy neutrons.
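The comparison described above (a 252Cf-calibrated reading versus a reference dose from fluence-to-dose conversion coefficients) can be reduced to a small folding exercise. The three-group fluences, response function and conversion coefficients below are invented numbers used only to show the bookkeeping, not data from the study.

```python
# Toy 3-group folding: detector reading, reference dose, and a field-specific
# correction factor for a 252Cf-calibrated dosemeter. All numbers are invented.
import numpy as np

phi_workplace = np.array([4.0e4, 2.5e4, 0.5e4])      # group fluences, workplace field (cm^-2)
phi_cf252     = np.array([0.5e4, 5.0e4, 0.1e4])      # group fluences, 252Cf calibration field

response = np.array([1.2, 3.0, 0.8])                 # counts per unit fluence per group
h_phi    = np.array([10.0, 400.0, 550.0])            # fluence-to-dose coefficients, pSv cm^2

def reading(phi):            # detector counts for a given field
    return float(response @ phi)

def true_dose(phi):          # reference dose from conversion coefficients (pSv)
    return float(h_phi @ phi)

calib_factor = true_dose(phi_cf252) / reading(phi_cf252)   # pSv per count, from 252Cf
inferred_dose = calib_factor * reading(phi_workplace)
correction = true_dose(phi_workplace) / inferred_dose
print("field-specific correction factor:", round(correction, 2))
```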
APA, Harvard, Vancouver, ISO, and other styles
46

Grimson, W. Eric L. "Why Stereo Vision is Not Always About 3D Reconstruction." 1993. http://hdl.handle.net/1721.1/5947.

Full text
Abstract:
It is commonly assumed that the goal of stereo vision is computing explicit 3D scene reconstructions. We show that very accurate camera calibration is needed to support this, and that such accurate calibration is difficult to achieve and maintain. We argue that for tasks like recognition, figure/ground separation is more important than 3D depth reconstruction, and demonstrate a stereo algorithm that supports figure/ground separation without 3D reconstruction.
APA, Harvard, Vancouver, ISO, and other styles
47

Cathey, Anna M. "The calibration, validation, and sensitivity analysis of DoSag an in-stream dissolved oxygen model /." 2005. http://purl.galileo.usg.edu/uga%5Fetd/cathey%5Fanna%5Fm%5F200505%5Fms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Tang, Yong. "Advancing hydrologic model evaluation and identification using multiobjective calibration, sensitivity analysis, and parallel computation." 2007. http://www.etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-1753/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Chang, Chin-Yang, and 張進揚. "Study of Touch Sensitivity Calibration of Resistive Touch Panel for Industrial Smart Handheld Devices." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/45931247523962538225.

Full text
Abstract:
Master's thesis
National Changhua University of Education
Department of Electrical Engineering
100
This thesis studied the touch sensitivity calibration of resistive touch panels for industrial smart handheld devices. In the development of an industrial smart handheld device, resistive touch panels have to undergo repeated testing to set the touch sensitivity. This study designed a low constant-current power supply to determine the suitable resistance range of a 4-wire resistive touch panel, and imported that resistance range into the industrial smart handheld device through a resistive touch panel simulator. By adjusting the sensitivity calibration of the touch controller together with the touch resistance, a reasonable range of touch sensitivity can be found. Based on the results, the process can reduce testing time by 50%. It not only reduces the number of test cycles in the development stage but also saves labor.
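The core measurement idea, inferring the touch resistance from the voltage developed across the touch point at a known constant current and checking it against a sensitivity window, is sketched below. The current, voltages and thresholds are invented example values, not figures from the thesis.

```python
# Ohm's-law sketch: touch resistance from a constant test current, checked
# against an assumed acceptable resistance window.
TEST_CURRENT_A = 1.0e-3                 # assumed constant current source, 1 mA
R_MIN_OHM, R_MAX_OHM = 300.0, 1800.0    # assumed touch-resistance window

def touch_resistance(measured_voltage_v, current_a=TEST_CURRENT_A):
    return measured_voltage_v / current_a

for v in (0.25, 0.9, 2.1):              # example voltages read across the touch point
    r = touch_resistance(v)
    ok = R_MIN_OHM <= r <= R_MAX_OHM
    print(f"{v:4.2f} V -> {r:7.1f} ohm -> {'within' if ok else 'outside'} sensitivity window")
```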
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Huang-Hsien, and 陳煌憲. "The Analysis of Calibration and Sensitivity Study of Air Flow in AMCA Wind Tunnel." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/88755423689052622892.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Engineering and System Science
98
Taiwan plays an important role in the global fan and forced-draught blower market, and therefore needs to establish a dependable standard basis for fan flow measurement. AMCA wind tunnels are generally used in fan performance tests for heat dissipation, for example for CPUs in the IT industry. Flow meters are usually sent to the national standards inspection bureau for calibration, but this has the disadvantage of high cost. This thesis studies feasible methods for calibrating the instruments and equipment in-house. For small flows, we use a device based on the compressibility factor equation (PV = ZmRT) to calibrate the air flow. In order to calculate the total amount of air passing through the wind tunnel precisely, we designed a tank; over a measured interval, the air passing through the wind tunnel is collected into the tank. We observe the difference in water level in a U-tube manometer and calculate the pressure in the tank. For higher flows, we use other methods, for example a wind velocity meter. Based on the ideal gas equation, we calculate the error between the predicted flow and the flow measured by the wind tunnel. Finally, we discuss the pressure difference at different points.
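A worked example of the PV = ZmRT bookkeeping described above is given below: the tank pressure is obtained from the manometer water-column height, and the air mass collected over a timed interval yields an average mass flow. All values (tank volume, height difference, timing, compressibility factor) are illustrative placeholders, not measurements from the thesis.

```python
# PV = ZmRT mass balance for air collected in a tank, with invented numbers.
R_AIR = 287.05              # specific gas constant of dry air, J/(kg K)
RHO_WATER, G = 998.0, 9.81  # water density (kg/m^3) and gravity (m/s^2)

tank_volume_m3 = 0.5        # assumed tank volume
temperature_k = 298.15      # assumed air temperature in the tank
z_factor = 0.9996           # compressibility factor, close to 1 near ambient conditions
p_atm = 101325.0            # atmospheric pressure, Pa
dh_manometer_m = 0.12       # water-column height difference read on the U manometer
fill_time_s = 60.0          # time during which air was diverted into the tank

p_tank = p_atm + RHO_WATER * G * dh_manometer_m     # absolute tank pressure, Pa

def air_mass(p_pa, v_m3, t_k=temperature_k, z=z_factor):
    """Mass of air in a volume from PV = ZmRT."""
    return p_pa * v_m3 / (z * R_AIR * t_k)

collected_kg = air_mass(p_tank, tank_volume_m3) - air_mass(p_atm, tank_volume_m3)
print("collected air mass (kg):", round(collected_kg, 5))
print("average mass flow (kg/s):", collected_kg / fill_time_s)
```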
APA, Harvard, Vancouver, ISO, and other styles