Dissertations / Theses on the topic 'Calibration methods'

Consult the top 50 dissertations / theses for your research on the topic 'Calibration methods.'

1

Andersson, Greger. "Novel nonlinear multivariate calibration methods /." Stockholm : Tekniska högsk, 1998. http://www.lib.kth.se/abs98/ande0528.pdf.

2

Weining, Wang. "Adaptive methods for risk calibration." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2012. http://dx.doi.org/10.18452/16585.

Abstract:
This thesis comprises four chapters. The first chapter, entitled "Local Quantile Regression", is summarized as follows: quantile regression is a technique to estimate conditional quantile curves. It provides a comprehensive picture of a response contingent on explanatory variables. In a flexible modeling framework, a specific form of the conditional quantile curve is not fixed a priori. This motivates a local parametric rather than a global fixed model fitting approach. A nonparametric smoothing estimate of the conditional quantile curve requires balancing local curvature against stochastic variability. In the first essay, we suggest a local model selection technique that provides an adaptive estimate of the conditional quantile regression curve at each design point. Theoretical results show that the proposed adaptive procedure performs as well as an oracle that would minimize the local estimation risk for the problem at hand. We illustrate the performance of the procedure by an extensive simulation study and consider a couple of applications: to tail dependence analysis for the Hong Kong stock market and to analysis of the distributions of the risk factors of temperature dynamics.
3

Kim, Seon Joo. "Radiometric calibration methods from image sequences." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2008. http://dc.lib.unc.edu/u?/etd,2019.

Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2008.
Title from electronic title page (viewed Feb. 17, 2009). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
4

Esquivel, Sandro [Verfasser]. "Eye-to-Eye Calibration - Extrinsic Calibration of Multi-Camera Systems Using Hand-Eye Calibration Methods / Sandro Esquivel." Kiel : Universitätsbibliothek Kiel, 2015. http://d-nb.info/1073150615/34.

5

Wiegand, Michael J. "Comparison of unconstrained and constrained calibration methods." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26935.

Abstract:
Approved for public release: Distribution is unlimited
The idea of using a passive end point motion constraint to calibrate robot manipulators is of particular interest because no measurement equipment is required. The accuracy attained using this method is compared to the accuracy attained by an unconstrained calibration using computer simulated measurements. A kinematic model is established for each configuration using the Denavit-Hartenberg methodology. The kinematic equations are formulated and used in the computer simulated calibration to determine the actual kinematic parameters of the manipulator. The results are discussed in terms of the effect of measurement noise and the number of experimental observations on the accuracy of parameter identification.
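
For readers unfamiliar with the convention mentioned above, the sketch below shows the standard Denavit-Hartenberg link transform and how per-link transforms chain into forward kinematics. It is a generic Python/NumPy illustration, not code from the thesis; in a real calibration the (theta, d, a, alpha) values would come from the identified kinematic model.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link in the standard
    Denavit-Hartenberg convention: Rz(theta) Tz(d) Tx(a) Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the per-link transforms; dh_params is a list of
    (theta, d, a, alpha) tuples, one per joint."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # base-to-end-effector pose

Calibration then amounts to adjusting offsets on these parameters until the predicted end point matches the measured (or constrained) one.
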
6

Ward, Matthew. "Automatic-calibration methods for internal combustion engines." Thesis, University of Bath, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418598.

7

Uudelepp, Oscar. "Positional calibration methods for linear pipetting robot." Thesis, Uppsala universitet, Institutionen för elektroteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-414666.

Abstract:
This thesis aims to investigate and develop two positional calibration methods that can be applied to a linear pipetting robot. The goal of the calibration is to detect displacements of objects located in the robot's reference system and to estimate their new positions. One of the methods utilizes the pressure system that is mounted on the robot's arm. The pressure system is able to detect surfaces by blowing air through a pipette against a desired surface. Positional information about a targeted object is acquired by using this surface detection feature against an extruded square landmark that acts as a reference for estimating displacements. The other method uses a barcode scanning camera, whose images are used to detect and retrieve positional information from ArUco markers. The position of the targeted object is estimated by tracking the movement of the ArUco marker's position and orientation. Tests were made in order to analyse the performance of both methods and to verify that the requirement of 0.1 mm accuracy and precision could be obtained. The tests were limited to analysing the methods' performance on stationary targets to guarantee that the methods did not detect incorrect displacements. It was found that the camera method could fulfil the requirement for estimating XY-coordinates by using multiple images and placing the ArUco marker within a reasonable distance of the targeted object. However, the camera method was not accurate when estimating the Z-coordinates of objects. The pressure method, by contrast, fulfilled the requirement for estimating Z-coordinates, but its ability to estimate the XY-coordinates of an object was not sufficient. A recommendation would be to combine both methods so that they compensate for each other, using the camera method for estimating the XY-coordinates and the pressure method for estimating the Z-coordinates.
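
To make the camera-based approach concrete, here is a minimal marker-detection sketch using OpenCV's aruco module (API as in opencv-contrib-python 4.x before 4.7; newer releases wrap this in cv2.aruco.ArucoDetector). The file name, marker size and camera intrinsics below are placeholders, not values from the thesis.

```python
import cv2
import numpy as np

# Placeholder intrinsics; a real system would use calibrated values.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("workspace.png")  # hypothetical camera frame
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
if ids is not None:
    # 20 mm markers: pose of each marker in the camera frame
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, 0.02, K, dist)
    print("marker", ids[0][0], "translation (m):", tvecs[0].ravel())

Tracking how these translation and rotation vectors drift between runs yields the kind of XY displacement estimate the abstract describes; as noted there, depth (Z) from a single camera is the weak axis.
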
8

Ostrowski, Kamil. "Optimal dynamic calibration methods for powertrain controllers." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2014401/.

Abstract:
Emission legislation for passenger cars has become more stringent and the increasing demand for reduced fuel consumption has resulted in the introduction of complex new engine and after-treatment technologies involving significantly more control parameters. Vehicle manufacturers employ a time consuming engine parameter calibration process to optimise vehicle performance through the development of engine management system control maps. The traditional static calibration methods require an exponential increase in calibration time with additional calibration parameters and control objectives. To address this issue, this thesis develops and investigates a novel Inverse Optimal Behaviour Based Dynamic Calibration methodology and its application to diesel engines. This multi-stage methodology is based on dynamic black-box modelling and dynamic system optimisation. Firstly the engine behaviour is characterized by black-box models, based on data obtained in a rapid data collection process, for accurate dynamic representation of a subject engine. Then constrained dynamic optimisation is employed to find the optimal input-output behaviour. Finally the optimal input-output behaviour is used to identify feedforward dynamic controllers. The current study applies the methodology to an industrial state-of-the-art WAVERT model of a 1.5 litre Turbo EU6.1 Diesel engine acting as a virtual engine. The approach directly yields a feedforward controller in a nonlinear polynomial structure which can either be directly implemented in the engine-management system or converted to a dynamic or static look-up table format. The results indicate that the methodology is superior to the conventional static calibration approach in both computing efficiency and control performance. A low-cost Transient Testing Platform is presented in this work to carry out transient data collection experiments on a steady-state dynamometer with application to non-linear engine and emissions modelling using State Space Neural Networks. This modelling technique is shown to be superior to the polynomial models and achieves similar performance to non-linear autoregressive with exogenous input neural (NARMAX) network models. Numerical Dynamic Programming is investigated in a simplified engine calibration problem for a virtual engine to potentially improve the dynamic calibration optimisation stage. In a second study the novel dynamic calibration methodology is applied to the airpath control of a 3.0L Jaguar Land Rover (JLR) turbocharged Diesel engine utilizing a direct optimisation approach and State Space Neural Network models. A complete experimental application of the methodology is demonstrated in a vehicle where the vehicle-implemented calibration is obtained in a one-shot process solely from data obtained from the fast dynamic dynamometer testing. The results obtained demonstrate the potential of this methodology for the rapid development of efficient dynamic feedforward controllers based on limited data from the engine test bed.
9

Rodríguez, Cuesta Mª José. "Limit of detection for second-order calibration methods." Doctoral thesis, Universitat Rovira i Virgili, 2006. http://hdl.handle.net/10803/9013.

Abstract:
Analytical chemistry can be split into two main types, qualitative and quantitative. Most modern analytical chemistry is quantitative. Popular sensitivity to health issues is aroused by the mountains of government regulations that use science to, for instance, provide public health information to prevent disease caused by harmful exposure to toxic substances. The concept of the minimum amount of an analyte or compound that can be detected or analysed appears in many of these regulations (for example, to discard the presence of traces of toxic substances in foodstuffs) generally as a part of method validation aimed at reliably evaluating the validity of the measurements.

The lowest quantity of a substance that can be distinguished from the absence of that substance (a blank value) is called the detection limit or limit of detection (LOD). Traditionally, in the context of simple measurements where the instrumental signal only depends on the amount of analyte, a multiple of the blank value is taken to calculate the LOD (traditionally, the blank value plus three times the standard deviation of the measurement). However, the increasing complexity of the data that analytical instruments can provide for incoming samples leads to situations in which the LOD cannot be calculated as reliably as before.
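
As a worked illustration of the classical univariate rule just described (a generic sketch, not taken from the thesis), the LOD in concentration units follows from the blank statistics and the calibration slope:

```python
import numpy as np

def lod_three_sigma(blank_signals, slope):
    """Classical univariate LOD: blank mean plus three times the
    blank standard deviation, converted via the calibration slope."""
    blank = np.asarray(blank_signals, dtype=float)
    y_lod = blank.mean() + 3.0 * blank.std(ddof=1)  # signal threshold
    return (y_lod - blank.mean()) / slope           # = 3*SD / slope

# Hypothetical example: ten blank readings, slope of 2.5 signal/ppm
blanks = [0.11, 0.09, 0.10, 0.12, 0.08, 0.10, 0.11, 0.09, 0.10, 0.12]
print(f"LOD = {lod_three_sigma(blanks, 2.5):.4f} ppm")
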

Measurements, instruments and mathematical models can be classified according to the type of data they use. Tensorial theory provides a unified language that is useful for describing the chemical measurements, analytical instruments and calibration methods. Instruments that generate two-dimensional arrays of data are second-order instruments. A typical example is a spectrofluorometer, which provides a set of emission spectra obtained at different excitation wavelengths.

The calibration methods used with each type of data have different features and complexity. In this thesis, the most commonly used calibration methods are reviewed, from zero-order (or univariate) to second-order (or multi-linear) calibration models. Second-order calibration models are treated in detail since they have been applied in the thesis.

Concretely, the following methods are described:
- PARAFAC (Parallel Factor Analysis)
- ITTFA (Iterative Target Transformation Factor Analysis)
- MCR-ALS (Multivariate Curve Resolution-Alternating Least Squares)
- N-PLS (Multi-linear Partial Least Squares)

Analytical methods should be validated. The validation process typically starts by defining the scope of the analytical procedure, which includes the matrix, target analyte(s), analytical technique and intended purpose. The next step is to identify the performance characteristics that must be validated, which may depend on the purpose of the procedure, and the experiments for determining them. Finally, validation results should be documented, reviewed and maintained (if not, the procedure should be revalidated) as long as the procedure is applied in routine work.

The figures of merit of a chemical analytical process are 'those quantifiable terms which may indicate the extent of quality of the process. They include those terms that are closely related to the method and to the analyte (sensitivity, selectivity, limit of detection, limit of quantification, ...) and those which are concerned with the final results (traceability, uncertainty and representativity)' (Inczédy et al., 1998). The aim of this thesis is to develop theoretical and practical strategies for calculating the limit of detection for complex analytical situations. Specifically, I focus on second-order calibration methods, i.e. when a matrix of data is available for each sample.

The methods most often used for making detection decisions are based on statistical hypothesis testing and involve a choice between two hypotheses about the sample. The first hypothesis is the "null hypothesis": the sample is analyte-free. The second hypothesis is the "alternative hypothesis": the sample is not analyte-free. In the hypothesis test there are two possible types of decision errors. An error of the first type occurs when the signal for an analyte-free sample exceeds the critical value, leading one to conclude incorrectly that the sample contains a positive amount of the analyte. This type of error is sometimes called a "false positive". An error of the second type occurs if one concludes that a sample does not contain the analyte when it actually does and it is known as a "false negative". In zero-order calibration, this hypothesis test is applied to the confidence intervals of the calibration model to estimate the LOD as proposed by Hubaux and Vos (A. Hubaux, G. Vos, Anal. Chem. 42: 849-855, 1970).
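
The Hubaux-Vos construction can be sketched numerically. The version below is a simplified, non-iterative approximation (it evaluates the prediction error at zero concentration rather than iterating to the LOD itself), intended only to show how the alpha and beta error rates enter; it is not code from the thesis.

```python
import numpy as np
from scipy import stats

def hubaux_vos_lod(conc, signal, alpha=0.05, beta=0.05):
    """Approximate LOD from a univariate calibration line, following
    the Hubaux-Vos idea: a decision level set by the false-positive
    rate alpha, pushed up by the false-negative rate beta."""
    x = np.asarray(conc, dtype=float)
    y = np.asarray(signal, dtype=float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    s = np.sqrt(np.sum(resid**2) / (n - 2))    # residual std error
    sxx = np.sum((x - x.mean())**2)
    # prediction std error for a single future observation at x = 0
    se0 = s * np.sqrt(1.0 + 1.0 / n + x.mean()**2 / sxx)
    t_a = stats.t.ppf(1.0 - alpha, n - 2)
    t_b = stats.t.ppf(1.0 - beta, n - 2)
    return (t_a + t_b) * se0 / slope           # LOD in conc. units
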

One strategy for estimating multivariate limits of detection is to transform the multivariate model into a univariate one. This strategy has been applied in this thesis in three practical applications:
1. LOD for PARAFAC (Parallel Factor Analysis).
2. LOD for ITTFA (Iterative Target Transformation Factor Analysis).
3. LOD for MCR-ALS (Multivariate Curve Resolution - Alternating Least Squares)

In addition, the thesis includes a theoretical contribution with the proposal of a sample-dependent LOD in the context of multivariate (PLS) and multi-linear (N-PLS) Partial Least Squares.
10

Charbachi, Peter, and Filippo Ferrario. "Methods for Automatic Hydraulics Calibration in Construction Equipment." Thesis, Mälardalens högskola, Inbyggda system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-40341.

Abstract:
In this thesis we investigate the problem of automatic calibration and control of hydraulic components in the domain of construction equipment. Methods that are able to replace a costly manual approach with an automatic one are investigated and evaluated. The thesis aims to identify what methods are available for achieving this goal, as well as to evaluate the performance and applicability of such methods in the domain of construction equipment. The literature indicates that a great focus is put on learning a model of the plant at run time in order to provide accurate control. Common approaches to the problem are the Recursive Least Squares method and PID controllers for nonlinear systems, but other methods are also present, such as the Nodal Link Perceptron Network (NLPN). The methods chosen for comparison are: the existing method of manually calibrating two set points for start and end current and interpolating between them; a PI controller with a static line inverse model; a PI controller with a static curve inverse model; a PI controller with an NLPN adaptive inverse model; and lastly, a completely NLPN-based control strategy. The methods were implemented in Matlab Simulink and evaluated in simulations based on data collected from real wheel loaders in the construction equipment domain, produced by Volvo CE. The simulations were performed on data from three machines and were evaluated twice for the adaptive methods in order to evaluate how well the methods improved. The results were then evaluated in terms of average absolute error, together with a discussion of the behaviour shown in the plots. The evaluations indicate that the most effective method for control is the PI controller using a static line inverse model. This method produces the smallest average error for both actions evaluated, lifting and lowering of the boom, while the complete NLPN solution provides the worst results.
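
Since the abstract leans on Recursive Least Squares for on-line plant modelling, a generic RLS update is sketched below (standard textbook form with a forgetting factor, not the thesis implementation):

```python
import numpy as np

class RecursiveLeastSquares:
    """On-line estimate of theta in y = phi . theta, with a
    forgetting factor lam < 1 to track slowly drifting plants."""
    def __init__(self, n_params, lam=0.99, p0=1e3):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * p0          # parameter covariance
        self.lam = lam

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        gain = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, phi @ self.P)) / self.lam
        return self.theta

Each new (regressor, measurement) pair refines the model, which is what lets adaptive inverse models of this kind improve over repeated runs.
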
11

Eriksson, Hans. "Output Power Calibration Methods for an EGPRS Mobile Platform." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2023.

Abstract:

This thesis deals with output power calibration of a mobile platform that supports EGPRS. Two different topics are examined. First, different measurement methods are compared concerning cost efficiency, accuracy, and speed; then measurements are carried out on a mobile platform.

The output power from the mobile platform is controlled by three parameters and the influence on the output power when varying those parameters is investigated and presented. Furthermore, two methods of improving the speed of the calibration are presented.

The first one aims to decrease the number of bursts to average over as much as possible. The conclusion is that 10-20 bursts are enough for GMSK modulation and about five bursts for 8PSK modulation. The purpose of the second investigation is to examine the possibility to measure the output power in one modulation and frequency band, and then calculate the output power in the other bands. The conclusion in this case is that, based on the units investigated, it is possible for some values of the parameters and in some frequency bands. However, more units need to be included in the basic data for decision-making and it is possible that the hardware variation is too large.

12

Cibere, Joseph John. "Calibration transfer methods for feedforward neural network based instruments." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ63857.pdf.

13

Biggar, Stuart Frick. "In-flight methods for satellite sensor absolute radiometric calibration." Diss., The University of Arizona, 1990. http://hdl.handle.net/10150/185069.

Abstract:
Three methods for the in-flight absolute radiometric calibration of satellite sensors are presented. The Thematic Mapper (TM) on the Landsat satellites and the HRV on the SPOT satellite have been calibrated using the three methods at the White Sands Missile Range in New Mexico. Ground and airborne measurements of ground reflectance, radiance, atmospheric, and weather parameters are made coincident with satellite image acquisition. The data are analyzed to determine inputs to radiative transfer codes. The codes compute the radiance at the sensor entrance pupil, which is compared to the average digital count from the measured ground area. The three methods are the reflectance-based, radiance-based and irradiance-based methods. The relevant theory of radiative transfer through an atmosphere is reviewed. The partition of extinction optical depth into Rayleigh, aerosol and absorption optical depths is discussed. The reflectance-based method is described along with the assumptions made. The accuracy of the reflectance-based method is no better than that of the ground reflectance measurement, which is made in reference to a standard of spectral reflectance. The radiance-based method is described. The standard for the radiance method is a standard of spectral irradiance used to calibrate a radiometer. The calibration of a radiometer is discussed along with the use of radiative transfer computations to correct for the residual atmosphere above the radiometer. The irradiance-based method is described. It uses the measurement of the downward direct and total irradiance at ground level to determine the apparent reflectance seen by a sensor. This method uses an analytic approximation to compute the reflectance without the use of an "exact" radiative transfer code. The direct-to-total irradiance ratio implicitly gives the description of the scattering normally calculated from the size distribution and the assumption of Mie scattering by the aerosols. The three methods give independent results, which should allow for the detection of possible systematic errors in any of the methods. All three methods give results within the estimated errors of each method on most calibration dates. We expect the results of our sensor calibrations to be within five percent of the actual value.
14

Gong, Zitong. "Calibration of expensive computer models using engineering reliability methods." Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3028587/.

Abstract:
The prediction ability of complex computer models (also known as simulators) relies on how well they are calibrated to experimental data. History Matching (HM) is a form of model calibration for computationally expensive models. HM sequentially cuts down the input space to find the fitting input domain that provides a reasonable match between model output and experimental data. A considerable number of simulator runs are required for typical model calibration. Hence, HM involves Bayesian emulation to reduce the cost of running the original model. Despite this, the generation of samples from the reduced domain at every iteration has remained an open and complex problem: current research has shown that the fitting input domain can be disconnected, with nontrivial topology, or be orders of magnitude smaller than the original input space. Analogous to a failure set in the context of engineering reliability analysis, this work proposes to use Subset Simulation - a widely used technique in engineering reliability computations and rare event simulation - to generate samples on the reduced input domain. Unlike Direct Monte Carlo, Subset Simulation progressively decomposes a rare event, which has a very small probability of occurrence, into sequential less rare nested events. The original Subset Simulation uses a Modified Metropolis algorithm to generate the conditional samples that belong to intermediate less rare events. This work also considers different Markov Chain Monte Carlo algorithms and compares their performance in the context of expensive model calibration. Numerical examples are provided to show the potential of the embedded Subset Simulation sampling schemes for HM. The 'climb-cruise engine matching' illustrates that the proposed HM using Subset Simulation can be applied to realistic engineering problems. Considering further improvements of the proposed method, a classification method is used to ensure that the emulation on each disconnected region gets updated. Uncertainty quantification of expert-estimated correlation matrices helps to identify a mathematically valid (positive semi-definite) correlation matrix between resulting inputs and observations. Further research is required to explicitly address the model discrepancy as well as to take the correlation between model outputs into account.
15

Chen, Yousheng. "Model calibration methods for mechanical systems with local nonlinearities." Doctoral thesis, Linnéuniversitetet, Institutionen för maskinteknik (MT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-57638.

Abstract:
Most modern product development utilizes computational models. With increasing demands on reducing the product development lead-time, it becomes more important to improve the accuracy and efficiency of simulations. In addition, to improve product performance, a lot of products are designed to be lighter and more flexible, thus more prone to nonlinear behaviour. Linear finite element (FE) models, which still form the basis of numerical models used to represent mechanical structures, may not be able to predict structural behaviour with necessary accuracy when nonlinear effects are significant. Nonlinearities are often localized to joints or boundary conditions. Including nonlinear behaviour in FE-models introduces more sources of uncertainty and it is often necessary to calibrate the models with the use of experimental data. This research work presents a model calibration method that is suitable for mechanical systems with structural nonlinearities. The methodology concerns pre-test planning, parameterization, simulation methods, vibrational testing and optimization. The selection of parameters for the calibration requires physical insights together with analyses of the structure; the latter can be achieved by use of simulations. Traditional simulation methods may be computationally expensive when dealing with nonlinear systems; therefore an efficient fixed-step state-space based simulation method was developed. To gain knowledge of the accuracy of different simulation methods, the bias errors for the proposed method as well as other widespread simulation methods were studied and compared. The proposed method performs well in comparison to other simulation methods. To obtain precise estimates of the parameters, the test data should be informative of the parameters chosen and the parameters should be identifiable. Test data informativeness and parameter identifiability are coupled and they can be assessed by the Fisher information matrix (FIM). To optimize the informativeness of test data, a FIM based pre-test planning method was developed and a multi-sinusoidal excitation was designed. The steady-state responses at the side harmonics were shown to contain valuable information for model calibration of FE-models representing mechanical systems with structural nonlinearities. In this work, model calibration was made by minimizing the difference between predicted and measured multi-harmonic frequency response functions using an efficient optimization routine. The steady-state responses were calculated using the extended multi-harmonic balance method. When the parameters were calibrated, a k-fold cross validation was used to obtain parameter uncertainty. The proposed model calibration method was validated using two test-rigs, one with a geometrical nonlinearity and one with a clearance type of nonlinearity. To attain high quality data efficiently, the amplitude of the forcing harmonics was controlled at each frequency step by an off-line force feedback algorithm. The applied force was then measured and used in the numerical simulations of the responses. It was shown in the validation results that the predictions from the calibrated models agree well with the experimental results. In summary, the presented methodology concerns both theoretical and experimental aspects as it includes methods for pre-test planning, simulations, testing, calibration and validation. 
As such, this research work offers a complete framework and contributes to more effective and efficient analyses on mechanical systems with structural nonlinearities.
16

Mobley, Paul R. "Use of a priori information to produce more effective, automated chemometrics methods /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8549.

17

Pisano, William James. "Computationally efficient control methods and hardware for autonomous antenna calibration." Diss., Connect to online resource, 2005. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433462.

18

McKnight, Patrick Everett 1966. "Calibration of psychological measures: An illustration of three quantitative methods." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282437.

Abstract:
The scores or metrics from psychological measures are rarely interpretable. Uninterpretable metrics result in poorly understood psychological research findings. In response to this problem, several methods are proposed that render metrics more meaningful. The methods employed are calibration procedures. Three calibration procedures are illustrated that prove to be extremely powerful in making the metrics of two related measures more understandable. Establishing the behavioral implications of the scores, computing just noticeable differences, and calibrating between measures are the three procedures described and illustrated. For the purposes of illustration, two measures of Attention Deficit Hyperactivity Disorder (ADHD) are used in the calibration procedures. These two measures are often used interchangeably without regard to their relationship with one another. The three procedures and the results of each are discussed in detail.
19

Anderson, Karen. "Temporal variability in calibration target reflectance : methods, models and applications." Thesis, University of Southampton, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419019.

20

Cirovic, Dragan A. "Evaluation of some multivariate calibration methods and their chemometric applications." Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264093.

21

Long, Alexander. "Calibration methods of the neutron detector at Florida State University." Tallahassee, Fla. : Florida State University, 2009. http://purl.fcla.edu/fsu/lib/digcoll/undergraduate/honors-theses/244588.

22

Keyetieu, Nlowe Rabine Manel. "Calibration of multi-beam echo sounder systems by inverse methods." Thesis, Brest, École nationale supérieure de techniques avancées Bretagne, 2018. http://www.theses.fr/2018ENTA0012.

Abstract:
Multi-beam echo sounders are devices which are used to compute the bathymetric depth. The principle consists in sending an acoustic signal to the seabed; the signal is reflected and received by the sounder, which records the two-way travel time. This two-way travel time, combined with the speed of sound, is used to derive the depth. Measurement systems are generally constituted of a GNSS receiver which gives the survey platform position, an inertial measurement unit which gives the orientation, and the sounder itself. The data merge of these different sensors enables georeferenced soundings in a global frame. For improving the quality of the process pertaining to the merging of data, commonly termed georeferencing, geometrical alignment and temporal synchronization have to be performed between devices of the measurement system. These operations are referred to as calibration. There exist three main parameters to determine during this process of calibration: the latency between the reference time-tag clocks of each sensor used (GNSS, inertial measurement unit, sounder), the lever arms between the sounder acoustic center and the positioning reference point of the survey platform, and the angular misalignment between the sounder and the inertial unit reference frames. This thesis proposes new methods for the estimation of these parameters. In contrast to existing methods of calibration, referred to as traditional, the proposed methods are automatic, rigorous and do not depend on the user.
23

Woodward, Steven T. "Springback Calibration of Sheet Metal Components Using Impulse Forming Methods." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1306683543.

24

Grigorová, Eva. "Spectroscopic methods for concentration measurements and calibration of reactive gases." Thesis, Lille 1, 2010. http://www.theses.fr/2010LIL10152/document.

Abstract:
The work is divided into four thematic parts describing four independently performed experiments. The analyses of the asymmetric ν4 vibration band and the symmetric ν2 band of the FCO2 radical, a significant intermediate product of the degradation processes of halogenated hydrocarbons, were performed for the first time within this work. The detailed analysis led to the determination of the rotational constants, centrifugal distortion constants and fine structure constants for both bands. For the first time, we performed the unambiguous identification of the radical ion CS+ by high resolution millimetre-wave spectroscopy in the frequency range 414 - 622 GHz. The global analysis allowed us to accurately determine the values of the rotational constant as well as the fine structure constant. Experiments were also performed to measure spectra of the cyanides BrCN and CH3CN using time-resolved Fourier transform infrared spectroscopy. These molecules, as well as their decomposition products in the low temperature plasma environment, were carefully studied. Finally, I have studied the ecological impact of ammonia (NH3) on the environment and the influence of trees on the amount of ammonia in the air. For this purpose, an optoacoustic cell was designed and assembled to measure trace amounts of ammonia and other gaseous substances.
25

Herty, Andreas. "Micron precision calibration methods for alignment sensors in particle accelerators." Thesis, Nottingham Trent University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.510170.

26

Elfeky, Ahmed. "Methods of calibration for different functions of a SCR-system." Thesis, KTH, Reglerteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226592.

Abstract:
The goal of this research is to compare different methods of calibration in order to tune the parameters of the pumping and tank heater monitoring functions of the AdBlue Delivery Module of a Selective Catalytic Reduction (SCR) system. The goal of the SCR system is to reduce the emission of NOx gases, which are considered greenhouse gases. In a first step, while calibrating the parameters of the pumping function, a real-time calibration method has been used. The advantage of this process is that a detailed model of the system is not needed to tune it. Then, the tank heater monitoring function has been calibrated through simulations. The understanding of the system is better in this case, which could help tune it more effectively. The results show that both methods should ensure the proper functioning of the system. However, the parameters found in this study could not be fully approved without being tested on a vehicle, in real-life conditions. Moreover, as the priority is to avoid malfunction of the system, the chosen parameters might not be optimal in terms of performance. With these two methods, most such systems could be calibrated. The choice of method should be made according to the initial level of knowledge of the object of study.
27

Ghassemian, Alireza. "Robust Statistical Methods for Measurement Calibration in Large Electric Power Systems." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/40527.

Abstract:
The objective of the Remote Measurements Calibration (RMC) method is to minimize systematic errors through an appropriate scaling procedure. A new method for RMC has been developed. This method solves the problems of observability, multiplicity of solutions, and ambiguity of reference points associated with the method proposed by Adibi et al. [6-9]. The new algorithm uses the simulated annealing technique together with the matroid method to identify and minimize the number of RTUs (Remote Terminal Units) required to observe the system. After field calibration, these RTUs provide measurements that are used to estimate the whole state of the system. These estimates are then used as a reference for remotely calibrating the remaining RTUs. The calibration coefficients are estimated by means of a highly robust estimator, namely the Least Median of Squares (LMS) estimator. The calibration method is applicable to large systems by means of network tearing and dynamic programming. The number of field calibrations can be decreased further whenever multiple voltage measurements at the same buses are available. The procedure requires that the measurement biases are estimated from recorded metered values when buses, lines, or transformers are disconnected. It also requires the application of a robust comparative voltage calibration method. To this end, a modified Friedman test has been developed and its robustness characteristics investigated.
28

Soumah, Lucile. "Development, analysis and calibration methods for the dielectric characterization of biomaterials." Thesis, Uppsala universitet, Fasta tillståndets elektronik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-265126.

29

Garbuno, Inigo A. "Stochastic methods for emulation, calibration and reliability analysis of engineering models." Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3026757/.

Abstract:
This dissertation examines the use of non-parametric Bayesian methods and advanced Monte Carlo algorithms for the emulation and reliability analysis of complex engineering computations. Firstly, the problem lies in the reduction of the computational cost of such models and the generation of posterior samples for the Gaussian process (GP) hyperparameters. In a GP, as the flexibility of the mechanism to induce correlations among training points increases, the number of hyperparameters increases as well. This leads to multimodal posterior distributions. Typical variants of MCMC samplers are not designed to overcome multimodality. Maximum posterior estimates of hyperparameters, on the other hand, do not guarantee a global optimiser. This presents a challenge when emulating expensive simulators from small data sets. Thus, new MCMC algorithms are presented which allow the use of full Bayesian emulators by sampling from their respective multimodal posteriors. Secondly, in order for these complex models to be reliable, they need to be robustly calibrated to experimental data. History matching solves the calibration problem by discarding regions of the input parameter space. This allows one to determine which configurations are likely to replicate the observed data. In particular, the GP surrogate model's probabilistic statements are exploited, and the data assimilation process is improved. Thirdly, as sampling-based methods are increasingly being used in engineering, variants of sampling algorithms for other engineering tasks are studied, namely reliability-based methods. Several new algorithms to solve these three fundamental problems are proposed, developed and tested in both illustrative examples and industrial-scale models.
30

Cui, Chenhao. "Nonlinear multiple regression methods for spectroscopic analysis : application to NIR calibration." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10058694/.

Abstract:
Chemometrics has been applied to analyse near-infrared (NIR) spectra for decades. Linear regression methods such as partial least squares (PLS) regression and principal component regression (PCR) are simple and widely used solutions for spectroscopic calibration. My dissertation connects spectroscopic calibration with nonlinear machine learning techniques. It explores the feasibility of applying nonlinear methods for NIR calibration. Investigated nonlinear regression methods include least squares support vector machine (LS-SVM), Gaussian process regression (GPR), Bayesian hierarchical mixture of linear regressions (HMLR) and convolutional neural networks (CNN). Our study focuses on the discussion of various design choices, interpretation of nonlinear models, and providing novel recommendations and insights for the construction of nonlinear regression models for NIR data. The performance of the investigated nonlinear methods was benchmarked against traditional methods on multiple real-world NIR datasets. The datasets have different sizes (varying from 400 samples to 7000 samples) and are from various sources. Hypothesis tests on separate, independent test sets indicated that nonlinear methods give significant improvements in most practical NIR calibrations.
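
To show the kind of benchmarking the abstract describes, here is a minimal comparison of PLS against Gaussian process regression on synthetic stand-in spectra, using scikit-learn; the data generation and hyperparameters are illustrative assumptions, not the dissertation's datasets or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 200))                 # stand-in "spectra"
y = X[:, 50]**2 + 0.5 * X[:, 120] + rng.normal(scale=0.1, size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                               normalize_y=True).fit(X_tr, y_tr)
print("PLS R^2:", pls.score(X_te, y_te))        # linear baseline
print("GPR R^2:", gpr.score(X_te, y_te))        # nonlinear method
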
31

Sjölin, Martin. "Methods of image acquisition and calibration for x-ray computed tomography." Doctoral thesis, KTH, Medicinsk bildfysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-195024.

Abstract:
X-ray computed tomography (CT) is a common medical imaging device for acquiring high-resolution 3D images of the interior of the human body. The images are formed by mathematical reconstruction from hundreds of planar x-ray images that have been acquired in less than a second. Photon-counting spectral detectors are seen by many as the next big step in the development of medical CT. The potential benefits include: quantitative CT, ultra-low dose imaging and optimal contrast-to-noise performance. The current aim for the research pursued by the Physics of Medical Imaging Group at KTH is to develop, and commercialize, a photon-counting spectral detector using silicon wafers in edge-on geometry. With the introduction of a new detector come many challenges, some of which this Thesis aims to address. Efficient calibration schemes will be an essential part of the realization of photon-counting spectral detectors in clinical CT. In the first part of the Thesis, three calibration methods are presented: two methods for calibration of the energy thresholds on multi-bin spectral detectors and one method for geometric calibration of edge-on detectors that are mounted in a CT gantry. The CT image acquisition produces large amounts of data that have to be transported out of the system, preferably in real-time. Already today, fewer samples are acquired when operating at very high rotation speeds due to bandwidth limitations. For photon-counting spectral detectors, the amount of data will be even larger due to the additional energy information and the generally smaller pixels, and it is therefore desirable to minimize the number of angular samples acquired per revolution. In the second part of the Thesis, two methods for relaxing the angular sampling requirement are presented. The first method uses the built-in redundancy of multi-layer detectors to increase the angular sampling rate via a temporal offset between the detector layers. The second method uses decimation in the view (angular) direction as a means for compression of CT sinogram data. The compression can be performed on the CT gantry and thus lower the required bandwidth of the data transfer. Although the overall aim of this work has been to develop methods that facilitate the introduction of photon-counting spectral detectors for medical CT, the presented methods are also applicable in the broader context of calibration of x-ray detectors and CT image acquisition.
32

Vieira Rodriguez, Cristian. "Calibration of Electrical Methods for Detecting Gas Injection in Porous Media." Paris, Institut de physique du globe, 2013. http://www.theses.fr/2013GLOB1001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Najafi, Mohammad. "New methods for calibration and tool tracking in ultrasound-guided interventions." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/51776.

Full text
Abstract:
Ultrasound is a safe, portable, inexpensive and real-time modality that can produce 2D and 3D images. It is a valuable intra-operative imaging modality to guide surgeons aiming to achieve higher accuracy of the intervention and improve patient outcomes. In all the clinical applications that use tracked ultrasound, one main challenge is to precisely locate the ultrasound image pixels with respect to a tracking sensor on the transducer. This process is called spatial calibration, and the objective is to determine the spatial transformation between the ultrasound image coordinates and a coordinate system defined by the tracking sensor on the transducer housing. Another issue in ultrasound-guided interventions is that tracking surgical tools (for example an epidural needle) usually requires expensive, large optical trackers or low-accuracy magnetic trackers, and there is a need for a low-cost, easy-to-use and accurate solution. In this thesis, for the first problem I have proposed two novel complementary methods for ultrasound calibration that provide ease of use and high accuracy. These methods are based on my differential technique, which enables high measurement accuracy. I developed a closed-form formulation that makes it possible to achieve high accuracy using a low number of images. For the second problem, I developed a method to track surgical tools (epidural needles in particular) using a single camera mounted on the ultrasound transducer to facilitate ultrasound-guided interventions. The first proposed ultrasound calibration method achieved an accuracy of 0.09 ± 0.39 mm. The second method, despite using a much simpler phantom, achieved accuracy similar to the N-wire method. The proposed needle tracking method showed high accuracy of 0.94 ± 0.46 mm.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
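A generic building block behind such spatial calibration is point-based rigid registration between two coordinate systems. The sketch below is the classic Kabsch/SVD least-squares solution on synthetic data, not the author's closed-form differential method:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rotation R and translation t such that Q ~ R @ P + t.

    P, Q: (3, N) arrays of corresponding points.
    """
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Synthetic check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(1)
P = rng.random((3, 20))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
R, t = rigid_register(P, R_true @ P + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```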
APA, Harvard, Vancouver, ISO, and other styles
34

Witman, Sandra Lynn 1958. "Radiometric calibration of the Thematic Mapper 48-inch diameter spherical integrating source (48-SIS) using two different calibration methods." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/275523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Backlund, Ludvig, Anna Martin, and Gustaf Svantesson. "Methods for calibration of the vibration measurement system EVME used on the JAS 39 Gripen engine." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-226959.

Full text
Abstract:
This project investigates methods for calibration and functional testing of an Engine Vibration Measurement Equipment (EVME). The equipment uses accelerometers to measure vibrations on the JAS 39 Gripen aircraft engine to ensure that the engine is correctly installed. A model for emulating the electronic signals generated by the accelerometers was created in Simulink, and different ways of utilizing the model as part of a complete test system were examined. Upon examination of different calibration methods, it was found that a complete test system needs both the ability to send electrical signals directly into the EVME and a way of testing the accelerometers mechanically. Both short- and long-term solutions fulfilling these requirements were prepared.
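For illustration, emulating an accelerometer signal electrically can be as simple as synthesizing engine-order sinusoids plus noise; all numbers below are assumed for the sketch and do not come from the EVME specification:

```python
import numpy as np

fs = 10_000.0                          # sample rate [Hz] (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
f0 = 9_000.0 / 60.0                    # fundamental at an assumed 9000 rpm
emulated = (2.0 * np.sin(2 * np.pi * f0 * t)          # 1st engine order
            + 0.5 * np.sin(2 * np.pi * 2 * f0 * t))   # 2nd engine order
emulated += np.random.default_rng(2).normal(0.0, 0.1, t.size)  # sensor noise
print("RMS vibration level:", np.sqrt(np.mean(emulated**2)))
```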
APA, Harvard, Vancouver, ISO, and other styles
36

Thoman, Glen W. "Continuous analysis methods in stormwater management practice, sensitivity, calibration and model development." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq29398.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Korostelev, Michael. "Performance Evaluation for Full 3D Projector Calibration Methods in Spatial Augmented Reality." Master's thesis, Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/213116.

Full text
Abstract:
Electrical and Computer Engineering
M.S.E.E.
Spatial Augmented Reality (SAR) has proven to be an interesting tool not only for visualizing information in novel ways but also for developing creative works in the performing arts. The main challenge is to determine the accurate geometry of a projection space and an efficient, effective way to project digital media and information to create an augmented space. In our previous implementation of SAR, we developed a projector-camera calibration approach using infrared markers. However, the projection suffers severe distortion due to the lack of depth information in the projection space. For this research, we propose to develop an RGBD sensor - projector system to replace our current projector-camera SAR system. Proper calibration between the camera or sensor and the projector links vision to projection, answering the question of which point in camera space maps to which point in projection space. Calibration resolves the problem of capturing the geometry of the space and allows us to accurately augment the surfaces of volumetric objects and features. In this work, three calibration methods are examined for performance and accuracy. Two of these methods are existing adaptations of 2D camera-projector calibration (calibration using arbitrary planes and ray-plane intersection); the third is our proposed novel technique, which utilizes point cloud information from the RGBD sensor directly. Through analysis and evaluation using re-projection error, results are presented identifying the proposed method as practical and robust.
Temple University--Theses
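The re-projection error used for evaluation here is a standard quantity. A minimal sketch, with a made-up intrinsic matrix K and identity extrinsics, could look as follows:

```python
import numpy as np

def reprojection_rmse(X, uv, K, R, t):
    """RMS reprojection error of 3D points X (N,3) against pixels uv (N,2).

    K is a 3x3 camera/projector intrinsic matrix, (R, t) the extrinsics.
    This is only the generic error measure, not the calibration itself.
    """
    Xc = X @ R.T + t                 # world -> camera/projector frame
    p = Xc @ K.T                     # homogeneous pixel coordinates
    p = p[:, :2] / p[:, 2:3]         # perspective division
    return np.sqrt(np.mean(np.sum((p - uv) ** 2, axis=1)))

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
X = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.5]])
uv = X @ K.T
uv = uv[:, :2] / uv[:, 2:3]          # exact projection with R=I, t=0
print(reprojection_rmse(X, uv, K, np.eye(3), np.zeros(3)))  # ~0.0
```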
APA, Harvard, Vancouver, ISO, and other styles
38

Baret, Monique. "Multivariate calibration methods for analysis of halide ions using an ise array." Université Joseph Fourier (Grenoble), 1999. http://www.theses.fr/1999GRE10148.

Full text
Abstract:
A calibration procedure was developed for the analysis of halide ions using an array of ion-selective electrodes (ISEs). Different levels of interference were considered, in particular difficult cases of interference between chlorides and bromides. Several calibration methods were compared: univariate and multivariate linear methods and neural networks. Where interferences are weak or absent, univariate methods are sufficient to determine the concentrations, but in the case of very strong interferences no determination is possible. For moderate interferences, multivariate methods can be employed, and neural network calibration is shown to be the best of the studied methods for calibrating the ISE array for the determination of ionic concentrations.
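To make the calibration setting concrete, here is a sketch that simulates an ISE array via the Nikolskii-Eisenman response (with invented standard potentials, slopes and selectivity coefficients) and fits an inverse multivariate linear calibration; the thesis found neural networks superior under strong interference, but the linear case shows the data layout:

```python
import numpy as np

rng = np.random.default_rng(3)

# Nikolskii-Eisenman response of a 3-electrode array to Cl- with Br-
# interference: E = E0 + s * log10(a_Cl + K_sel * a_Br). All constants
# below are made-up illustration values.
E0 = np.array([210.0, 195.0, 180.0])     # standard potentials [mV]
s = np.array([-58.0, -57.0, -56.0])      # anionic slopes [mV/decade]
Ksel = np.array([0.8, 0.05, 2.0])        # selectivity coefficients

a_cl = rng.uniform(1e-4, 1e-2, 200)
a_br = rng.uniform(1e-4, 1e-2, 200)
E = E0 + s * np.log10(a_cl[:, None] + Ksel * a_br[:, None])

# Inverse multivariate calibration: log-activities from potentials by
# ordinary least squares (intercept column appended).
X = np.column_stack([E, np.ones(len(E))])
Y = np.column_stack([np.log10(a_cl), np.log10(a_br)])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("max abs fit residual (log units):", np.abs(X @ coef - Y).max())
```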
APA, Harvard, Vancouver, ISO, and other styles
39

Kim, Jee Yun. "Data-driven Methods in Mechanical Model Calibration and Prediction for Mesostructured Materials." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/85210.

Full text
Abstract:
Mesoscale design involving control of the material distribution pattern can create a statistically heterogeneous material system, which has shown increased adaptability to complex mechanical environments involving highly non-uniform stress fields. Advances in multi-material additive manufacturing can aid in this mesoscale design, providing voxel-level control of material properties. This vast freedom in design space also unlocks possibilities within optimization of the material distribution pattern. The optimization problem can be divided into a forward problem focusing on accurate prediction and an inverse problem focusing on efficient search of the optimal design. In the forward problem, the physical behavior of the material can be modeled based on fundamental mechanics laws and simulated through finite element analysis (FEA). A major limitation in modeling is the unknown parameters in constitutive equations that describe the constituent materials; determining these parameters via conventional single-material testing has proven to be insufficient, which necessitates novel and effective approaches to calibration. A calibration framework based on Bayesian inference, which integrates data from simulations and physical experiments, has been applied to a study involving a mesostructured material fabricated by fused deposition modeling. Calibration results provide insights into what values these parameters converge to, as well as which material parameters the model output depends on most, while accounting for sources of uncertainty introduced during the modeling process. Additionally, this statistical formulation is able to provide quick predictions of the physical system by implementing a surrogate and a discrepancy model. The surrogate model is meant to be a statistical representation of the simulation results, circumventing issues arising from computational load, while the discrepancy model accounts for the difference between the simulation output and physical experiments. In this thesis, this Bayesian calibration framework is applied to a material bending problem, where in-situ mechanical characterization data and FEA simulations based on constitutive modeling are combined to produce updated values of the unknown material parameters with uncertainty.
Master of Science
A material system obtained by applying a pattern of multiple materials has proven its adaptability to complex practical conditions. The layer-by-layer manufacturing process of additive manufacturing allows for this type of design because of its control over where material is deposited. This possibility then raises the question of how a multi-material system can be optimized in its design for a given application. In this research, we focus mainly on the problem of accurately predicting the response of the material when subjected to stimuli. Conventionally, simulations aided by finite element analysis (FEA) were relied upon for prediction; however, this approach presents issues such as long run times and uncertainty in context-specific simulation inputs. We have instead adopted a framework using advanced statistical methodology able to combine both experimental and simulation data to significantly reduce run times as well as quantify the various uncertainties associated with running simulations.
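The core machinery of such Bayesian calibration can be sketched with a random-walk Metropolis sampler on a toy forward model standing in for the FEA simulation; the forward model, noise level, and flat positive prior below are all assumed:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for the simulator: response as a function of one unknown
# material parameter theta (the real forward model is an FEA run).
def simulator(theta, load):
    return theta * load / (1.0 + 0.1 * load)

load = np.linspace(1.0, 10.0, 15)
theta_true, sigma = 2.5, 0.05
y_obs = simulator(theta_true, load) + rng.normal(0.0, sigma, load.size)

def log_post(theta):
    if theta <= 0.0:                      # flat prior on theta > 0 (assumed)
        return -np.inf
    r = y_obs - simulator(theta, load)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis: accept proposals with prob min(1, post ratio).
theta, lp = 1.0, log_post(1.0)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5_000:])          # discard burn-in
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")
```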
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, Xuyuan. "Statistical validation and calibration of computer models." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39478.

Full text
Abstract:
This thesis deals with modeling, validation and calibration problems in experiments of computer models. Computer models are mathematical representations of real systems developed for understanding and investigating the systems. Before a computer model is used, it often needs to be validated by comparing the computer outputs with physical observations and calibrated by adjusting internal model parameters in order to improve the agreement between the computer outputs and physical observations. As computer models become more powerful and popular, the complexity of input and output data raises new computational challenges and stimulates the development of novel statistical modeling methods. One challenge is to deal with computer models with random inputs (random effects). This kind of computer model is very common in engineering applications. For example, in a thermal experiment at the Sandia National Lab (Dowding et al. 2008), the volumetric heat capacity and thermal conductivity are random input variables. If input variables are randomly sampled from particular distributions with unknown parameters, the existing methods in the literature are not directly applicable. The reason is that integration over the random variable distribution is needed for the joint likelihood, and the integration cannot always be expressed in a closed form. In this research, we propose a new approach which combines the nonlinear mixed effects model and the Gaussian process model (kriging model). Different model formulations are also studied to gain a better understanding of validation and calibration activities by using the thermal problem. Another challenge comes from computer models with functional outputs. While many methods have been developed for modeling computer experiments with a single response, the literature on modeling computer experiments with functional response is sparse. Dimension reduction techniques can be used to overcome the complexity problem of functional response; however, they generally involve two steps. Models are first fit at each individual setting of the input to reduce the dimensionality of the functional data. Then the estimated parameters of the models are treated as new responses, which are further modeled for prediction. Alternatively, pointwise models are first constructed at each time point and then functional curves are fit to the parameter estimates obtained from the fitted models. In this research, we first propose a functional regression model to relate functional responses to both design and time variables in a single step. Secondly, we propose a functional kriging model which uses variable selection methods by imposing a penalty function. We show that the proposed model performs better than dimension-reduction-based approaches and the kriging model without regularization. In addition, non-asymptotic theoretical bounds on the estimation error are presented.
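A minimal kriging (Gaussian process) emulator, the building block underlying such surrogate modeling, can be sketched as follows; the thesis's functional kriging with penalized variable selection is substantially richer:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-6):
    """Gaussian process (kriging) posterior mean with an RBF kernel.

    1-D inputs for brevity; a minimal emulator of computer-model output.
    """
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length**2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))  # jitter for stability
    return k(X_test, X_train) @ np.linalg.solve(K, y_train)

X = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * X)            # pretend these are simulator runs
Xs = np.linspace(0.0, 1.0, 5)
print(gp_predict(X, y, Xs))          # interpolates the "simulator" smoothly
```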
APA, Harvard, Vancouver, ISO, and other styles
41

Weining, Wang [Verfasser], Wolfgang [Akademischer Betreuer] Härdle, and Vladimir [Akademischer Betreuer] Spokoiny. "Adaptive methods for risk calibration / Wang Weining. Gutachter: Wolfgang Karl Härdle ; Vladimir Spokoiny." Berlin : Humboldt Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2012. http://d-nb.info/1026475252/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Dawson, Debra. "Evaluation and calibration of functional network modeling methods based on known anatomical connections." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114238.

Full text
Abstract:
Recent studies have identified large-scale brain networks based on the spatio-temporal structure of spontaneous fluctuations in resting-state fMRI data. It is expected that functional connectivity based on resting-state data is reflective of, but not identical to, the underlying anatomical connectivity. However, which functional connectivity analysis methods reliably predict the network structure remains unclear. Here we tested and compared network connectivity analysis methods by applying them to fMRI resting-state time-series obtained from the human visual cortex. The methods evaluated here are those previously tested against simulated data in Smith et al. (Neuroimage, 2011). To this end, we defined regions within retinotopic visual areas V1, V2, and V3 according to their eccentricity in the visual field, delineating central, intermediate, and peripheral eccentricity regions of interest (ROIs). These ROIs served as nodes in the models we study. We based our evaluation on the "ground truth" of the thoroughly studied, retinotopically organized anatomical connectivity in the monkey visual cortex. For each evaluated method, we computed the fractional rate of detecting connections known to exist ("c-sensitivity"), while using a threshold of the 95th percentile of the distribution of interaction magnitudes of those connections not expected to exist. Under optimal conditions, including a session duration of 68 minutes, a relatively small network consisting of 9 nodes and artifact-free regression of the global effect, each of the top methods predicted the expected connections with 75%-83% c-sensitivity. Partial Correlation performed best (PCorr; 83%), followed by Regularized Inverse Covariance (ICOV; 79%), Bayesian Network methods (BayesNet; 77%), Correlation (75%), and General Synchronization measures (75%). With decreased session duration, these top methods saw decreases in c-sensitivity, achieving 66%-78% and 60%-70% for 34- and 17-minute sessions, respectively. With a short resting-state fMRI scan of 8.5 minutes (TR = 2 s), none of the methods predicted the real network well, with ICOV (53%) and PCorr (51%) performing best. With increased complexity of the network from 9 to 36 nodes, multivariate methods including PCorr and BayesNet saw a decrease in performance. However, this decrease became small when using data from a long (68-minute) session. Artifact-free regression of the global effect significantly increased the c-sensitivity of all top-performing methods. In an overall evaluation across all tests we performed, PCorr, ICOV and BayesNet set themselves somewhat above all other methods. We propose that data-based calibration based on known anatomical connections be integrated into future network studies, in order to maximize sensitivity and reduce false positives.
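The c-sensitivity evaluation can be sketched on synthetic data: estimate partial correlations from the precision matrix of simulated time series, then threshold at the 95th percentile of the null (non-connected) edges. The network below and its connection strengths are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 9-node network with a few assumed "true" anatomical connections.
n_nodes, T = 9, 2040                       # T ~ 68 min at TR = 2 s
true_edges = [(0, 1), (1, 2), (3, 4), (4, 5), (6, 7), (7, 8)]
A = np.eye(n_nodes)                        # precision matrix of the model
for i, j in true_edges:
    A[i, j] = A[j, i] = 0.3
ts = rng.multivariate_normal(np.zeros(n_nodes), np.linalg.inv(A), size=T)

# Partial correlation from the inverse sample covariance (precision).
P = np.linalg.inv(np.cov(ts.T))
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)

iu = np.triu_indices(n_nodes, k=1)
strengths = np.abs(pcorr[iu])
is_true = np.array([(i, j) in true_edges for i, j in zip(*iu)])

# c-sensitivity: detection rate of true edges at the 95th percentile
# of the null edge strengths, mirroring the evaluation scheme above.
thr = np.percentile(strengths[~is_true], 95)
print("c-sensitivity:", np.mean(strengths[is_true] > thr))
```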
APA, Harvard, Vancouver, ISO, and other styles
43

Gómez, Cortés Verónica. "Sequential injection analysis using second-order calibration for the development of analytical methods." Doctoral thesis, Universitat Rovira i Virgili, 2007. http://hdl.handle.net/10803/9019.

Full text
Abstract:
Environmental analysis is a field of great interest to the analytical chemistry community. It is very important to ensure and maintain the quality of both air and water so that their composition poses no risk to living organisms. Industrial processes have contributed to improving quality of life, but they can produce by-products that, if introduced directly or indirectly into water, cause serious pollution problems. Large amounts of industrial waste must therefore be treated, and their generation continuously minimized.

Advances in the environmental field have focused on the development of techniques that are easy to use, require little sample manipulation, have low costs and short analysis times, and can be easily automated. Flow analysis techniques belong to this category.

Current analytical instruments can generate data of different dimensionality. The analytical signal can be a scalar (a single absorbance measurement), a vector (absorbance measured over time), or a data matrix (a spectrum recorded over time) for each analysed sample. These data are classified as zero-order data when the signal is a scalar, first-order data when it is a vector, and second-order data when it is a matrix. Zero-order data are useful when there is a unique and specific response for the analyte of interest, whereas first-order data allow quantification of an analyte in the presence of interferents, provided these are contained in the calibration samples. When the interferents are unknown and cannot be reproduced in the calibration samples, second-order data and second-order calibrations are used.

This doctoral thesis develops new analytical methodologies for determining by-products of the tanning industry using sequential injection analysis (SIA) and second-order calibration.

SIA, Sequential Injection Analysis, is a flow injection technique introduced in 1990 by Professor Jaromir Ruzicka, in which the sample and reagents are aspirated sequentially into the system, mixed by diffusion in the reactor, and then pumped through the detector.
A SIA system coupled with a diode array spectrophotometer (DAD) can generate second-order data: the analytical signal for each sample is a data matrix with absorbances over a wavelength interval on one axis and over a time interval on the other.

In this project, we present several practical applications of sequential injection analysis with second-order calibration by multivariate curve resolution alternating least squares (MCR-ALS) for the determination and speciation of chromium and for the simultaneous determination of three acid dyes. Furthermore, we present two critical bibliographic overviews, one on chromium determination and one on multicomponent analysis in flow systems. We also developed an application of sequential injection chromatography (SIC) with MCR-ALS for determining phenolic derivatives.

Chromium is an element widely used in the tanning industry. Its determination is of high environmental interest because of the toxicity of the Cr(VI) species, a carcinogenic agent, whereas Cr(III) is an essential element. In this Thesis we present four papers on chromium determination. The first paper, Use of multivariate curve resolution for determination of chromium in tanning samples using sequential injection analysis, Analytical and Bioanalytical Chemistry 382 (2005) 328-334, establishes the basis for quantifying chromium with the SIA-MCR-ALS system. Depending on the reaction capacity of Cr(III), different analytical sequences can be designed in the SIA system. Cr(III) was oxidized to Cr(VI) outside the SIA system to increase the sensitivity to Cr(III). A pH gradient was then induced in the SIA system to follow the conversion between the two Cr(VI) species, chromate and dichromate. The second paper, Factorial design for optimizing chromium determination in tanning wastewater, Microchemical Journal 83 (2006) 98-104, presents an automatic system for total chromium determination that uses different experimental designs to optimize the overall process. The third paper, Chromium speciation using sequential injection analysis and multivariate curve resolution, Analytica Chimica Acta 571 (2006) 129-135, determines the two main chromium species, Cr(III) and Cr(VI), simultaneously in a single analysis in the presence of interferents. The fourth paper, Chromium determination and speciation since 2000, Trends in Analytical Chemistry, 25 (2006) 1006-1015, is a bibliographic study that describes the available options for chromium determination and speciation.

Other analytes of interest in this field are dyes in wastewater, owing to their high toxicity and low biodegradability; studying the proportion of dyes in samples and their effect over time is therefore highly relevant. Chromium and dyes are also compounds of interest in other application fields, such as food, printing, and graphic design. We present four scientific papers in this area with two practical objectives: on the one hand, to control the amount of dye that remains in solution after the dyeing process; on the other hand, to study strategies for reducing the percentage of dyes in wastewater. The first paper, Sequential injection analysis with second-order treatment for the determination of dyes in the exhaustion process of tanning effluents, Talanta 71 (2007) 1393-1398, describes the method developed for determining three acid dyes in a single step. This method was applied to water samples from leathers tanned with chromium salts and with vegetable agents. The second paper, Matrix effect in second-order data. Determination of dyes in a tanning process using vegetable tanning agents, Analytica Chimica Acta 600 (2007) 233-239, presents strategies to use when a sample exhibits matrix effects in second-order data. To meet the second objective, we studied the behaviour of dyes on activated carbon; the results are presented in the third paper, Kinetic and adsorption study of acid dye removal using activated carbon, Chemosphere 69 (2007) 1151-1158, where we studied the adsorption and kinetic parameters of dyes alone in solution and in mixtures. The fourth paper, Experimental designs for optimizing and fitting the adsorption of dyes onto activated carbon, submitted, describes a sequential methodology for obtaining a response surface and optimizing the adsorption process as a way of eliminating dyes from tanning-industry wastewater.

In the last period of the Thesis, we explored different strategies for increasing the capacity of simultaneous analysis using flow systems and second-order calibration. This study produced two papers. The first, Multicomponent analysis in flow systems, Trends in Analytical Chemistry, 26 (2007) 767-774, gives a general overview of multi-analyte determinations in flow systems. The second, Coupling of sequential injection chromatography with multivariate curve resolution alternating least squares for enhancement of peak capacity, Analytical Chemistry 79 (2007) 7767-7774, combines two strategies proposed for multicomponent analysis: sequential injection chromatography (SIC) and second-order calibration with MCR-ALS. This study was carried out in collaboration with the Analytical Chemistry, Automation and Environment group of the University of the Balearic Islands.
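The MCR-ALS core referred to throughout is an alternating least-squares bilinear factorization. Below is a bare-bones sketch on a synthetic SIA-DAD matrix, omitting the closure and unimodality constraints and convergence tests used in practice:

```python
import numpy as np

def mcr_als(D, S0, n_iter=100):
    """Bare-bones MCR-ALS: factor D ~ C @ S.T with non-negativity.

    D:  (times x wavelengths) second-order data for one sample.
    S0: initial guess of the pure spectra, (wavelengths x n_species).
    """
    S = S0.copy()
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S.T), 0.0, None)    # elution profiles
        S = np.clip((np.linalg.pinv(C) @ D).T, 0.0, None)  # pure spectra
    return C, S

# Synthetic data: two species with Gaussian elution profiles and spectra.
t = np.linspace(0, 1, 60)[:, None]
w = np.linspace(0, 1, 80)[:, None]
C_true = np.hstack([np.exp(-((t - 0.4) / 0.1) ** 2),
                    np.exp(-((t - 0.6) / 0.1) ** 2)])
S_true = np.hstack([np.exp(-((w - 0.3) / 0.05) ** 2),
                    np.exp(-((w - 0.7) / 0.05) ** 2)])
D = C_true @ S_true.T

C, S = mcr_als(D, S_true + 0.05)           # start from perturbed spectra
print("reconstruction error:", np.linalg.norm(D - C @ S.T))
```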
APA, Harvard, Vancouver, ISO, and other styles
44

Kim, Jungnam. "A comparison of calibration methods and proficiency estimators for creating IRT vertical scales." Diss., University of Iowa, 2007. http://ir.uiowa.edu/etd/163.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Parada, Robert John 1970. "In-flight absolute calibration of radiometric sensors over dark targets using vicarious methods." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282297.

Full text
Abstract:
The ability to conduct in-flight, absolute radiometric calibrations of ocean color sensors will determine their usefulness in the decade to come. On-board calibration systems are often integrated into the overall design of such sensors and have claimed uncertainty levels below 5%. Independent means of system calibration are needed to confirm that the sensor is accurately calibrated. Vicarious (i.e. ground-referencing) methods are an attractive way to conduct this verification. This research describes the development of in-flight, absolute radiometric calibration methods which reference dark (i.e. low-reflectance) sites. The high sensitivity of ocean color sensors results in saturation over bright surfaces. Low-reflectance targets, such as water bodies, are therefore required for their vicarious calibration. Sensitivity analyses of the reflectance-based and radiance-based techniques, when applied to a water target, are performed. Uncertainties in atmospheric parameters, surface reflectance measurements, and instrument characterization are evaluated for calibrations of a representative ocean color sensor. For a viewing geometry near the sun glint region, reflectance-based uncertainties range between 1.6% and 2.3% for visible and near-IR wavelengths; radiance-based uncertainties range between 6.8% and 20.5%. These studies indicate that better characterization of aerosol parameters is desired and that radiometer pointing accuracy must be improved to make the radiance-based method useful. The uncertainty estimates are evaluated using data from a field campaign at Lake Tahoe in June, 1995. This lake is located on the California-Nevada border and has optical characteristics similar to oceanic waters. Aircraft-based radiance data and surface measurements of water reflectance are used to calibrate visible and near infrared bands of the Airborne Visible/InfraRed Imaging Spectrometer (AVIRIS). The vicariously-derived calibration coefficients are compared to those obtained from a preflight calibration of AVIRIS. The results agree at the 0.3-7.7% level for the reflectance-based technique, which is within the believed method uncertainties. Finally, as a consequence of this research, the testing and refinement of radiative transfer codes applicable to oceanic environments is accomplished. These modifications lead to an improvement in the prediction of top-of-atmosphere radiances over water targets.
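At its simplest, a reflectance-based calibration coefficient is the ratio of the sensor's digital output to the top-of-atmosphere radiance predicted by a radiative-transfer code driven by measured surface reflectance; the numbers in this sketch are made up:

```python
import numpy as np

# Hypothetical per-band values: sensor digital numbers and predicted
# at-sensor radiances [W m-2 sr-1 um-1] from a radiative-transfer run.
dn = np.array([812.0, 640.0, 455.0])
L_pred = np.array([40.3, 31.8, 22.4])

cal_coeff = dn / L_pred                   # counts per unit radiance
preflight = np.array([20.0, 20.3, 20.1])  # assumed preflight coefficients
percent_diff = 100.0 * (cal_coeff - preflight) / cal_coeff
print(cal_coeff, percent_diff)            # compare vicarious vs preflight
```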
APA, Harvard, Vancouver, ISO, and other styles
46

Becker, David [Verfasser], Matthias [Akademischer Betreuer] Becker, and Rene [Akademischer Betreuer] Forsberg. "Advanced Calibration Methods for Strapdown Airborne Gravimetry / David Becker ; Matthias Becker, René Forsberg." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2016. http://d-nb.info/1117135195/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Yassin-Kassab, Abdullah. "Entropy-based inference and calibration methods for civil engineering system models under uncertainty." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lindholm, Love. "Numerical methods for the calibration problem in finance and mean field game equations." Doctoral thesis, KTH, Numerisk analys, NA, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214082.

Full text
Abstract:
This thesis contains five papers and an introduction. The first four of the included papers are related to financial mathematics and the fifth paper studies a case of mean field game equations. The introduction thus provides background in financial mathematics relevant to the first four papers, and an introduction to mean field game equations related to the fifth paper. In Paper I, we use theory from optimal control to calibrate the so-called local volatility process given market data on options. Optimality conditions are in this case given by the solution to a Hamiltonian system of differential equations. Regularization is added by mollifying the Hamiltonian in this system and we solve the resulting equation using a trust region Newton method. We find that our resulting algorithm for the calibration problem is both accurate and robust. In Paper II, we solve the local volatility calibration problem using a technique that is related to - but also different from - the Hamiltonian framework in Paper I. We formulate the optimization problem by means of a Lagrangian multiplier and add a Tikhonov-type regularization directly on the parameter we are trying to estimate. The resulting equations are solved with the same trust region Newton method as in Paper I, and again we obtain an accurate and robust algorithm for the calibration problem. Paper III formulates the problem of calibrating a local volatility process to option prices in a way that differs entirely from what is done in the first two papers. We exploit the linearity of the Dupire equation governing the prices to write the optimization problem as a quadratic programming problem. We illustrate by a numerical example that the method can indeed be used to find a local volatility that gives a good match between model prices and observed market prices on options. Paper IV deals with the hedging problem in finance. We investigate whether so-called quadratic hedging strategies formulated for a stochastic volatility model can generate smaller hedging errors than obtained when hedging with the standard Black-Scholes framework. We thus apply the quadratic hedging technique as well as Black-Scholes hedging to observed option prices written on an equity index and calculate the empirical errors in the two cases. Our results indicate that smaller errors can be obtained with quadratic hedging in the models used than with hedging in the Black-Scholes framework. Paper V describes a model of an electricity market consisting of households that try to minimize their electricity cost by dynamic battery usage. We assume that the price process of electricity depends on the aggregated momentaneous electricity consumption. With this assumption, the cost minimization problem of each household is governed by a system of mean field game equations. We also provide an existence and uniqueness result for these mean field game equations. The equations are regularized and the approximate equations are solved numerically. We illustrate how the battery usage affects the electricity price.
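For reference, the Dupire equation exploited in Paper III admits a compact special case. Assuming zero interest rate and no dividends (an assumption made here only for brevity), the local volatility is read off the call price surface C(T, K) as:

```latex
% Local volatility from the call price surface
% (zero rates and dividends assumed):
\sigma_{\mathrm{loc}}^{2}(T,K)
  \;=\;
  \frac{\partial C / \partial T}
       {\tfrac{1}{2}\, K^{2}\, \partial^{2} C / \partial K^{2}}
```

Papers I and II instead recover the local volatility by optimal-control and Lagrangian formulations of the same calibration problem.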

APA, Harvard, Vancouver, ISO, and other styles
49

Tabataba, Farzaneh Sadat. "On the 3 M's of Epidemic Forecasting: Methods, Measures, and Metrics." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/89646.

Full text
Abstract:
Over the past few decades, various computational and mathematical methodologies have been proposed for forecasting seasonal epidemics. In recent years, the deadly effects of enormous pandemics such as the H1N1 influenza virus, Ebola, and Zika have compelled scientists to find new ways to improve the reliability and accuracy of epidemic forecasts. The improvement and variety of these prediction methods are undeniable. Nevertheless, many challenges remain unresolved on the path to forecasting outbreaks using surveillance data. Obtaining clean real-time data has always been an obstacle. Moreover, surveillance data are usually noisy, and handling the uncertainty of the observed data is a major issue for forecasting algorithms. Choosing correct modeling assumptions regarding the nature of the infectious disease is another dilemma. Oversimplified models can lead to inaccurate forecasts, whereas more complicated methods require additional computational resources and information; without those, the model may not be able to converge to a unique optimum solution. Over the last decade, there has been a significant effort towards achieving better epidemic forecasting algorithms. However, the lack of standard, well-defined evaluation metrics impedes fair judgment of the proposed methods. This dissertation is divided into two parts. In the first part, we present a Bayesian particle filter calibration framework integrated with an agent-based model to forecast the epidemic trend of diseases like flu and Ebola. Our approach uses Bayesian statistics to estimate the underlying disease model parameters given the observed data and to handle the uncertainty in the reasoning. An individual-based model with different intervention strategies can result in a large number of unknown parameters that must be properly calibrated. As particle filters can collapse in very large-scale systems (the curse-of-dimensionality problem), achieving the optimum solution becomes more challenging. Our proposed particle filter framework utilizes machine learning concepts to restrain the intractable search space. It incorporates a smart analyzer in the state dynamics unit that examines the predicted and observed data using machine learning techniques to guide the direction and amount of perturbation of each parameter in the searching process. The second part of this dissertation focuses on providing standard evaluation measures for evaluating epidemic forecasts. We present an end-to-end framework that introduces epidemiologically relevant features (Epi-features), error measures, and ranking schema as the main modules of the evaluation process. Lastly, we provide the evaluation framework as a software package named Epi-Evaluator and demonstrate the potential and capabilities of the framework by applying it to the output of different forecasting methods.
PHD
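The particle-filter idea can be sketched in a few lines with a stochastic SIR model standing in for the agent-based simulator; the observation model, noise scale, and all parameter values below are assumed:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy bootstrap particle filter estimating the transmission rate beta of a
# stochastic SIR model from noisy weekly incidence.
N, gamma, weeks = 10_000, 0.5, 20

def sir_step(S, I, beta):
    new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / N))
    new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
    return S - new_inf, I + new_inf - new_rec, new_inf

# Synthetic "surveillance" data generated with beta_true = 0.9.
S, I, obs = N - 10, 10, []
for _ in range(weeks):
    S, I, ni = sir_step(S, I, 0.9)
    obs.append(ni)

# Particles carry (S, I, beta); weights from a Gaussian observation model.
n_part = 2_000
Sp = np.full(n_part, N - 10)
Ip = np.full(n_part, 10)
beta = rng.uniform(0.3, 1.5, n_part)
for y in obs:
    ni = np.zeros(n_part)
    for k in range(n_part):
        Sp[k], Ip[k], ni[k] = sir_step(Sp[k], Ip[k], beta[k])
    w = np.exp(-0.5 * ((ni - y) / (0.1 * y + 5.0)) ** 2) + 1e-300
    idx = rng.choice(n_part, size=n_part, p=w / w.sum())   # resample
    Sp, Ip = Sp[idx], Ip[idx]
    beta = np.clip(beta[idx] + rng.normal(0.0, 0.01, n_part), 0.05, None)
print(f"estimated beta: {beta.mean():.2f} (true 0.9)")
```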
APA, Harvard, Vancouver, ISO, and other styles
50

Grasreiner, Sebastian. "Combustion modeling for virtual SI engine calibration with the help of 0D/3D methods." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2012. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-90518.

Full text
Abstract:
Spark-ignited engines are still important for conventional as well as hybrid power trains and are thus subject to optimization. Today a lot of functionality arises from software solutions, which have to be calibrated. Modern engine technologies provide extensive variability in their valve train, fuel injection and load control. Calibration effort is therefore very high and is to be reduced by the introduction of virtual methods. In this work a physical 0D combustion model is set up which can cope with a new generation of spark ignition engines. To this end, cylinder thermodynamics are first modeled and validated across the whole engine map with the help of a real-time capable approach. Afterwards an up-to-date turbulence model is introduced, which is based on a quasi-dimensional k-epsilon approach and can cope with turbulence production from large-scale shearing. A simplified model for ignition delay is implemented which emphasizes the transition from laminar to turbulent flame propagation after ignition. The modeling is completed with the calculation of overall heat release rates in a 0D entrainment approach with the help of turbulent flame velocities. After validation of all sub-models, the 0D combustion prediction is used in combination with a 1D gas exchange analysis to virtually calibrate the modern engine torque structure and the ECU function for exhaust gas temperature with extensive simulations.
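The 0D entrainment approach mentioned in the abstract is commonly built on a Blizard/Keck-type two-equation burn model. Below is a heavily simplified sketch with constant, made-up parameters; the thesis derives these quantities from its quasi-dimensional k-epsilon turbulence model:

```python
import numpy as np

# Two-zone entrainment sketch: unburned mass is entrained through the flame
# front at the turbulent speed s_t and burns up with characteristic time
# tau = l_int / s_l. All values below are illustrative assumptions.
rho_u = 5.0       # unburned gas density [kg/m3]
s_l = 0.6         # laminar flame speed [m/s]
s_t = 6.0         # turbulent entrainment speed [m/s]
l_int = 2e-3      # integral length scale [m]
tau = l_int / s_l

dt, m_e, m_b, r_f = 1e-5, 0.0, 0.0, 2e-3   # time step, masses, flame radius
for _ in range(500):
    A_f = 4.0 * np.pi * r_f**2             # spherical flame front area
    m_e += rho_u * A_f * s_t * dt          # dm_e/dt = rho_u * A_f * s_t
    m_b += (m_e - m_b) / tau * dt          # dm_b/dt = (m_e - m_b) / tau
    r_f += s_t * dt                        # flame front propagation
print(f"entrained {m_e*1e3:.3f} g, burned {m_b*1e3:.3f} g")
```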
APA, Harvard, Vancouver, ISO, and other styles