To see the other types of publications on this topic, follow the link: Continuous estimation.

Dissertations / Theses on the topic 'Continuous estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Continuous estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Gonzalez Olivares, Daniel. "Estimation of cointegrated systems in continuous time." Thesis, University of Essex, 2017. http://repository.essex.ac.uk/21093/.

Full text
Abstract:
In this thesis we derive exact discrete time representation models that correspond to cointegrated systems in continuous time, and we outline estimation procedures for the parameters of those models. The representations are applicable to data observed as either stock or flow variables, and the performance of the estimation procedure is assessed with simulated data. More importantly, with the aim of analysing the costs, if any, of ignoring aggregation in the specification, the results of our estimation procedure are also compared with the ones we would have obtained by applying Johansen’s estimation methodology instead. In the first part (Chapter 2), we detail the analysis for a first-order stochastic differential equation system and outline baseline findings. In the second part (Chapter 3) the analysis is generalized to include not only higher-order specifications of the system but also deterministic components in it. Finally, in the last part (Chapter 4) of this thesis, three applications of the estimation procedure are presented. In the results, when the system is composed entirely of stock variables and the specification follows a first-order system, both Johansen’s methodology and ours perform very well, with virtually identical estimates and, for the simulated data, improvements as the sample size increases. However, when the variables of interest are flows or the specification follows a higher-order system, given that our exact discrete time representation includes moving average components in the error term, Johansen’s estimates show a persistent bias, reflecting the cost of ignoring aggregation in the specification.
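The abstract's central object is an exact discrete time representation of a continuous-time system. As a hedged illustration only (the scalar Ornstein-Uhlenbeck case, not the thesis's cointegrated system; function names are mine), the continuous-time drift maps exactly to an AR(1) coefficient, which can then be recovered from equally spaced samples:

```python
import numpy as np

def exact_discretisation(a, sigma, h):
    """Exact discrete-time AR(1) parameters for the scalar SDE
    dx = a*x dt + sigma dW observed at interval h:
    x_{t+h} = phi * x_t + eps,  eps ~ N(0, q)."""
    phi = np.exp(a * h)
    q = sigma**2 * (np.exp(2 * a * h) - 1) / (2 * a)
    return phi, q

def estimate_a(x, h):
    """Recover the continuous-time drift a from equally spaced samples
    via the least-squares AR(1) coefficient."""
    phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
    return np.log(phi_hat) / h
```

Because the discretisation is exact rather than approximate, no discretisation bias is introduced at any sampling interval h, which is the property the thesis exploits in the multivariate cointegrated setting.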
APA, Harvard, Vancouver, ISO, and other styles
2

Sinisalo, Jukka. "Estimation of greenhouse impacts of continuous regional emissions /." Espoo : Technical Research Centre of Finland, 1998. http://www.vtt.fi/inf/pdf/publications/1998/P338.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rahgozar, Mandana Seyed. "Estimation of evapotranspiration using continuous soil moisture measurement." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mujica, Fernando Alberto. "Spatio-temporal continuous wavelet transform for motion estimation." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/15001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Aimin. "Estimation of distribution algorithms for continuous multiobjective optimization." Thesis, University of Essex, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gillberg, Jonas. "Methods for frequency domain estimation of continuous-time models /." Linköping : univ, 2004. http://www.control.isy.liu.se.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sanyang, Momodou Lamin. "Large scale estimation of distribution algorithms for continuous optimisation." Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7847/.

Full text
Abstract:
Modern real-world optimisation problems are increasingly large scale, yet searching in high-dimensional spaces is notoriously difficult. Many methods break down as dimensionality increases, and the Estimation of Distribution Algorithm (EDA) is especially prone to the curse of dimensionality. In this thesis, we devise new EDA variants that are capable of searching in high-dimensional continuous domains. In particular, we (i) investigate heavy-tailed search distributions, (ii) clarify a controversy in the literature about the capabilities of Gaussian versus Cauchy search distributions, (iii) construct a new way of projecting a high-dimensional search space onto low-dimensional subspaces that gives us control over the size of the covariance of the search distribution, together with adaptation techniques to exploit this, and (iv) propose a random embedding technique in EDA that takes advantage of the low intrinsic dimensionality of problems. All these developments provide new techniques to tackle high-dimensional optimisation problems.
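As a loose sketch of the random-embedding idea in point (iv) above — not Sanyang's actual algorithm, just a minimal Gaussian EDA run inside a random low-dimensional subspace of a high-dimensional problem whose objective has low intrinsic dimensionality; all names and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere_low_dim(x):
    """100-D objective that only depends on its first two coordinates."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def eda_random_embedding(f, D=100, d=4, pop=60, elite=15, iters=80):
    """Gaussian EDA searching a random d-dimensional embedding x = A @ z
    of a D-dimensional space, exploiting low intrinsic dimensionality."""
    A = rng.normal(size=(D, d)) / np.sqrt(d)     # fixed random embedding
    mu, sd = np.zeros(d), 2.0 * np.ones(d)
    for _ in range(iters):
        Z = rng.normal(mu, sd, size=(pop, d))    # sample the search distribution
        scores = np.array([f(A @ z) for z in Z])
        elite_Z = Z[np.argsort(scores)[:elite]]  # truncation selection
        mu = elite_Z.mean(axis=0)                # refit the Gaussian model
        sd = np.maximum(elite_Z.std(axis=0), 1e-3)
    return A @ mu, f(A @ mu)

x_best, f_best = eda_random_embedding(sphere_low_dim)
```

The search distribution lives entirely in the d-dimensional embedded space, so its covariance stays small and well-conditioned regardless of D.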
APA, Harvard, Vancouver, ISO, and other styles
8

Woodson, David. "Precipitation Estimation Methods in Continuous, Distributed Urban Hydrologic Modeling." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90373.

Full text
Abstract:
Quantitative precipitation estimation (QPE) remains a key area of uncertainty in hydrological modeling, particularly in small, urban watersheds, which respond rapidly to precipitation and can experience significant spatial variability in rainfall fields. Few studies have compared QPE methods in small, urban watersheds, and those which have examined this topic only compared model results on an event basis using a small number of storms. This study sought to compare the efficacy of multiple QPE methods when simulating discharge in a small, urban watershed on a continuous basis using an operational hydrologic model and QPE forcings. The Research Distributed Hydrologic Model (RDHM) was used to model a basin in Roanoke, Virginia, USA, forced with QPEs from four methods: mean field bias (MFB) correction of radar data, kriging of rain gauge data, uncorrected radar data, and a basin-uniform estimate from a single gauge inside the watershed. Based on comparisons between simulated and observed discharge at the basin outlet for a 6-month period in 2018, simulations forced with the uncorrected radar QPE had the highest accuracy, as measured by root mean square error (RMSE) and peak flow relative error, despite systematic underprediction of the mean areal precipitation (MAP). Simulations forced with MFB-corrected radar data consistently and significantly overpredicted discharge but had the highest accuracy in predicting the timing of peak flows.

Master of Science

Estimating the amount of rain that fell during a precipitation event remains a key source of error when predicting how much stormwater runoff will be produced, particularly in small, urban watersheds, which respond rapidly to precipitation and can experience significant spatial variability in rainfall distribution. Rainfall estimation in small, urban watersheds has received relatively little attention, and studies which have examined this topic have generally only examined a small number of discrete storm events. This study sought to compare the efficacy of multiple precipitation estimation methods when simulating discharge in a small, urban watershed on a continuous basis using an operational hydrologic model and precipitation inputs. The Research Distributed Hydrologic Model (RDHM), commonly used by the National Weather Service, was used to model a basin in Roanoke, Virginia, USA, forced with rainfall estimates from four methods: mean field bias (MFB) correction of radar data, kriging of rain gauge data, uncorrected radar data, and a basin-uniform estimate from a single gauge inside the watershed. Based on comparisons between simulated and observed discharge at the basin outlet for a 6-month period in 2018, simulations forced with the uncorrected radar QPE had the highest accuracy, as measured by several performance statistics, despite systematic underprediction of actual precipitation. Simulations forced with MFB-corrected radar data consistently and significantly overpredicted discharge but had the highest accuracy in predicting the timing of peak flows.
APA, Harvard, Vancouver, ISO, and other styles
9

VARINI, ELISA. "Sequential estimation methods in continuous time state space models." Doctoral thesis, Università Bocconi, 2006. http://hdl.handle.net/11565/4050376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Invernizzi, C. "QUANTUM ESTIMATION AND DISCRIMINATION IN CONTINUOUS VARIABLE AND FERMIONIC SYSTEMS." Doctoral thesis, Università degli Studi di Milano, 2011. http://hdl.handle.net/2434/158085.

Full text
Abstract:
In this PhD thesis we address the problem of characterizing quantum states and parameters of systems that are of particular interest for quantum technologies. In the first part we consider continuous variable systems and in particular Gaussian states; we address the estimation of quantities characterizing single-mode Gaussian states as the displacement and squeezing parameter and we study the improvement in the parameter estimation by introducing a Kerr nonlinearity. Moreover, we address the discrimination of noisy channels by means of Gaussian states as probe states considering two problems: the detection of a lossy channel against the alternative hypothesis of an ideal lossless channel and the discrimination of two Gaussian noisy channels. In the last part of the thesis, we consider the one dimensional quantum Ising model in a transverse magnetic field. We exploit the recent results about the geometric approach to quantum phase transitions to derive the optimal estimator of the coupling constant of the model at zero and finite temperature in both cases of few spins and in the thermodynamic limit. We also analyze the effects of temperature and the scaling properties of the estimator of the coupling constant. Finally, we consider the discrimination problem for two ground states or two thermal states of the model.
APA, Harvard, Vancouver, ISO, and other styles
11

Mourikas, Georgios. "Modelling, estimation and optimisation of polymerisation processes." Thesis, University of Newcastle Upon Tyne, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Kim, Myung Suk. "Statistical testing and estimation in continuous time interest rate models." Diss., Texas A&M University, 2005. http://hdl.handle.net/1969.1/4189.

Full text
Abstract:
The shape of the drift function in continuous time interest rate models has been investigated by many authors during the past decade. The main concern has been whether the drift function is linear or nonlinear, but no convincing conclusions have been reached. In this dissertation, we investigate the reason for this and test several models of the drift function using a nonparametric test. Furthermore, we study some related problems, including the empirical properties of the nonparametric test. First, we propose regression models for the estimation of the drift function in some continuous time models. The limiting distribution of the parameter estimator in the proposed regression model is derived under certain conditions. Based on our analyses, we conclude that the effect of the drift function for some U.S. Treasury Bill yields data is negligible; therefore, neither linear nor nonlinear modeling has a significant effect. Second, the proposed parametric linear and nonlinear regression models are applied and their correctness is examined using the consistent nonparametric model specification test introduced by Li (1994) and Zheng (1996), henceforth the Jn test. The test results indicate that there is no strong statistical evidence against the assumed drift models; the constant drift model is not rejected either. Third, we compare the Jn and generalized likelihood ratio (GLR) tests through Monte Carlo simulation studies concerning whether the sizes of the tests are stable over a range of bandwidth values, an important indicator of the usefulness of nonparametric tests. The GLR test was applied to testing the linear drift function in continuous time models by Fan and Zhang (2003). Our simulation study shows that the GLR test does not provide stable sizes over a grid of bandwidth values when testing the drift function of some continuous time models, whereas the Jn test usually does.
APA, Harvard, Vancouver, ISO, and other styles
13

Pigorsch, Christian. "Estimation of continuous-time financial models using high-frequency data." Diss., [S.l.] : [s.n.], 2007. http://edoc.ub.uni-muenchen.de/archive/00007113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Elerian, Ola. "Simulation estimation of continuous-time models with applications to finance." Thesis, University of Oxford, 1999. https://ora.ox.ac.uk/objects/uuid:9538382d-5524-416a-8a95-1b820dd795e1.

Full text
Abstract:
Over recent years, we have witnessed a rapid development in the body of economic theory with applications to finance. It has had great success in finding theoretical explanations to economic phenomena. Typically, theories are employed that are defined by mathematical models. Finance in particular has drawn upon and developed the theory of stochastic differential equations. These produce elegant and tractable frameworks which help us to better understand the world. To directly apply such theories, the models must be assessed and their parameters estimated. Implementation requires the estimation of the model's elements using statistical techniques. These fit the model to the observed data. Unfortunately, existing statistical methods do not work satisfactorily when applied to many financial models. These methods, when applied to complex models often yield inaccurate results. Consequently, simpler analytical models are often preferred, but these are typically unrealistic representations of the underlying process, given the stylised facts reported in the literature. In practical applications, data is observed at discrete intervals and a discretisation is typically used to approximate the continuous-time model. This can lead to biased estimates, since the true underlying model is assumed continuous. This thesis develops new methods to estimate these types of models, with the objective of obtaining more accurate estimates of the underlying parameters present. The methods are applicable to general models. As the solution to the true continuous process is rarely known for these applications, the methods developed rely on building an Euler-Maruyama approximate model and using simulation techniques to obtain the distribution of the unknown quantities of interest. We propose to simulate the missing paths between the observed data points to reduce the bias from the approximate model. Alternatively, one could use a more sophisticated scheme to discretise the process. 
Unfortunately, implementing such schemes with simulation methods requires us to simulate from the density and to evaluate the density at any given point. This has until now only been possible for the Euler-Maruyama scheme. One contribution of the thesis is to show the existence of a closed form solution when the higher order Milstein scheme is used. The likelihood based method is implemented within the Bayesian paradigm since, in the context of these models, Bayesian methods are often analytically easier. Concerning the estimation methodology, emphasis is placed on simulation efficiency; the design and implementation of the method directly affect the accuracy and stability of the results. In conjunction with estimation, it is important to provide inference and diagnostic procedures. Meaningful information must be extracted from simulation results and summarised. This necessitates developing techniques to evaluate the plausibility, and hence the fit, of a particular model for a given dataset. An important aspect of model evaluation concerns the ability to compare model fit across a range of possible alternatives. The advantage of the Bayesian framework is that it allows comparison across non-nested models. The aim of the thesis is thus to provide an efficient estimation method for these continuous-time models that can be used to conduct meaningful inference, with performance assessed through the use of diagnostic tools.
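A minimal illustration of the discretisation bias this thesis addresses: the Euler-Maruyama scheme applied to an Ornstein-Uhlenbeck process (chosen only because its exact mean is known in closed form; this is not one of Elerian's models, and the function name is mine) shows how a coarse discretisation biases estimates and how refining the time step, analogous to simulating the missing paths between observations, reduces that bias:

```python
import numpy as np

def euler_mean(theta, sigma, x0, T, h, n_paths, seed=0):
    """Monte Carlo estimate of E[X_T] for the OU process
    dX = -theta*X dt + sigma dW, using Euler-Maruyama with step h."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / h))
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        # Euler-Maruyama update: drift * h + diffusion * sqrt(h) * N(0,1)
        x += -theta * x * h + sigma * np.sqrt(h) * rng.normal(size=n_paths)
    return x.mean()

# The exact mean of the OU process at time T is x0 * exp(-theta * T).
```

Halving the step repeatedly drives the simulated mean toward the exact value, which is why augmenting the observed data with simulated intermediate points reduces the bias of the approximate model.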
APA, Harvard, Vancouver, ISO, and other styles
15

McCrorie, James Roderick. "Some topics in the estimation of continuous time econometric models." Thesis, University of Essex, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Smith, Robert Anthony. "A General Model for Continuous Noninvasive Pulmonary Artery Pressure Estimation." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/3189.

Full text
Abstract:
Elevated pulmonary artery pressure (PAP) is a significant healthcare risk. Continuous monitoring for patients with elevated PAP is crucial for effective treatment, yet the most accurate method is invasive and expensive, and cannot be performed repeatedly. Noninvasive methods exist but are inaccurate, expensive, and cannot be used for continuous monitoring. We present a machine learning model based on heart sounds that estimates pulmonary artery pressure with enough accuracy to exclude an invasive diagnostic operation, allowing for consistent monitoring of heart condition in suspect patients without the cost and risk of invasive monitoring. We conduct a greedy search through 38 possible features using a 109-patient cross-validation to find the most predictive features. Our best general model has a standard error of the estimate (SEE) of 8.28 mmHg, which outperforms the previous best performance in the literature on a general set of unseen patient data.
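The greedy search through candidate features described above can be sketched, under the assumption of plain linear regression and k-fold cross-validation (Smith's actual features, model, and scoring are not reproduced here; names are illustrative):

```python
import numpy as np

def cv_rmse(X, y, folds=5):
    """RMSE of ordinary least squares under k-fold cross-validation."""
    idx = np.arange(len(y))
    errs = []
    for k in range(folds):
        test = idx[k::folds]
        train = np.setdiff1d(idx, test)
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        errs.append((Xte @ beta - y[test]) ** 2)
    return float(np.sqrt(np.mean(np.concatenate(errs))))

def greedy_select(X, y, max_feats=3):
    """Greedily add the feature that most reduces cross-validated RMSE."""
    chosen, remaining, best = [], list(range(X.shape[1])), np.inf
    while remaining and len(chosen) < max_feats:
        scores = {j: cv_rmse(X[:, chosen + [j]], y) for j in remaining}
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best:
            break  # no candidate improves the score: stop
        best = scores[j_best]
        chosen.append(j_best)
        remaining.remove(j_best)
    return chosen, best
```

Each round evaluates every remaining feature jointly with those already chosen, so the search can capture features that only help in combination.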
APA, Harvard, Vancouver, ISO, and other styles
17

Morton, Alexander Stuart. "Spectral analysis of irregularly sampled time series data using continuous time autoregressions." Thesis, Lancaster University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288870.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Ganesan, Aravind. "Capacity estimation and code design principles for continuous phase modulation (CPM)." College Station, Tex.: Texas A&M University, 2003. http://hdl.handle.net/1969.1/53.

Full text
Abstract:
Thesis (M.S.), Texas A&M University, 2003. Major Subject: Electrical Engineering. Title from author-supplied metadata (record created on Jul. 18, 2005). Vita. Abstract. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
19

Deshmukh, Wiwek. "A Physical Estimation based Continuous Monitoring Scheme for Wireless Sensor Networks." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_theses/45.

Full text
Abstract:
Data estimation is emerging as a powerful strategy for energy conservation in sensor networks. This thesis reports a technique, called Data Estimation using Physical Method (DEPM), that efficiently conserves battery power in an environment that may take a variety of complex manifestations in real situations. The methodology can be easily adapted, by altering the parameters of the algorithm, to address a multitude of tasks, and can be ported to any platform. The technique conserves the limited energy supply that runs a sensor network by enabling a large number of sensors to go to sleep, leaving a minimal set of active sensors to gather data and communicate it to a base station. DEPM rests on solving a set of linear inhomogeneous algebraic equations which are set up using well-established physical laws. The technique is powerful enough to yield data estimates at an arbitrary number of point-locations, and provides for easy experimental verification of the estimated data using only a few extra sensors.
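A toy version of the DEPM idea — estimating values at sleeping sensors by solving linear equations set up from a physical law — can be sketched for a 1-D chain of sensor positions obeying a steady-state diffusion law (an assumed stand-in for the thesis's actual physical model; names are illustrative):

```python
import numpy as np

def estimate_sleeping(n, known):
    """Estimate readings at sleeping nodes on a 1-D chain of n sensor
    positions, assuming a steady-state diffusion law: each interior value
    is the average of its neighbours. `known` maps node index -> reading."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if i in known:                        # active sensor: value is measured
            A[i, i] = 1.0
            b[i] = known[i]
        elif i == 0 or i == n - 1:            # unmetered endpoint: zero-flux law
            A[i, i] = 1.0
            A[i, 1 if i == 0 else n - 2] = -1.0
        else:                                 # interior: 2u_i - u_{i-1} - u_{i+1} = 0
            A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
    return np.linalg.solve(A, b)
```

With only two active sensors, the linear system fills in the whole field, which is exactly how a minimal active set can stand in for the sleeping majority.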
APA, Harvard, Vancouver, ISO, and other styles
20

Fenzi, Michele [Verfasser]. "Feature regression for continuous pose estimation of object categories / Michele Fenzi." Hannover : Technische Informationsbibliothek (TIB), 2016. http://d-nb.info/112203217X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Chambers, Marcus James. "Durability and consumers' demand : Gaussian estimation and some continuous time models." Thesis, University of Essex, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Abd, Gaus Yona Falinie. "Artificial intelligence system for continuous affect estimation from naturalistic human expressions." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16348.

Full text
Abstract:
The analysis and automatic estimation of affect from human expressions is acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expressions under very controlled conditions, so they are not robust enough for real-life recognition tasks with subject variation, acoustic surroundings and illumination changes. In this thesis, an artificial intelligence system is proposed to continuously estimate affect behaviour (represented along a continuum, e.g., from -1 to +1) in terms of latent dimensions (e.g., arousal and valence) from naturalistic human expressions. To tackle these issues, feature representation and machine learning strategies are addressed. In feature representation, human expression is represented by modalities such as audio, video, physiological signals and text. Hand-crafted features are extracted from each modality per frame, in order to match the consecutive affect labels. However, the extracted features may be missing information due to factors such as background noise or lighting conditions. The Haar wavelet transform is employed to determine whether a noise cancellation mechanism in feature space should be considered in the design of the affect estimation system. Besides hand-crafted features, deep learning features are also analysed layer-wise, across convolutional and fully connected layers. Convolutional neural networks such as AlexNet, VGGFace and ResNet are selected as deep learning architectures for feature extraction on facial expression images. A multimodal fusion scheme is then applied, fusing deep learning and hand-crafted features together to improve performance. In machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect per time step.
Then, in the second stage, subsequent models such as the Time Delay Neural Network, Long Short-Term Memory and the Kalman filter are proposed to model the temporal relationships between consecutive estimates of each affect. In doing so, the temporal information employed by a subsequent model is not biased by the high variability present in consecutive frames and, at the same time, the network can exploit the slowly changing emotional dynamics more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in the wild is addressed by investigating mathematical models for each emotion dimension: Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming are implemented to quantify the relationship between the modalities. In summary, the research work presented in this thesis provides a fundamental approach to automatically and continuously estimate affect values from naturalistic human expressions. The proposed system, which consists of feature smoothing, deep learning features, a two-stage regression framework and fusion between modalities via mathematical equations, offers a strong basis for developing artificial intelligence systems for continuous affect estimation and, more broadly, for building real-time emotion recognition systems for human-computer interaction.
APA, Harvard, Vancouver, ISO, and other styles
23

Sigal, Leonid. "Continuous-state graphical models for object localization, pose estimation and tracking." View abstract/electronic edition; access limited to Brown University users, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3318361.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Azevedo, Glauco Gomes de. "Distance estimation for mixed continuous and categorical data with missing values." reponame:Repositório Institucional do FGV, 2018. http://hdl.handle.net/10438/24742.

Full text
Abstract:
In this work we propose a methodology to estimate the pairwise distance between mixed continuous and categorical data points with missing values. Distance estimation is the basis for many regression/classification methods, such as nearest neighbours and discriminant analysis, and for clustering techniques such as k-means and k-medoids. Classical methods for handling missing data rely on mean imputation, which can underestimate the variance, or on regression-based imputation methods. Unfortunately, when the goal is to estimate the distance between observations, data imputation may perform badly and bias the results towards the imputation model. In this work we estimate the pairwise distances directly, treating the missing data as random. The joint distribution of the data is approximated using a multivariate mixture model for mixed continuous and categorical data. We present an EM-type algorithm for estimating the mixture and a general methodology for estimating the distance between observations. Simulations show that the proposed method performs well on both simulated and real data.
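The core idea — estimating expected distances directly rather than imputing — can be sketched under a deliberately simplified model: a single Gaussian with diagonal covariance instead of the thesis's mixture model for mixed data (function and variable names are illustrative):

```python
import numpy as np

def expected_sq_dist(x, y, mu, s2):
    """Expected squared Euclidean distance between two observations with
    NaN-coded missing entries, treating each missing coordinate as an
    independent draw from N(mu_i, s2_i)."""
    d = 0.0
    for xi, yi, m, v in zip(x, y, mu, s2):
        mx, my = np.isnan(xi), np.isnan(yi)
        if not mx and not my:
            d += (xi - yi) ** 2          # both observed: plain distance
        elif mx and my:
            d += 2.0 * v                 # Var(X) + Var(Y); the means cancel
        else:
            obs = yi if mx else xi
            d += (m - obs) ** 2 + v      # squared bias + variance of the draw
    return d
```

Unlike mean imputation, the variance terms keep partially observed pairs from looking artificially close, which is the bias the abstract warns about.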
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Yan. "Statistical inference for capture-recapture studies in continuous time /." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23501765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Lee, Sanghoon. "Econometrics of jump-diffusion processes : approximation, estimation and forecasting." Thesis, University of Southampton, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Bederman, S. Samuel. "Estimation methods in random coefficient regression for continuous and binary longitudinal data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0004/MQ29203.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Irshad, Yasir. "On some continuous-time modeling and estimation problems for control and communication." Doctoral thesis, Karlstads universitet, Institutionen för ingenjörsvetenskap och fysik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-26129.

Full text
Abstract:
The scope of the thesis is to estimate the parameters of continuous-time models used within control and communication from sampled data with high accuracy and in a computationally efficient way. In the thesis, continuous-time models of systems controlled in a networked environment, errors-in-variables systems, stochastic closed-loop systems, and wireless channels are considered. The parameters of a transfer-function-based model for the process in a networked control system are estimated by a covariance function based approach relying upon the second-order statistical properties of input and output signals. Some other approaches for estimating the parameters of continuous-time models for processes in networked environments are also considered. The multiple-input multiple-output errors-in-variables problem is solved by means of a covariance matching algorithm. An analysis of a covariance matching method for single-input single-output errors-in-variables system identification is also presented. The parameters of continuous-time autoregressive exogenous models are estimated from closed-loop filtered data, where the controllers in the closed loop are of proportional and proportional-integral type, and where the closed loop also contains a time delay. A stochastic differential equation is derived for Jakes's wireless channel model, describing the dynamics of a scattered electric field with the moving receiver incorporating a Doppler shift.

The thesis consists of five main parts, where the first part is an introduction. Parts II-V are based on the following articles:

Part II - Networked Control Systems

1. Y. Irshad, M. Mossberg and T. Söderström. System identification in a networked environment using second order statistical properties. A version without all appendices is published as Y. Irshad, M. Mossberg and T. Söderström. System identification in a networked environment using second order statistical properties. Automatica, 49(2), pages 652-659, 2013. Some preliminary results are also published as M. Mossberg, Y. Irshad and T. Söderström. A covariance function based approach to networked system identification. In Proc. 2nd IFAC Workshop on Distributed Estimation and Control in Networked Systems, pages 127-132, Annecy, France, September 13-14, 2010.

2. Y. Irshad and M. Mossberg. Some parameter estimation methods applied to networked control systems. A journal submission is made. Some preliminary results are published as Y. Irshad and M. Mossberg. A comparison of estimation concepts applied to networked control systems. In Proc. 19th Int. Conf. on Systems, Signals and Image Processing, pages 120-123, Vienna, Austria, April 11-13, 2012.

Part III - Errors-in-variables Identification

3. Y. Irshad and M. Mossberg. Continuous-time covariance matching for MIMO EIV system identification. A journal submission is made.

4. T. Söderström, Y. Irshad, M. Mossberg and W. X. Zheng. On the accuracy of a covariance matching method for continuous-time EIV identification. Provisionally accepted for publication in Automatica. Some preliminary results are published as T. Söderström, Y. Irshad, M. Mossberg, and W. X. Zheng. Accuracy analysis of a covariance matching method for continuous-time errors-in-variables system identification. In Proc. 16th IFAC Symp. System Identification, pages 1383-1388, Brussels, Belgium, July 11-13, 2012.

Part IV - Wireless Channel Modeling

5. Y. Irshad and M. Mossberg. Wireless channel modeling based on stochastic differential equations. Some results are published as M. Mossberg and Y. Irshad. A stochastic differential equation for wireless channels based on Jakes's model with time-varying phases. In Proc. 13th IEEE Digital Signal Processing Workshop, pages 602-605, Marco Island, FL, January 4-7, 2009.

Part V - Closed-loop Identification

6. Y. Irshad and M. Mossberg. Closed-loop identification of P- and PI-controlled time-delayed stochastic systems. Some results are published as M. Mossberg and Y. Irshad. Closed-loop identification of stochastic models from filtered data. In Proc. IEEE Multi-conference on Systems and Control, San Antonio, TX, September 3-5, 2008.
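The covariance-matching theme running through Parts II and III can be illustrated, as a minimal sketch only (not the algorithms of these papers), on the simplest possible case: recovering the coefficient of a discrete-time AR(1) process by matching sample covariances to their theoretical counterparts, since for AR(1) the theory gives r(1) = a·r(0).

```python
import numpy as np

# Toy covariance matching: for x[k] = a*x[k-1] + e[k], the theoretical
# covariances satisfy r(1) = a * r(0), so matching sample covariances to
# the model ones recovers the parameter directly.
def ar1_covariance_match(x):
    """Estimate the AR(1) coefficient from lag-0 and lag-1 sample covariances."""
    x = np.asarray(x) - np.mean(x)
    r0 = np.dot(x, x) / len(x)            # sample covariance at lag 0
    r1 = np.dot(x[:-1], x[1:]) / len(x)   # sample covariance at lag 1
    return r1 / r0

rng = np.random.default_rng(0)
a_true = 0.8
e = rng.standard_normal(100_000)
x = np.zeros_like(e)
for k in range(1, len(e)):
    x[k] = a_true * x[k - 1] + e[k]

a_hat = ar1_covariance_match(x)
```

The same idea scales to the continuous-time and errors-in-variables settings of the papers, where more covariance lags are matched than there are parameters and the fit is done in a least-squares sense.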
APA, Harvard, Vancouver, ISO, and other styles
29

Yiin, Lihbor. "Sequence estimation receivers for trellis-coded continuous phase modulation on mobile channels." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/14818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Foster, Miranda Jane. "Continuous time estimation and its application to active mixing volume (AMV) models." Thesis, Lancaster University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306880.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Acciaroli, Giada. "Calibration of continuous glucose monitoring sensors by time-varying models and Bayesian estimation." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3425746.

Full text
Abstract:
Minimally invasive continuous glucose monitoring (CGM) sensors are wearable medical devices that provide frequent (e.g., 1-5 min sampling rate) real-time measurements of glucose concentration for several consecutive days. This can be of great help in the daily management of diabetes. Most of the CGM systems commercially available today have a wire-based electrochemical sensor, usually placed in the subcutaneous tissue, which measures a "raw" electrical current signal via a glucose-oxidase electrochemical reaction. Observations of the raw electrical signal are frequently revealed by the sensor on a fine, uniformly spaced, time grid. These samples of electrical nature are in real-time converted to interstitial glucose (IG) concentration levels through a calibration process by fitting a few blood glucose (BG) concentration measurements, sparsely collected by the patient through fingerprick. Usually, for coping with such a process, CGM sensor manufacturers employ linear calibration models to approximate, albeit in limited time-intervals, the nonlinear relationship between electrical signal and glucose concentration. Thus, on the one hand, frequent calibrations (e.g., two per day) are required to guarantee a good sensor accuracy. On the other, each calibration requires patients to add uncomfortable extra actions to the many already needed in the routine of diabetes management. The aim of this thesis is to develop new calibration algorithms for minimally invasive CGM sensors able to ensure good sensor accuracy with the minimum number of calibrations. 
In particular, we propose i) to replace the time-invariant gain and offset conventionally used by the linear calibration models with more sophisticated time-varying functions valid for multiple-day periods, with unknown model parameters for which an a priori statistical description is available from independent training sets; ii) to numerically estimate the calibration model parameters by means of a Bayesian estimation procedure that exploits the a priori information on model parameters in addition to some BG samples sparsely collected by the patient. The thesis is organized in 6 chapters. In Chapter 1, after a background introduction on CGM sensor technologies, the calibration problem is illustrated. Then, some state-of-the-art calibration techniques are briefly discussed with their open problems, which result in the aims of the thesis illustrated at the end of the chapter. In Chapter 2, the datasets used for the implementation of the calibration techniques are described, together with the performance metrics and the statistical analysis tools which will be employed to assess the quality of the results. In Chapter 3, we illustrate a recently proposed calibration algorithm (Vettoretti et al., IEEE Trans Biomed Eng 2016), which represents the starting point of the study proposed in this thesis. In particular, we demonstrate that, thanks to the development of a time-varying day-specific Bayesian prior, the algorithm becomes able to reduce the calibration frequency from two to one per day. However, the linear calibration model used by the algorithm has a domain of validity limited to certain time intervals, which does not allow calibrations to be reduced to less than one per day and calls for the development of a new calibration model valid for multiple-day periods, like the one developed in the remainder of this thesis.
In Chapter 4, a novel Bayesian calibration algorithm working in a multi-day framework (referred to as the Bayesian multi-day, BMD, calibration algorithm) is presented. It is based on a multiple-day model of sensor time-variability with second-order statistical priors on its unknown parameters. In each patient-sensor realization, the numerical values of the calibration model parameters are determined by a Bayesian estimation procedure exploiting the BG samples sparsely collected by the patient. In addition, the distortion introduced by the BG-to-IG kinetics is compensated during parameter identification via non-parametric deconvolution. The BMD calibration algorithm is applied to two datasets acquired with the "present-generation" Dexcom (Dexcom Inc., San Diego, CA) G4 Platinum (DG4P) CGM sensor and a "next-generation" Dexcom CGM sensor prototype (NGD). In the DG4P dataset, results show that, despite the reduction of calibration frequency (on average from 2 per day to 0.25 per day), the BMD calibration algorithm significantly improves sensor accuracy compared to the manufacturer calibration algorithm. In the NGD dataset, performance is even better than that of the present generation, allowing calibrations to be reduced further, toward zero. In Chapter 5, we analyze the potential margins for improvement of the BMD calibration algorithm and propose a further extension of the method. In particular, to cope with the inter-sensor and inter-subject variability, we propose a multi-model approach and a Bayesian model selection framework (referred to as the multi-model Bayesian framework, MMBF) in which the most likely calibration model is chosen among a finite set of candidates. A preliminary assessment of the MMBF is conducted on synthetic data generated by a well-established type 1 diabetes simulation model. Results show a statistically significant accuracy improvement compared to the use of a unique calibration model.
Finally, the major findings of the work carried out in this thesis, possible applications and margins for improvement are summarized in Chapter 6.
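The Bayesian estimation step of a calibration algorithm can be sketched in its simplest form. This is a minimal illustration, not the BMD algorithm: a linear model glucose ≈ gain·current + offset with Gaussian priors on the two parameters, estimated by maximum a posteriori (MAP), which here has a closed ridge-regression-like form. All numbers below are synthetic placeholders.

```python
import numpy as np

# Minimal sketch (not the thesis algorithm): MAP estimation of a linear
# sensor-calibration model  glucose ~ gain * current + offset  with
# Gaussian priors on the two parameters and Gaussian measurement noise.
def map_calibration(current, glucose, prior_mean, prior_cov, noise_var):
    A = np.column_stack([current, np.ones_like(current)])
    P_inv = np.linalg.inv(prior_cov)
    lhs = A.T @ A / noise_var + P_inv
    rhs = A.T @ glucose / noise_var + P_inv @ prior_mean
    return np.linalg.solve(lhs, rhs)   # posterior mode: [gain, offset]

rng = np.random.default_rng(1)
true_gain, true_offset = 2.0, 10.0
current = rng.uniform(20.0, 80.0, size=6)    # few sparse calibration points
glucose = true_gain * current + true_offset + rng.normal(0.0, 5.0, size=6)

theta = map_calibration(current, glucose,
                        prior_mean=np.array([2.0, 10.0]),   # illustrative prior
                        prior_cov=np.diag([0.25, 25.0]),
                        noise_var=25.0)
```

The prior pulls the estimate toward population-level values when only a handful of fingerprick measurements are available, which is what allows the calibration frequency to be reduced.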
APA, Harvard, Vancouver, ISO, and other styles
32

Fan, Yiying. "Covariance estimation and application to building a new control chart." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1291406214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Chang, Shu-Ching. "Antedependence Models for Skewed Continuous Longitudinal Data." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4827.

Full text
Abstract:
This thesis explores the problems of fitting antedependence (AD) models and partial antecorrelation (PAC) models to continuous non-Gaussian longitudinal data. AD models impose certain conditional independence relations among the measurements within each subject, while PAC models characterize the partial correlation relations. The models are parsimonious and useful for data exhibiting time-dependent correlations. Since the relation of conditional independence among variables is rather restrictive, we first consider an autoregressively characterized PAC model with independent asymmetric Laplace (ALD) innovations and prove that this model is an AD model. The ALD distribution previously has been applied to quantile regression and has shown promise for modeling asymmetrically distributed ecological data. In addition, the double exponential distribution, a special case of the ALD, has played an important role in fitting symmetric finance and hydrology data. We give the distribution of a linear combination of independent standard ALD variables in order to derive marginal distributions for the model. For the model estimation problem, we propose an iterative algorithm for the maximum likelihood estimation. The estimation accuracy is illustrated by some numerical examples as well as some longitudinal data sets. The second component of this dissertation focuses on AD multivariate skew normal models. The multivariate skew normal distribution not only shares some nice properties with multivariate normal distributions but also allows for any value of skewness. We derive necessary and sufficient conditions on the shape and covariance parameters for multivariate skew normal variables to be AD(p) for some p. Likelihood-based estimation for balanced and monotone missing data as well as likelihood ratio hypothesis tests for the order of antedependence and for zero skewness under the models are presented. 
Since the class of skew normal random variables is closed under the addition of independent standard normal random variables, we then consider an autoregressively characterized PAC model with a combination of independent skew normal and normal innovations. Explicit expressions for the marginals, which all have skew normal distributions, and maximum likelihood estimates of model parameters, are given. Numerical results show that these three proposed models may provide reasonable fits to some continuous non-Gaussian longitudinal data sets. Furthermore, we compare the fits of these models to the Treatment A cattle growth data using penalized likelihood criteria, and demonstrate that the AD(2) multivariate skew normal model fits the data best among those proposed models.
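The asymmetric Laplace innovations mentioned above admit a simple construction as the difference of two independent exponential variables with different rates, which makes the asymmetry easy to control and simulate. The sketch below (rates chosen for illustration only) checks the mean against theory and the direction of the skew.

```python
import numpy as np

# One standard construction: an asymmetric Laplace variable is the
# difference of two independent exponentials with different rates.
def asymmetric_laplace(rng, rate_pos, rate_neg, size):
    return (rng.exponential(1.0 / rate_pos, size)
            - rng.exponential(1.0 / rate_neg, size))

rng = np.random.default_rng(2)
x = asymmetric_laplace(rng, rate_pos=1.0, rate_neg=3.0, size=200_000)

mean = x.mean()                                  # theory: 1/1 - 1/3 = 2/3
skew = ((x - mean) ** 3).mean() / x.std() ** 3   # positive: long right tail
```

With a heavier positive component (smaller rate_pos), the distribution has a long right tail and positive skewness, which is the kind of asymmetry the ALD innovations are used to capture.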
APA, Harvard, Vancouver, ISO, and other styles
34

Bohn, Christian. "Recursive parameter estimation for nonlinear continuous time systems through sensitivity model based adaptive filters." [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=959575170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Katsikatsou, Myrsini. "Composite Likelihood Estimation for Latent Variable Models with Ordinal and Continuous, or Ranking Variables." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-188342.

Full text
Abstract:
The estimation of latent variable models with ordinal and continuous, or ranking variables is the research focus of this thesis. The existing estimation methods are discussed and a composite likelihood approach is developed. The main advantages of the new method are its low computational complexity, which remains unchanged regardless of the model size, and that it yields an asymptotically unbiased, consistent, and normally distributed estimator. The thesis consists of four papers. The first one investigates the two main formulations of the unrestricted Thurstonian model for ranking data along with the corresponding identification constraints. It is found that the extra identification constraints required in one of them lead to unreliable estimates unless the constraints coincide with the true values of the fixed parameters. In the second paper, a pairwise likelihood (PL) estimation is developed for factor analysis models with ordinal variables. The performance of PL is studied in terms of bias and mean squared error (MSE) and compared with that of the conventional estimation methods via a simulation study and through some real data examples. It is found that the PL estimates and standard errors have very small bias and MSE, both decreasing with the sample size, and that the method is competitive with the conventional ones. The results of the first two papers lead to the next one, where PL estimation is adapted to the unrestricted Thurstonian ranking model. As before, the performance of the proposed approach is studied through a simulation study with respect to relative bias and relative MSE and in comparison with the conventional estimation methods. The conclusions are similar to those of the second paper. The last paper extends the PL estimation to the whole structural equation modeling framework, where data may include both ordinal and continuous variables as well as covariates. The approach is demonstrated through an example run in R software.
The code used has been incorporated in the R package lavaan (version 0.5-11).
APA, Harvard, Vancouver, ISO, and other styles
36

Runeskog, Henrik. "Continuous Balance Evaluation by Image Analysis of Live Video : Fall Prevention Through Pose Estimation." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297541.

Full text
Abstract:
The deep learning technique Human Pose Estimation (or Human Keypoint Detection) is a promising approach to tracking a person and identifying their posture. As posture and balance are two closely related concepts, human pose estimation can be applied to fall prevention. By deriving the location of a person's Center of Mass, and from it the Center of Pressure, one can evaluate a person's balance without force plates or sensors, using cameras alone. In this study, a human pose estimation model together with a predefined human weight distribution model were used to extract the location of a person's Center of Pressure in real time. The proposed method utilized two different ways of acquiring depth information from the frames: stereoscopy through two RGB cameras, and the use of one RGB-depth camera. The estimated location of the Center of Pressure was compared to the location of the same parameter extracted with the force plate Wii Balance Board. As the proposed method was to operate in real time and without the use of computational processor enhancement, the human pose estimation model was chosen to maximize software input/output speed. Thus, three models were used: a smaller and faster model called Lightweight Pose Network, a larger and more accurate model called High-Resolution Network, and a model placing itself somewhere in between the two, namely Pose Residual Network. The proposed method showed promising results for a real-time method of acquiring balance parameters, although the largest source of error was the acquisition of depth information from the cameras.
The results also showed that using a smaller and faster human pose estimation model proved to be sufficient, relative to the larger, more accurate models, for real-time usage without the use of computational processor enhancement.
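The core computation described above, deriving a Center of Mass from pose keypoints via a predefined weight distribution, is a weighted average of segment positions. The sketch below is illustrative only: the segment mass fractions are placeholder values, not the anthropometric table used in the thesis.

```python
import numpy as np

# Illustrative sketch: Center of Mass as the mass-fraction-weighted average
# of segment keypoint positions. These fractions are placeholders for
# illustration, not the thesis's weight distribution model.
SEGMENT_MASS_FRACTION = {
    "head": 0.08, "trunk": 0.50, "arms": 0.10, "legs": 0.32,
}

def center_of_mass(keypoints):
    """keypoints: dict mapping segment name -> (x, y) position in metres."""
    total = sum(SEGMENT_MASS_FRACTION.values())
    com = np.zeros(2)
    for name, frac in SEGMENT_MASS_FRACTION.items():
        com += (frac / total) * np.asarray(keypoints[name], dtype=float)
    return com

# Person standing upright: all segments on the vertical axis x = 0.
pose = {"head": (0.0, 1.7), "trunk": (0.0, 1.1),
        "arms": (0.0, 1.2), "legs": (0.0, 0.5)}
com = center_of_mass(pose)
```

Projecting the Center of Mass to the ground (and, dynamically, correcting for acceleration) then gives an estimate of the Center of Pressure that can be compared against a force plate.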
APA, Harvard, Vancouver, ISO, and other styles
37

Zareba, Sebastian [Verfasser]. "Modeling, parameter estimation, and optimization of continuous annealing furnaces in strip rolling lines / Sebastian Zareba." Aachen : Shaker, 2017. http://d-nb.info/1138178756/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Collins, Donovan (Donovan Scott). "Feature-based investment cost estimation based on modular design of a continuous pharmaceutical manufacturing system." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66063.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering; in conjunction with the Leaders for Global Operations Program at MIT, June 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 72-73).
Previous studies of continuous manufacturing processes have used equipment-factored cost estimation methods to predict savings in initial plant investment costs. In order to challenge and validate the existing methods of cost estimation, feature-based cost estimates were constructed based on a modular process design model. Synthesis of an existing chemical intermediate was selected as the model continuous process. A continuous process was designed that was a literal, step-by-step translation of the batch process. Supporting design work included process flow diagrams and basic piping and instrumentation diagrams. Design parameters from the process model were combined with feature-based costs to develop a series of segmented cost estimates for the model continuous plant at several production scales. Based on this analysis, the continuous facility seems to be intrinsically less expensive only at a relatively high production scale. Additionally, the distribution of cost areas for the continuous facility differs significantly from the distribution previously assumed for batch plants. This finding suggests that current models may not be appropriate for generating cost estimates for continuous plants. These results should not have a significant negative impact on the value proposition for the continuous manufacturing platform. The continuous process designed for this project was not optimized. Therefore, this work reiterates that the switch to continuous must be accompanied by optimization and innovation in the underlying continuous chemistry.
by Donovan Collins. S.M. M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
39

Henry, Janell Christine. "Flow estimation for stream restoration and wetland projects in ungaged watersheds using continuous simulation modeling." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/22021.

Full text
Abstract:
More than a billion dollars are spent annually on stream restoration in the United States (Bernhardt et al., 2005), but the science remains immature. A promising technique for estimating a single design discharge, or a range of them, is the generalization of a parsimonious conceptual continuous simulation model. In this study, the Probability Distributed Model (PDM) was generalized for the Maryland and Virginia Piedmont. Two hundred and sixty years of daily average flow data from fifteen watersheds were used to calibrate PDM. Because the application of the study is to stream restoration, the model was calibrated to discharges greater than two times baseflow and less than flows with a return period of ten years. The hydrologic calibration parameters were related to watershed characteristics through regression analysis, and these equations were used to calculate regional model parameters based on watershed characteristics for a single "ungaged" independent evaluation watershed in the region. Simulated flow was compared to observed flow; the model simulated discharges of lower return periods moderately well (e.g., within 13% of observed for a flow with a five-year return period). These results indicate this technique may be useful for stream restoration and wetland design.
Master of Science
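The regionalization step described above, relating calibrated model parameters to watershed characteristics by regression and then predicting parameters for an ungaged site, can be sketched as follows. The characteristics, coefficients, and data values are illustrative inventions, not the thesis's regressions.

```python
import numpy as np

# Sketch of regionalization: regress a calibrated model parameter on
# watershed characteristics, then predict it for an "ungaged" watershed.
# All values are illustrative; the fit is noiseless so recovery is exact.
area = np.array([10., 25., 40., 55., 70., 90., 120., 150.])        # km^2
imperv = np.array([0.05, 0.10, 0.08, 0.20, 0.15, 0.25, 0.30, 0.35])  # fraction
param = 2.0 + 0.01 * area + 3.0 * imperv     # "calibrated" parameter values

X = np.column_stack([np.ones_like(area), area, imperv])
coef, *_ = np.linalg.lstsq(X, param, rcond=None)

ungaged = np.array([1.0, 60.0, 0.12])   # [intercept, area, imperviousness]
param_pred = ungaged @ coef             # regional estimate for the ungaged site
```

In practice the calibrated parameters carry noise, so the regression quality (and hence the regional equations' usefulness) has to be checked on an independent evaluation watershed, as the study does.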
APA, Harvard, Vancouver, ISO, and other styles
40

Barceló, Rico Fátima. "Multimodel Approaches for Plasma Glucose Estimation in Continuous Glucose Monitoring. Development of New Calibration Algorithms." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/17173.

Full text
Abstract:
Diabetes Mellitus (DM) embraces a group of metabolic diseases whose main characteristic is the presence of high glucose levels in blood. It is one of the diseases with the greatest social and health impact, both for its prevalence and for the consequences of the chronic complications that it implies. One of the research lines to improve the quality of life of people with diabetes is of technical focus. It involves several lines of research, including the development and improvement of devices to estimate "online" plasma glucose: continuous glucose monitoring systems (CGMS), both invasive and non-invasive. These devices estimate plasma glucose from sensor measurements from compartments alternative to blood. Current commercially available CGMS are minimally invasive and offer an estimation of plasma glucose from measurements in the interstitial fluid. CGMS are a key component of the technical approach to building the artificial pancreas, aiming at closing the loop in combination with an insulin pump. Yet, the accuracy of current CGMS is still poor, and it may partly depend on the low performance of the implemented Calibration Algorithm (CA). In addition, the sensor-to-patient sensitivity differs between patients and also, over time, for the same patient. It is clear, then, that the development of new efficient calibration algorithms for CGMS is an interesting and challenging problem. The indirect measurement of plasma glucose through interstitial glucose is a main confounder of CGMS accuracy. Many components take part in the glucose transport dynamics. Indeed, physiology might suggest the existence of different local behaviors in the glucose transport process. For this reason, local modeling techniques may be the best option for the structure of the desired CA. Thus, similar input samples are represented by the same local model. The integration of all of them, considering the input regions where they are valid, is the final model of the whole data set.
Clustering is t
Barceló Rico, F. (2012). Multimodel Approaches for Plasma Glucose Estimation in Continuous Glucose Monitoring. Development of New Calibration Algorithms [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17173
APA, Harvard, Vancouver, ISO, and other styles
41

Frye, Elora. "Material Thermal Property Estimation of Fibrous Insulation: Heat Transfer Modeling and the Continuous Genetic Algorithm." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5433.

Full text
Abstract:
Material thermal properties are highly sought after to better understand the performance of a material under particular conditions. As new materials are created, their physical properties will determine their performance for various applications. These properties have been estimated using many techniques including experimental testing, numerical modeling, and a combination of both. Existing methods can be time consuming, thus, a time-efficient and precise method to estimate these thermal properties was desired. A one-dimensional finite difference numerical model was developed to replicate the heat transfer through an experimental apparatus. A combination of this numerical model and the Continuous Genetic Algorithm optimization technique was used to estimate material thermal properties of fibrous insulation from test data. The focus of this work was to predict these material thermal properties for an Alumina Paper that is commonly used in aerospace applications. The background, methodology, and results are discussed.
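A continuous (real-coded) genetic algorithm of the kind named in the title can be sketched as follows. This is a hedged illustration: a toy quadratic mismatch stands in for the heat-transfer model's error function, and the population size, selection scheme, and mutation scale are illustrative choices, not the thesis's settings.

```python
import numpy as np

# Minimal real-coded ("continuous") genetic algorithm: truncation selection,
# blend crossover, Gaussian mutation, and elitism. The objective here is a
# toy stand-in for the mismatch between simulated and measured temperatures.
def continuous_ga(objective, bounds, pop_size=40, generations=60, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        fitness = np.array([objective(p) for p in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # keep best half
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        w = rng.uniform(0.0, 1.0, size=(pop_size, 1))
        children = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]
        children += rng.normal(0.0, 0.02 * (hi - lo), size=children.shape)
        children = np.clip(children, lo, hi)
        children[0] = parents[0]                              # elitism
        pop = children
    fitness = np.array([objective(p) for p in pop])
    return pop[np.argmin(fitness)]

# Toy "property recovery": conductivity ~0.04 W/(m K), heat capacity ~850 J/(kg K)
target = np.array([0.04, 850.0])
obj = lambda p: float(np.sum(((p - target) / target) ** 2))
best = continuous_ga(obj, bounds=[(0.01, 0.1), (500.0, 1500.0)],
                     rng=np.random.default_rng(3))
```

In the actual application, evaluating `objective` means running the finite difference heat-transfer model with the candidate properties and comparing against test data, which is why the GA's derivative-free search is attractive.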
APA, Harvard, Vancouver, ISO, and other styles
42

Kukreja, Sunil L. "Method for estimation of continuous-time models of linear time-invariant systems via the bilinear transform." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ29861.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Scholz, Markus [Verfasser], and V. [Akademischer Betreuer] Fasen-Hartmann. "Estimation of Cointegrated Multivariate Continuous-Time Autoregressive Moving Average Processes / Markus Scholz. Betreuer: V. Fasen-Hartmann." Karlsruhe : KIT-Bibliothek, 2016. http://d-nb.info/1112224866/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kukreja, Sunil L. "Method for estimation of continuous-time models of linear time-invariant systems via the bilinear transform." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=27486.

Full text
Abstract:
In this thesis, we develop a technique which is capable of identifying and rejecting the sampling zeros for continuous-time linear time-invariant systems. The method makes use of the MOESP family of identification algorithms (59), (60), (61), (62) to obtain a discrete-time model of the system. Since the structure and parameters of discrete-time models are difficult to relate to the underlying continuous-time system of interest, it is necessary to compute the continuous-time equivalent from the discrete-time model. The bilinear transform (19), (54) does this effectively but also converts extraneous system zeros (the so-called process or sampling zeros) to the continuous-time model. Here, we present an approach to distinguish which of the system zeros are due to sampling effects and which are truly part of the model's dynamics. Dropping sampling zeros yields an accurate description of the system under test. Digital simulations demonstrate that this method is robust in the presence of measurement noise. Moreover, a series of experiments performed on a known, physical, linear system validates the simulation results. Finally, an investigation of the passive dynamics of the ankle was used to demonstrate the applicability of this method to a physiological system.
APA, Harvard, Vancouver, ISO, and other styles
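The key mapping behind the abstract above is the bilinear transform, which sends a discrete-time root z to a continuous-time root s via s = (2/T)(z − 1)/(z + 1). A small numerical sketch (an illustration of the transform itself, not the thesis's rejection procedure) shows why sampling zeros stand out: a genuine pole z = exp(aT) maps back close to a, while a sampling zero near z = −1 maps far into the left half-plane:

```python
import math

def bilinear_root(z, T):
    # Bilinear (Tustin) mapping of a single discrete-time root to continuous time.
    return (2.0 / T) * (z - 1.0) / (z + 1.0)

T = 0.01
a = -3.0                      # continuous-time pole of the underlying system
z_pole = math.exp(a * T)      # its discrete-time image under sampling
s_pole = bilinear_root(z_pole, T)      # maps back very close to a = -3

z_sampling_zero = -0.9        # typical sampling zero, close to z = -1
s_spurious = bilinear_root(z_sampling_zero, T)  # lands far in the left half-plane
```

The gap between the recovered dynamics (near −3) and the spurious root (orders of magnitude larger in modulus) is one simple heuristic for telling the two apart.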
45

Aling, Peter. "Gaussian estimation of single-factor continuous-time models of the South African short-term interest rate." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/5752.

Full text
Abstract:
Includes bibliographical references (leaves 33-36). This paper presents the results of Gaussian estimation of the South African short-term interest rate. It uses the same Gaussian estimation techniques employed by Nowman (1997) to estimate the South African short-term interest rate using South African Treasury bill data. A range of single-factor continuous-time models of the short-term interest rate are estimated using a discrete-time model and compared to a discrete approximation used by Chan, Karolyi, Longstaff and Sanders (1992a). We find that the process followed by the South African short-term interest rate is best explained by the Constant Elasticity of Variance (CEV) model and that the conditional volatility depends to some extent on the level of the interest rate. In addition we find evidence of a structural break in the mid-1980s, confirming our suspicions that the financial liberalisation of that period affected the short rate process.
APA, Harvard, Vancouver, ISO, and other styles
46

VanDerwerken, Douglas Nielsen. "Variable Selection and Parameter Estimation Using a Continuous and Differentiable Approximation to the L0 Penalty Function." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2486.

Full text
Abstract:
L0 penalized likelihood procedures like Mallows' Cp, AIC, and BIC directly penalize for the number of variables included in a regression model. This is a straightforward approach to the problem of overfitting, and these methods are now part of every statistician's repertoire. However, these procedures have been shown to sometimes result in unstable parameter estimates as a result of the L0 penalty's discontinuity at zero. One proposed alternative, seamless-L0 (SELO), utilizes a continuous penalty function that mimics L0 and allows for stable estimates. Like other similar methods (e.g. LASSO and SCAD), SELO produces sparse solutions because the penalty function is non-differentiable at the origin. Because these penalized likelihoods are singular (non-differentiable) at zero, there is no closed-form solution for the extremum of the objective function. We propose a continuous and everywhere-differentiable penalty function that can have arbitrarily steep slope in a neighborhood near zero, thus mimicking the L0 penalty, but allowing for a nearly closed-form solution for the beta-hat vector. Because our function is not singular at zero, beta-hat will have no zero-valued components, although some will have been shrunk arbitrarily close thereto. We employ a BIC-selected tuning parameter used in the shrinkage step to perform zero-thresholding as well. We call the resulting vector of coefficients the ShrinkSet estimator. It is comparable to SELO in terms of model performance (selecting the truly nonzero coefficients, overall MSE, etc.), but we believe it to be more intuitive and simpler to compute. We provide strong evidence that the estimator enjoys favorable asymptotic properties, including the oracle property.
APA, Harvard, Vancouver, ISO, and other styles
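To make the idea in the abstract above concrete: one generic everywhere-differentiable surrogate for the L0 penalty is p(b) = b²/(b² + τ), which tends to the indicator 1(b ≠ 0) as τ → 0 while remaining smooth through the origin. This particular function is chosen here for illustration only and is not necessarily the exact ShrinkSet penalty from the thesis:

```python
def smooth_l0(b, tau):
    # Continuous, everywhere-differentiable L0 surrogate:
    # -> ~1 for |b| >> sqrt(tau)  (counts the coefficient, like L0)
    # -> 0 at b = 0, with zero derivative (no singularity at the origin)
    return b * b / (b * b + tau)

tau = 1e-6
near_one = smooth_l0(0.5, tau)   # clearly nonzero coefficient: penalty ~= 1
at_zero = smooth_l0(0.0, tau)    # exactly zero coefficient: penalty = 0
```

Because the surrogate is differentiable everywhere, a penalized least-squares objective built from it admits ordinary gradient-based (nearly closed-form) minimisation, which is the computational advantage the abstract highlights over singular penalties like SELO, LASSO, or SCAD.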
47

Larsson, Erik. "Identification of stochastic continuous-time systems : algorithms, irregular sampling and Cramér-Rao bounds /." Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wood, Andrew Charles. "Methods for rainfall-runoff continuous simulation and flood frequency estimation on an ungauged river catchment with uncertainty." Thesis, Lancaster University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.547969.

Full text
Abstract:
Historic methods for time series predictions on ungauged sites in the UK have tended to focus on the regionalisation and regression of model parameters against catchment characteristics. Owing to wide variations in catchment characteristics and the (often) poor identification of model parameters, this has resulted in highly uncertain predictions on the ungauged site. However, very few studies have sought to assess uncertainties in the predicted hydrograph. Methods from the UK Flood Estimation Handbook, which are normally applied for an event design hydrograph, are adopted to choose a pooling group of gauged catchments hydrologically similar to an ungauged application site on the River Tyne. Model simulations are derived for each pooling group catchment with a BETA rainfall-runoff model structure conditioned for the catchment. The BETA rainfall-runoff model simulations are developed using a Monte Carlo approach. For the estimation of uncertainty, a modification of the GLUE methodology is applied. Gauging station errors are used to develop limits of acceptability for selecting behavioural model simulations, and the final uncertainty limits are obtained with a set of performance thresholds. Prediction limits are derived from a set of calibration and validation simulations for each catchment. Methods are investigated for the carry over of data from the pooled group of models to the ungauged site to develop a weighted model set prediction with pooled prediction limits. Further development of this methodology may offer some interesting approaches for cross-validation of models and further improvements in uncertainty estimation in hydrological regionalisation.
APA, Harvard, Vancouver, ISO, and other styles
49

Dehghan, Mohammad Hossein. "Nonparametric methods for the estimation of the conditional distribution of an interval-censored lifetime given continuous covariates." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29471/29471.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Dreißigacker, Christoph [Verfasser]. "Searches for continuous gravitational waves : sensitivity estimation and deep learning as a novel search method / Christoph Dreißigacker." Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2020. http://d-nb.info/1220422142/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
