Dissertations / Theses on the topic 'Dynamic linear models (DLMs)'

Consult the top 50 dissertations / theses for your research on the topic 'Dynamic linear models (DLMs).'

1

Tongur, Can. "Seasonal Adjustment and Dynamic Linear Models." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-89496.

Abstract:
Dynamic Linear Models are a state space model framework based on the Kalman filter. We use this framework to do seasonal adjustments of empirical and artificial data. A simple model and an extended model based on Gibbs sampling are used and the results are compared with the results of a standard seasonal adjustment method. The state space approach is then extended to discuss direct and indirect seasonal adjustments. This is achieved by applying a seasonal level model with no trend and some specific input variances that render different signal-to-noise ratios. This is illustrated for a system consisting of two artificial time series. Relative efficiencies between direct, indirect and multivariate, i.e. optimal, variances are then analyzed. In practice, standard seasonal adjustment packages do not support optimal/multivariate seasonal adjustments, so a univariate approach to simultaneous estimation is presented by specifying a Holt-Winters exponential smoothing method. This is applied to two sets of time series systems by defining a total loss function that is specified with a trade-off weight between the individual series’ loss functions and their aggregate loss function. The loss function is based on either the more conventional squared errors loss or on a robust Huber loss. The exponential decay parameters are then estimated by minimizing the total loss function for different trade-off weights. We then conclude which approach, direct or indirect seasonal adjustment, is preferable for the two time series systems. The dynamic linear modeling approach is also applied to Swedish political opinion polls to ascertain the true underlying political opinion when there are several polls, with potential design effects and bias, observed at non-equidistant time points. A Wiener process model is used to model the change in the proportion of voters supporting either a specific party or a party block.
Similar to stock market models, all available (political) information is assumed to be capitalized in the poll results and is incorporated in the model by assimilating opinion poll results with the model through Bayesian updating of the posterior distribution. Based on the results, we are able to assess the true underlying voter proportion and additionally predict the elections.
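The Kalman-filter recursion at the heart of such a DLM can be sketched for the simplest case, a local level model whose signal-to-noise ratio is set by the two input variances the abstract refers to. This is an illustrative sketch under our own assumptions (function name and simulated data are not from the thesis):

```python
import numpy as np

def local_level_filter(y, obs_var, state_var, m0=0.0, c0=1e6):
    """Kalman filter for the local level DLM:
       y_t  = mu_t + v_t,        v_t ~ N(0, obs_var)
       mu_t = mu_{t-1} + w_t,    w_t ~ N(0, state_var)."""
    m, c = m0, c0
    means = []
    for yt in y:
        r = c + state_var          # prior variance of mu_t
        q = r + obs_var            # one-step forecast variance
        k = r / q                  # Kalman gain
        m = m + k * (yt - m)       # filtered (posterior) mean
        c = k * obs_var            # filtered (posterior) variance
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.1, 200))     # slowly drifting true level
y = level + rng.normal(0, 1.0, 200)            # noisy observations
smoothed = local_level_filter(y, obs_var=1.0, state_var=0.01)
```

The ratio `state_var / obs_var` is the signal-to-noise ratio; lowering it makes the filtered series smoother, which is exactly the trade-off the seasonal adjustment discussion turns on.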

At the time of the doctoral defence the following papers were unpublished and had a status as follows: Paper 3: Manuscript; Paper 4: Manuscript

2

Randell, David. "Bayes linear variance learning for mixed linear temporal models." Thesis, Durham University, 2012. http://etheses.dur.ac.uk/3646/.

Abstract:
Modelling of complex corroding industrial systems is critical to effective inspection and maintenance for assurance of system integrity. Wall thickness and corrosion rate are modelled for multiple dependent corroding components, given observations of minimum wall thickness per component. At each inspection, partial observations of the system are considered. A Bayes Linear approach is adopted, simplifying parameter estimation and avoiding often unrealistic distributional assumptions. Key system variances are modelled, making exchangeability assumptions to facilitate analysis for sparse inspection time-series. A utility based criterion is used to assess quality of inspection design and aid decision making. The model is applied to inspection data from pipework networks on a full-scale offshore platform.
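The Bayes linear adjustment used in this setting needs only means, variances and covariances rather than full distributions. A minimal sketch of the update equations (the function name and the numbers are illustrative, not taken from the thesis):

```python
import numpy as np

def bayes_linear_adjust(e_x, e_d, var_x, var_d, cov_xd, d_obs):
    """Bayes linear adjusted expectation and variance of X given data D:
       E_D(X)   = E(X) + Cov(X,D) Var(D)^-1 (D - E(D))
       Var_D(X) = Var(X) - Cov(X,D) Var(D)^-1 Cov(D,X)."""
    gain = cov_xd @ np.linalg.inv(var_d)
    return e_x + gain @ (d_obs - e_d), var_x - gain @ cov_xd.T

# Prior beliefs about a wall thickness X and a related inspection reading D
e_x, var_x = np.array([10.0]), np.array([[4.0]])
e_d, var_d = np.array([10.0]), np.array([[5.0]])
cov_xd = np.array([[2.0]])
adj_e, adj_v = bayes_linear_adjust(e_x, e_d, var_x, var_d, cov_xd, np.array([12.0]))
# adj_e = [10.8], adj_v = [[3.2]]: the high reading pulls the expectation up,
# and the adjusted variance never exceeds the prior variance
```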
3

Frühwirth-Schnatter, Sylvia. "Data Augmentation and Dynamic Linear Models." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1992. http://epub.wu.ac.at/392/1/document.pdf.

Abstract:
We define a subclass of dynamic linear models with unknown hyperparameters called d-inverse-gamma models. We then approximate the marginal p.d.f.s of the hyperparameter and the state vector by the data augmentation algorithm of Tanner/Wong. We prove that the regularity conditions for convergence hold. A sampling based scheme for practical implementation is discussed. Finally, we illustrate how to obtain an iterative importance sampling estimate of the model likelihood. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
4

Frankel, Joe. "Linear dynamic models for automatic speech recognition." Thesis, University of Edinburgh, 2004. http://hdl.handle.net/1842/1087.

Abstract:
The majority of automatic speech recognition (ASR) systems rely on hidden Markov models (HMM), in which the output distribution associated with each state is modelled by a mixture of diagonal covariance Gaussians. Dynamic information is typically included by appending time-derivatives to feature vectors. This approach, whilst successful, makes the false assumption of framewise independence of the augmented feature vectors and ignores the spatial correlations in the parametrised speech signal. This dissertation seeks to address these shortcomings by exploring acoustic modelling for ASR with an application of a form of state-space model, the linear dynamic model (LDM). Rather than modelling individual frames of data, LDMs characterize entire segments of speech. An auto-regressive state evolution through a continuous space gives a Markovian model of the underlying dynamics, and spatial correlations between feature dimensions are absorbed into the structure of the observation process. LDMs have been applied to speech recognition before; however, a smoothed Gauss-Markov form was used which ignored the potential for subspace modelling. The continuous dynamical state means that information is passed along the length of each segment. Furthermore, if the state is allowed to be continuous across segment boundaries, long range dependencies are built into the system and the assumption of independence of successive segments is loosened. The state provides an explicit model of temporal correlation which sets this approach apart from frame-based and some segment-based models where the ordering of the data is unimportant. The benefits of such a model are examined both within and between segments. LDMs are well suited to modelling smoothly varying, continuous, yet noisy trajectories such as those found in measured articulatory data.
Using speaker-dependent data from the MOCHA corpus, the performance of systems which model acoustic, articulatory, and combined acoustic-articulatory features are compared. As well as measured articulatory parameters, experiments use the output of neural networks trained to perform an articulatory inversion mapping. The speaker-independent TIMIT corpus provides the basis for larger scale acoustic-only experiments. Classification tasks provide an ideal means to compare modelling choices without the confounding influence of recognition search errors, and are used to explore issues such as choice of state dimension, front-end acoustic parametrization and parameter initialization. Recognition for segment models is typically more computationally expensive than for frame-based models. Unlike frame-level models, it is not always possible to share likelihood calculations for observation sequences which occur within hypothesized segments that have different start and end times. Furthermore, the Viterbi criterion is not necessarily applicable at the frame level. This work introduces a novel approach to decoding for segment models in the form of a stack decoder with A* search. Such a scheme allows flexibility in the choice of acoustic and language models since the Viterbi criterion is not integral to the search, and hypothesis generation is independent of the particular language model. Furthermore, the time-asynchronous ordering of the search means that only likely paths are extended, and so a minimum number of models are evaluated. The decoder is used to give full recognition results for feature-sets derived from the MOCHA and TIMIT corpora. Conventional train/test divisions and choice of language model are used so that results can be directly compared to those in other studies. The decoder is also used to implement Viterbi training, in which model parameters are alternately updated and then used to re-align the training data.
5

Browne, Perry James. "The filtering of linear dynamic models with switching coefficients." Thesis, University of Sussex, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295975.

6

Veerapen, Parmaseeven Pillay. "Recurrence relationships and model monitoring for Dynamic Linear Models." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/109386/.

Abstract:
This thesis considers the incorporation and deletion of information in Dynamic Linear Models together with the detection of model changes and unusual values. General results are derived for the Normal Dynamic Linear Model which naturally also relate to second order modelling such as occurs with the Kalman Filter, linear least squares and linear Bayes estimation. The incorporation of new information, the assessment of its influence and the deletion of old or suspect information are important features of all sequential models. Many dynamic sequential models exhibit conditional independence properties. Important results concerning conditional independence in normal models are established which provide the framework and the tools necessary to develop neat procedures and to obtain appropriate recurrence relationships for data incorporation and deletion. These are demonstrated in the context of dynamic linear models, with particularly simple procedures for discount regression models. Appropriate model and forecast monitoring mechanisms are required to detect model changes and unusual values. Cumulative Sum (Cusum) techniques widely used in quality control and in model and forecast monitoring have been the source of inspiration in this context. Bearing in mind that a single sided Cusum may be regarded essentially as a sequence of sequential tests, such a Cusum is, in many cases, equivalent to a sequence of Sequential Probability Ratio Tests, as for example in the case of the Exponential Family. A relationship between Cusums and Bayesian decision making is established for a useful class of linear loss functions. It is found to apply to the Normal and other important practical cases. For V-mask Cusum graphs, a particularly interesting result which emerges is the interpretation of the distance of the V vertex from the latest plotted point as the prior precision in terms of a number of equivalent observations.
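A single sided Cusum of the kind discussed, viewed as a sequence of sequential tests, reduces to a few lines; the reference value k and decision limit h are the usual tuning constants (this toy implementation and its thresholds are our own, not the thesis's):

```python
def cusum_upper(xs, target, k, h):
    """One-sided upper Cusum: accumulate exceedances of target + k and
    signal a change when the cumulative sum crosses the decision limit h."""
    s, alarms = 0.0, []
    for i, x in enumerate(xs):
        s = max(0.0, s + (x - target - k))   # never drifts below zero
        if s > h:
            alarms.append(i)                 # index of the signal
            s = 0.0                          # restart after signalling
    return alarms

# A level shift of +3 after three in-control points triggers one alarm
alarms = cusum_upper([0, 0, 0, 3, 3, 3, 0], target=0.0, k=0.5, h=4.0)
# alarms == [4]
```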
7

Flury, Thomas. "Econometrics of dynamic non-linear models in macroeconomics and finance." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523095.

8

Miyandoab, Sara Alizadeh. "Three essays on non-linear effects in dynamic macroeconomic models." Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/39350.

Abstract:
This thesis aims to analyse non-linearity in dynamic models. Attention has focused on the class of dynamic models that accommodate the possibility of distributional modification. In chapter 1, I study the non-linear effects of policy shocks in the classical DSGE model. The analysis of such a model is subject to two types of shocks, technology and monetary policy. I extend the analysis of the classical model by allowing for the distributional modification of the monetary policy shock using the WSN distribution. This study reveals the extent to which the distribution of macroeconomic variables may respond to policy actions and the outcomes involved. Moreover, in the classical monetary model the long-run behaviour of the level of inflation with respect to inflation uncertainty has been investigated. I have also analysed the dynamic model of AR-GARCH time series. I investigate the possible non-linear and asymmetric effects of distributional assumptions on the behaviour of the QMLE of the parameters in the AR(1)-GARCH(1,1) model. A Monte Carlo experiment is set up to evaluate distributional misspecification in the aforementioned model by applying both symmetric and asymmetric WSN distributions across a range of mean and volatility persistence. The other contribution in chapter 2 is computing the quantiles under distributional misspecification in the AR-GARCH model. To assess the accuracy of the estimated quantiles, I have implemented the bootstrap technique. In addition, in chapter 3 attention is concentrated on procedures and suitable techniques for the analysis of unit root tests. The usefulness of the bootstrap technique is investigated in the context of unit root tests applied to stock indices and exchange rate series. I evaluate the popular unit root tests including the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests as well as DF-GLS. Furthermore, this chapter attempts to answer the question of how differences in the frequency of empirical data, say monthly, weekly, or daily, might affect the unit root results.
9

Dimopoulos, Konstantinos Panagiotis. "Non-linear control strategies using input-state network models." Thesis, University of Reading, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340027.

10

Karlon, Kathleen Mary. "Determining optimal architecture for dynamic linear models in time series applications /." Electronic version (PDF), 2006. http://dl.uncw.edu/etd/2006/karlonk/kathleenkarlon.pdf.

11

Corneli, Marco. "Dynamic stochastic block models, clustering and segmentation in dynamic graphs." Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E012/document.

Abstract:
This thesis focuses on the statistical analysis of dynamic graphs, defined in either discrete or continuous time. We introduce a new extension of the stochastic block model (SBM) for dynamic graphs. The proposed approach, called dSBM, adopts non-homogeneous Poisson processes to model the interaction times between pairs of nodes in dynamic graphs, in either discrete or continuous time. The intensity functions of the processes only depend on the node clusters, in a block modelling perspective. Moreover, all the intensity functions share some regularity properties on hidden time intervals that need to be estimated. A recent estimation algorithm for SBM, based on the greedy maximization of an exact criterion (exact ICL), is adopted for inference and model selection in dSBM. Moreover, an exact algorithm for change point detection in time series, the "pruned exact linear time" (PELT) method, is extended to deal with dynamic graph data modelled via dSBM. The approach we propose can be used for change point analysis in graph data. Finally, a further extension of dSBM is developed to analyse dynamic networks with textual edges (like social networks, for instance). In this context, the graph edges are associated with documents exchanged between the corresponding vertices. The textual content of the documents can provide additional information about the dynamic graph topological structure. The new model we propose is called the "dynamic stochastic topic block model" (dSTBM). Graphs are mathematical structures very suitable to model interactions between objects or actors of interest. Several real networks such as communication networks, financial transaction networks, mobile telephone networks and social networks (Facebook, Linkedin, etc.) can be modelled via graphs. When observing a network, the time variable comes into play in two different ways: we can study the time dates at which the interactions occur and/or the interaction time spans.
This thesis only focuses on the first time dimension and each interaction is assumed to be instantaneous, for simplicity. Hence, the network evolution is given by the interaction time dates only. In this framework, graphs can be used in two different ways to model networks. Discrete time […] Continuous time […]. In this thesis both these perspectives are adopted, alternatively. We consider new unsupervised methods to cluster the vertices of a graph into groups of homogeneous connection profiles. In this manuscript, the node groups are assumed to be time invariant to avoid possible identifiability issues. Moreover, the approaches that we propose aim to detect structural changes in the way the node clusters interact with each other. The building block of this thesis is the stochastic block model (SBM), a probabilistic approach initially used in social sciences. The standard SBM assumes that the nodes of a graph belong to hidden (disjoint) clusters and that the probability of observing an edge between two nodes only depends on their clusters. Since no further assumption is made on the connection probabilities, SBM is a very flexible model able to detect different network topologies (hubs, stars, communities, etc.)
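The standard SBM assumption stated in the last sentences, that the probability of an edge depends only on the nodes' clusters, is easy to make concrete in simulation (an illustrative toy, not the dSBM of the thesis, which replaces these Bernoulli draws with non-homogeneous Poisson processes):

```python
import numpy as np

def simulate_sbm(z, B, rng):
    """Adjacency matrix of an undirected SBM: an edge between i and j is
    drawn with probability B[z[i], z[j]], depending only on the clusters."""
    n = len(z)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = rng.random() < B[z[i], z[j]]
    return A

rng = np.random.default_rng(1)
z = np.repeat([0, 1], 30)                       # two hidden clusters
B = np.array([[0.40, 0.05],
              [0.05, 0.40]])                    # community-style connectivity
A = simulate_sbm(z, B, rng)
# Within-cluster blocks of A are visibly denser than between-cluster blocks
```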
12

Ranganathan, Shyam. "Non-linear dynamic modelling for panel data in the social sciences." Doctoral thesis, Uppsala universitet, Tillämpad matematik och statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-261289.

Abstract:
Non-linearities and dynamic interactions between state variables are characteristic of complex social systems and processes. In this thesis, we present a new methodology to model these non-linearities and interactions from the large panel datasets available for some of these systems. We build macro-level statistical models that can verify theoretical predictions, and use polynomial basis functions so that each term in the model represents a specific mechanism. This bridges the existing gap between macro-level theories supported by statistical models and micro-level mechanistic models supported by behavioural evidence. We apply this methodology to two important problems in the social sciences, the demographic transition and the transition to democracy. The demographic transition is an important problem for economists and development scientists. Research has shown that economic growth reduces mortality and fertility rates, which reduction in turn results in faster economic growth. We build a non-linear dynamic model and show how this data-driven model extends existing mechanistic models. We also show policy applications for our models, especially in setting development targets for the Millennium Development Goals or the Sustainable Development Goals. The transition to democracy is an important problem for political scientists and sociologists. Research has shown that economic growth and overall human development transform socio-cultural values and drive political institutions towards democracy. We model the interactions between the state variables and find that changes in institutional freedoms precede changes in socio-cultural values. We show applications of our models in studying development traps. This thesis comprises a comprehensive summary and seven papers. Papers I and II describe two similar but complementary methodologies to build non-linear dynamic models from panel datasets.
Papers III and IV deal with the demographic transition and policy applications. Papers V and VI describe the transition to democracy and applications. Paper VII describes an application to sustainable development.
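The polynomial-basis idea, each term standing for a mechanism, can be illustrated on a single simulated series: increments of a logistic-style growth process are regressed on a polynomial in the state, recovering a growth term and a saturation term. The data and coefficients below are synthetic, our own illustration rather than anything from the papers:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.empty(300)
x[0] = 0.1
for t in range(299):                               # logistic-style dynamics
    x[t + 1] = x[t] + 0.5 * x[t] * (1 - x[t]) + rng.normal(0, 0.005)

# Regress increments on a polynomial basis in the lagged state:
# dx_t = b0 + b1 * x_t + b2 * x_t^2 + noise
dx = np.diff(x)
X = np.column_stack([np.ones(299), x[:-1], x[:-1] ** 2])
coef, *_ = np.linalg.lstsq(X, dx, rcond=None)
# coef is close to [0, 0.5, -0.5]: the linear term captures growth,
# the quadratic term the saturation mechanism
```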
13

Schnatter, Sylvia. "Integration-based Kalman-filtering for a Dynamic Generalized Linear Trend Model." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/424/1/document.pdf.

Abstract:
The topic of the paper is filtering for non-Gaussian dynamic (state space) models by approximate computation of posterior moments using numerical integration. A Gauss-Hermite procedure is implemented based on the approximate posterior mode estimator and curvature recently proposed in [2]. This integration-based filtering method will be illustrated by a dynamic trend model for non-Gaussian time series. Comparison of the proposed method with other approximations ([15], [2]) is carried out by simulation experiments for time series from Poisson, exponential and Gamma distributions. (author's abstract)
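Gauss-Hermite computation of posterior moments can be sketched for the simplest ingredient of such a filter: a single Poisson observation with a Gaussian prior on the log rate. Nodes are placed under the prior and reweighted by the likelihood. The function name and example values are our own, and the paper's procedure additionally recentres the nodes at an approximate posterior mode:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def poisson_posterior_moments(y, m, s, n_nodes=40):
    """Posterior mean/variance of theta = log rate with prior N(m, s^2)
    and likelihood y ~ Poisson(exp(theta)), via Gauss-Hermite quadrature."""
    x, w = hermgauss(n_nodes)                  # nodes/weights for exp(-x^2)
    theta = m + np.sqrt(2.0) * s * x           # nodes placed under the prior
    loglik = y * theta - np.exp(theta)         # Poisson log-likelihood (no y! term)
    wt = w * np.exp(loglik - loglik.max())     # stabilised quadrature weights
    norm = wt.sum()
    mean = (wt * theta).sum() / norm
    var = (wt * theta ** 2).sum() / norm - mean ** 2
    return mean, var

mean, var = poisson_posterior_moments(y=7, m=0.0, s=1.0)
# The posterior mean lies between the prior mean 0 and log(7) ~ 1.95,
# and the posterior variance is smaller than the prior variance 1
```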
Series: Forschungsberichte / Institut für Statistik
14

Skelton, George. "Variation of Fenchel Nielsen coordinates." Thesis, University of Warwick, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247640.

15

Cheng, Gang. "Analyzing and Solving Non-Linear Stochastic Dynamic Models on Non-Periodic Discrete Time Domains." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1236.

Abstract:
Stochastic dynamic programming is a recursive method for solving sequential or multistage decision problems. It helps economists and mathematicians construct and solve a huge variety of sequential decision making problems in stochastic cases. Research on stochastic dynamic programming is important and meaningful because stochastic dynamic programming reflects the behavior of the decision maker without risk aversion; i.e., decision making under uncertainty. In the solution process, it is extremely difficult to represent the existing or future state precisely since uncertainty is a state of having limited knowledge. Indeed, compared to the deterministic case, which is decision making under certainty, the stochastic case is more realistic and gives more accurate results because the majority of problems in reality inevitably have many unknown parameters. In addition, time scale calculus theory is applicable to any field in which a dynamic process can be described with discrete or continuous models. Many stochastic dynamic models are discrete or continuous, so the results of time scale calculus are directly applicable to them as well. The aim of this thesis is to introduce a general form of a stochastic dynamic sequence problem on complex discrete time domains and to find the optimal sequence which maximizes the sequence problem.
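A finite-horizon stochastic dynamic program of the kind described is solved backwards from the terminal period. This discrete-time sketch uses a toy machine-repair problem of our own (the thesis works on general time scales, which this does not attempt to reproduce):

```python
import numpy as np

def backward_induction(T, n_states, n_actions, reward, trans, beta=0.95):
    """Finite-horizon stochastic DP solved by backward induction:
       V_t(s) = max_a [ r(s, a) + beta * sum_{s'} P(s'|s, a) V_{t+1}(s') ]."""
    V = np.zeros(n_states)                     # terminal value V_T = 0
    policy = []
    for t in range(T):
        Q = np.array([[reward(s, a) + beta * trans[s, a] @ V
                       for a in range(n_actions)] for s in range(n_states)])
        policy.append(Q.argmax(axis=1))        # optimal action per state
        V = Q.max(axis=1)
    return V, policy[::-1]                     # policy[0] is the first period

# Toy repair problem: state 0 = machine good, 1 = broken;
# action 0 = keep, 1 = repair (costs 0.6 and restores the good state)
trans = np.array([[[0.9, 0.1], [1.0, 0.0]],
                  [[0.0, 1.0], [1.0, 0.0]]])
reward = lambda s, a: (1.0 if s == 0 else 0.0) - 0.6 * a
V, policy = backward_induction(20, 2, 2, reward, trans)
# Early in the horizon it is optimal to repair a broken machine
```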
16

Srivastava, Nandini. "Essays on non-linear dynamic models with application to the FX and art markets." Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708275.

17

Yoon, Jongyun. "QUASI-LINEAR DYNAMIC MODELS OF HYDRAULIC ENGINE MOUNT WITH FOCUS ON INTERFACIAL FORCE ESTIMATION." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1281366966.

18

Wang, Qing, and 王卿. "Model reduction for dynamic systems with time delays: a linear matrix inequality approach." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38645439.

19

Frühwirth-Schnatter, Sylvia. "MCMC Estimation of Classical and Dynamic Switching and Mixture Models." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/698/1/document.pdf.

Abstract:
In the present paper we discuss Bayesian estimation of a very general model class where the distribution of the observations is assumed to depend on a latent mixture or switching variable taking values in a discrete state space. This model class covers e.g. finite mixture modelling, Markov switching autoregressive modelling and dynamic linear models with switching. Joint Bayesian estimation of all latent variables, model parameters and parameters determining the probability law of the switching variable is carried out by a new Markov Chain Monte Carlo method called permutation sampling. Estimation of switching and mixture models is known to be faced with identifiability problems as switching and mixture are identifiable only up to permutations of the indices of the states. For a Bayesian analysis the posterior has to be constrained in such a way that identifiability constraints are fulfilled. The permutation sampler is designed to sample efficiently from the constrained posterior, by first sampling from the unconstrained posterior - which often can be done in a convenient multimove manner - and then by applying a suitable permutation, if the identifiability constraint is violated. We present simple conditions on the prior which ensure that this method is a valid Markov Chain Monte Carlo method (that is, invariance, irreducibility and aperiodicity hold). Three case studies are presented, including finite mixture modelling of fetal lamb data, Markov switching autoregressive modelling of the U.S. quarterly real GDP data, and modelling the U.S./U.K. real exchange rate by a dynamic linear model with Markov switching heteroscedasticity. (author's abstract)
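The constrained permutation step can be sketched directly: after each unconstrained draw, the component labels are permuted so that an identifiability constraint such as mu_1 < mu_2 < ... holds. This is an illustrative fragment with made-up draws, not the paper's sampler:

```python
import numpy as np

def enforce_ordering(mu_draws, weight_draws):
    """Relabel each MCMC draw (one row per draw) so that the component
    means satisfy mu_1 < mu_2 < ..., permuting the weights accordingly."""
    order = np.argsort(mu_draws, axis=1)
    return (np.take_along_axis(mu_draws, order, axis=1),
            np.take_along_axis(weight_draws, order, axis=1))

# Two draws from a 2-component mixture; the first is label-switched
draws_mu = np.array([[2.0, -1.0], [-0.8, 2.1]])
draws_w = np.array([[0.3, 0.7], [0.6, 0.4]])
mu_s, w_s = enforce_ordering(draws_mu, draws_w)
# mu_s = [[-1.0, 2.0], [-0.8, 2.1]]; weights follow their components
```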
Series: Forschungsberichte / Institut für Statistik
20

Benites, Ventura Sihela. "Analysis and development of a software package for identifying parameter correlations in dynamic linear models." Master's thesis, Pontificia Universidad Católica del Perú, 2017. http://tesis.pucp.edu.pe/repositorio/handle/123456789/8928.

Abstract:
In the last two decades, the increasing appearance of new complex network systems, which include a large number of state variables and an even greater number of interconnections (represented in hundreds of parameters), has made modelling a demanding task, especially in the areas of pharmacology and bioengineering. Nowadays, the importance of identifiability (ID) is widely recognized, since parameters can turn out to be non-identifiable when it comes to experimental design. As biological models contain a considerably large number of parameters, it is difficult to estimate them properly. Building a dynamic biological model involves not only input and output quantification but also the structural components, considering the importance of information about the internal structure of a system and its components in biology [4]. After many years of development, complex dynamic systems can be modeled using Ordinary Differential Equations (ODEs), which are capable of suitably describing the behaviour of dynamical systems. Nevertheless, in the majority of cases the parameter values of a system are unknown; consequently, it is necessary to estimate them from experimental data. Biological models, commonly complex dynamic systems, include a large number of parameters and few measurable variables, hence their estimation represents a major challenge. An important step is to perform an identifiability analysis of the parameters before their estimation. The concept of structural or a priori identifiability concerns the question of whether a system is identifiable given a set of ideal conditions (noise-free and sufficient input-output data) before parameter estimation. 
Through the years, different approaches and their respective software applications to perform structural identifiability analysis have been developed; however, most do not provide suitable measures to repair the non-identifiability problem [11] [12]. In contrast, the method developed by Li and Vu [9] takes this issue into consideration by using parameter correlations as the indicator of the non-identifiability problem, and remedies it by defining proper initial conditions. For all these reasons, the main goal of this work is to implement the structural identifiability method proposed previously, which clarifies the identifiability analysis for linear dynamic models, gives relevant information about the conditions for a posterior experimental design, and offers a remedy if the model turns out to be non-identifiable. As the level of mathematical difficulty is not high, since the basic idea is the use of the output sensitivity matrix through Laplace transforms and manageable linear algebra, the implementation is efficient and simple, taking less than a minute to analyse identifiability in simple models, even when examining different scenarios (values of initial states, absence of input) at the same time, compared with carrying out the whole procedure by hand. As Maple is one of the best software packages for symbolic computation on the market today, it is the application of choice for working with models containing unknown parameters.
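The core idea, detecting parameter correlations through linearly dependent columns of the output sensitivity matrix, can be reproduced symbolically. The sketch below uses SymPy rather than Maple, on a one-state example of our own (x' = -a*x, y = b*x): b and x(0) enter the output only through their product, so their sensitivity columns are proportional for all t and the pair is correlated, hence non-identifiable.

```python
import sympy as sp

t, a, b, x0 = sp.symbols('t a b x0', positive=True)
y = b * x0 * sp.exp(-a * t)        # output of x' = -a*x, y = b*x, x(0) = x0

# Columns of the output sensitivity matrix dy/dp for p in (a, b, x0)
S = [sp.diff(y, p) for p in (a, b, x0)]

# The b and x0 columns differ only by a constant factor: their ratio
# simplifies to x0/b, independent of t, so the columns are dependent
ratio = sp.simplify(S[1] / S[2])
```

Defining a proper initial condition (e.g. fixing x0) removes the dependent column and restores identifiability of b, which is the kind of remedy the abstract describes.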
21

Dou, Yiping. "Dynamic Bayesian models for modelling environmental space-time fields." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/634.

Abstract:
This thesis addresses spatial interpolation and temporal prediction using air pollution data by several space-time modelling approaches. Firstly, we implement the dynamic linear modelling (DLM) approach in spatial interpolation and find various potential problems with that approach. We develop software to implement our approach. Secondly, we implement a Bayesian spatial prediction (BSP) approach to model spatio-temporal ground-level ozone fields and compare the accuracy of that approach with that of the DLM. Thirdly, we develop a Bayesian version of the empirical orthogonal function (EOF) method to incorporate the uncertainties due to the temporally varying spatial process, and the spatial variations at broad and fine scales. Finally, we extend the BSP into the DLM framework to develop a unified Bayesian spatio-temporal model for univariate and multivariate responses. The result generalizes a number of current approaches in this field.
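Classical EOFs, the starting point for the Bayesian EOF version mentioned here, are just the right singular vectors of the centred space-time data matrix. A toy field of our own making:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy space-time field: 100 time points at 12 sites, one dominant pattern
pattern = np.sin(np.linspace(0, np.pi, 12))
field = np.outer(rng.normal(0, 2, 100), pattern) + rng.normal(0, 0.1, (100, 12))

anom = field - field.mean(axis=0)             # centre each site in time
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt                                     # rows are spatial patterns (EOFs)
explained = s ** 2 / (s ** 2).sum()           # variance fraction per EOF
# The leading EOF recovers the planted sinusoidal spatial pattern
```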
APA, Harvard, Vancouver, ISO, and other styles
22

Kyprianou, Andreas. "Non-linear parameter estimation of dynamic models using differential evolution : application to hysteretic systems and hydraulic engine mounts." Thesis, University of Sheffield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Rivers, Derick Lorenzo. "Dynamic Bayesian Approaches to the Statistical Calibration Problem." VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3599.

Full text
Abstract:
The problem of statistically calibrating a measuring instrument can be framed both in a statistical context and in an engineering context. In the first, the problem is dealt with by distinguishing between the "classical" approach and the "inverse" regression approach. Both of these models are static and are used to estimate "exact" measurements from measurements affected by error. In the engineering context, the variables of interest are considered at the time at which the measurement is observed. The Bayesian time series method of Dynamic Linear Models (DLM) can be used to monitor the evolution of the measurements, thus introducing a dynamic approach to statistical calibration. The research presented employs Bayesian methodology to perform statistical calibration. The DLM framework is used to capture parameters that may be changing or drifting over time. Dynamic approaches to the linear, nonlinear, and multivariate calibration problems are presented in this dissertation. Simulation studies are conducted in which the dynamic models are compared with some well-known "static" calibration approaches in the literature, from both the frequentist and Bayesian perspectives. Applications to microwave radiometry are given.
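The dynamic-calibration idea of tracking a drifting quantity with a DLM can be illustrated with a minimal local-level model filtered by the Kalman recursions. This is a generic sketch, not the dissertation's actual model: the variances, prior, and simulated drift are all assumptions made for the example.

```python
import numpy as np

def kalman_local_level(y, sigma_obs2=1.0, sigma_state2=0.1, m0=0.0, C0=10.0):
    """Filter a local-level DLM: theta_t = theta_{t-1} + w_t, y_t = theta_t + v_t."""
    m, C = m0, C0
    means = []
    for yt in y:
        # Predict: the state carries over, with evolution variance added.
        R = C + sigma_state2
        # Update: weight the new observation by the adaptive Kalman gain.
        Q = R + sigma_obs2
        A = R / Q
        m = m + A * (yt - m)
        C = R - A * R
        means.append(m)
    return np.array(means)

# A slowly drifting "true" calibration level observed with noise.
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.1, 200)) + 5.0
obs = truth + rng.normal(0.0, 1.0, 200)
est = kalman_local_level(obs)
```

The filtered means `est` follow the drifting level while smoothing the observation noise, which is the mechanism the abstract invokes for parameters "changing or drifting over time".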
APA, Harvard, Vancouver, ISO, and other styles
24

Monfardini, Frederico [UNESP]. "Modelos lineares generalizados bayesianos para dados longitudinais." Universidade Estadual Paulista (UNESP), 2016. http://hdl.handle.net/11449/138326.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Generalized Linear Models (GLM) were introduced in the early 70s and had a great impact on the development of statistical theory. From a theoretical point of view, this class of models represents a unified approach to many statistical models commonly used in applications, allowing the same inference procedures to be used. With the computational advances of subsequent decades came a remarkable development of extensions of this class of models and of methods for inference. In the Bayesian context, approximate inference methods were used until the 80s, such as the Laplace approximation and Gaussian quadrature. Markov chain Monte Carlo (MCMC) methods were popularized in the early 90s and revolutionized applications in the Bayesian context. Although they are highly efficient, the convergence of the algorithm in complex models can be extremely slow, which incurs high computational cost. The Integrated Nested Laplace Approximation (INLA) method, which seeks efficiency in both computational cost and accuracy of the estimates, appeared in 2009. Given the importance of this class of models, this work proposes to explore extensions of GLMs for longitudinal data and recent proposals in the literature for inference procedures. More specifically, it explores models for binary (binomial) data and count (Poisson) data, considering the presence of extra variability, including overdispersion and random effects, through hierarchical models and hierarchical dynamic models. It also explores different Bayesian inference procedures in this context, including MCMC and INLA.
APA, Harvard, Vancouver, ISO, and other styles
25

Toriello, Alejandro. "Time decomposition of multi-period supply chain models." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/42704.

Full text
Abstract:
Many supply chain problems involve discrete decisions in a dynamic environment. The inventory routing problem is an example that combines the dynamic control of inventory at various facilities in a supply chain with the discrete routing decisions of a fleet of vehicles that moves product between the facilities. We study these problems modeled as mixed-integer programs and propose a time decomposition based on approximate inventory valuation. We generate the approximate value function with an algorithm that combines data fitting, discrete optimization and dynamic programming methodology. Our framework allows the user to specify a class of piecewise linear, concave functions from which the algorithm chooses the value function. The use of piecewise linear concave functions is motivated by intuition, theory and practice. Intuitively, concavity reflects the notion that inventory is marginally more valuable the closer one is to a stock-out. Theoretically, piecewise linear concave functions have certain structural properties that also hold for finite mixed-integer program value functions. (Whether the same properties hold in the infinite case is an open question, to our knowledge.) Practically, piecewise linear concave functions are easily embedded in the objective function of a maximization mixed-integer or linear program, with only a few additional auxiliary continuous variables. We evaluate the solutions generated by our value functions in a case study using maritime inventory routing instances inspired by the petrochemical industry. The thesis also includes two other contributions. First, we review various data fitting optimization models related to piecewise linear concave functions, and introduce new mixed-integer programming formulations for some cases. The formulations may be of independent interest, with applications in engineering, mixed-integer non-linear programming, and other areas. 
Second, we study a discounted, infinite-horizon version of the canonical single-item lot-sizing problem and characterize its value function, proving that it inherits all properties of interest from its finite counterpart. We then compare its optimal policies to our algorithm's solutions as a proof of concept.
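The embedding of a piecewise linear concave objective in a maximization LP that the abstract describes can be sketched in a few lines: represent f(x) = min_i(a_i x + b_i) by one auxiliary variable z constrained below every piece, then maximize z. The slopes, intercepts, and bounds here are invented for illustration, and `scipy.optimize.linprog` merely stands in for whatever solver the thesis uses.

```python
from scipy.optimize import linprog

# Concave piecewise linear "inventory value": f(x) = min_i(a_i*x + b_i).
# Illustrative pieces: steep marginal value near stock-out, flatter when full.
pieces = [(3.0, 0.0), (1.0, 4.0), (0.2, 8.0)]  # (slope a_i, intercept b_i)

# Maximize z subject to z <= a_i*x + b_i for every piece and 0 <= x <= 10.
# linprog minimizes, so minimize -z over the variable vector (x, z).
c = [0.0, -1.0]
A_ub = [[-a, 1.0] for a, b in pieces]   # rewritten as z - a_i*x <= b_i
b_ub = [b for a, b in pieces]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10), (None, None)])
x_opt, z_opt = res.x
```

Because the pieces all have positive slope, f is increasing on [0, 10] and the optimum sits at x = 10 with value min(30, 14, 10) = 10; only the single continuous variable z was added, matching the abstract's point that such functions embed cheaply in a maximization LP or MIP.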
APA, Harvard, Vancouver, ISO, and other styles
26

Fox, David. "Dynamic demand modelling and pricing decision support systems for petroleum." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/dynamic-demand-modelling-and-pricing-decision-support-systems-for-petroleum(2ce6efed-a7eb-4d10-b325-4d4590ba57ad).html.

Full text
Abstract:
Pricing decision support systems have been developed to help retail companies optimise the prices they set when selling their goods and services. This research aims to enhance the essential forecasting and optimisation techniques that underlie these systems. This is first done by applying the method of Dynamic Linear Models in order to provide sales forecasts of higher accuracy than current methods. Secondly, the method of Support Vector Regression is used to forecast future competitor prices. This new technique aims to produce forecasts of greater accuracy than the assumption currently used in pricing decision support systems that each competitor's price will simply remain unchanged. Thirdly, when competitor prices are not forecast, a new pricing optimisation technique is presented which provides the highest guaranteed profit. Existing pricing decision support systems optimise price assuming that competitor prices will remain unchanged, but this optimisation cannot be trusted since competitor prices are never actually forecast. Finally, when competitor prices are forecast, an exhaustive search of a game tree is presented as a new way to optimise a retailer's price. This optimisation incorporates future competitor price moves, something which is vital when analysing the success of a pricing strategy but absent from current pricing decision support systems. Each approach is applied to the forecasting and optimisation of daily retail vehicle fuel pricing using real commercial data, showing the improved results in each case.
APA, Harvard, Vancouver, ISO, and other styles
27

Almannaa, Mohammed Hamad. "Optimizing Bike Sharing Systems: Dynamic Prediction Using Machine Learning and Statistical Techniques and Rebalancing." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/100737.

Full text
Abstract:
The large increase in on-road vehicles over the years has resulted in cities facing challenges in providing high-quality transportation services. Traffic jams are a clear sign that cities are overwhelmed, and that current transportation networks and systems cannot accommodate the current demand without a change in policy, infrastructure, transportation modes, and commuter mode choice. In response to this problem, cities in a number of countries have started putting a threshold on the number of vehicles on the road by deploying a partial or complete ban on cars in the city center. For example, in Oslo, leaders have decided to completely ban privately-owned cars from its center by the end of 2019, making it the first European city to totally ban cars in the city center. Instead, public transit and cycling will be supported and encouraged in the banned-car zone, and hundreds of parking spaces in the city will be replaced by bike lanes. As a government effort to support bicycling and offer alternative transportation modes, bike-sharing systems (BSSs) have been introduced in over 50 countries. BSSs aim to encourage people to travel via bike by distributing bicycles at stations located across an area of service. Residents and visitors can borrow a bike from any station and then return it to any station near their destination. Bicycles are considered an affordable, easy-to-use, and healthy transportation mode, and BSSs show significant transportation, environmental, and health benefits. As the use of BSSs has grown, imbalances in the system have become an issue and an obstacle for further growth. Imbalance occurs when bikers cannot drop off or pick up a bike because the bike station is either full or empty. This problem has been investigated extensively by many researchers and policy makers, and several solutions have been proposed. There are three major ways to address the rebalancing issue: static, dynamic and incentivized.
The incentivized approaches make use of the users in the balancing efforts: the operating company incentivizes them to change their destination in favor of keeping the system balanced. The other two approaches, static and dynamic, deal with the movement of bikes between stations either during or at the end of the day to overcome station imbalances. They both assume the location and number of bike stations are fixed and only the bikes can be moved. This is a realistic assumption given that current BSSs have only fixed stations. However, cities are dynamic, and their geographical and economic growth affects the distribution of trips and thus constantly changes BSS user behavior. In addition, work-related bike trips cause certain stations to face a high demand level during weekdays, while these same stations are at a low demand level on weekends, and thus may be of little use. Moreover, fixed stations fail to accommodate big events such as football games, holidays, or sudden weather changes. This dissertation proposes a new generation of BSSs in which we assume some of the bike stations can be portable. This approach takes advantage of both types of BSSs: dock-based and dock-less. Towards this goal, a BSS optimization framework was developed at both the tactical and operational levels. Specifically, the framework consists of two levels: predicting bike counts at stations using fast, online, and incremental learning approaches and then balancing the system using portable stations. The goal is to propose a framework to solve the dynamic bike sharing repositioning problem, aiming at minimizing the unmet demand, leading to increased user satisfaction and reducing repositioning/rebalancing operations. This dissertation contributes to the field in five ways. First, a multi-objective supervised clustering algorithm was developed to identify the similarity of bike usage with respect to time events.
Second, a dynamic, easy-to-interpret, rapid approach to predict bike counts at stations in a BSS was developed. Third, a univariate inventory model using a Markov chain process that provides an optimal range of bike levels at stations was created. Fourth, an investigation of the advantages of portable bike stations, using an agent-based simulation approach as a proof-of-concept was developed. Fifth, mathematical and heuristic approaches were proposed to balance bike stations.
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
28

Song, Song. "Confidence bands in quantile regression and generalized dynamic semiparametric factor models." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2010. http://dx.doi.org/10.18452/16341.

Full text
Abstract:
In many applications it is necessary to know the stochastic fluctuation of the maximal deviations of nonparametric quantile estimates, e.g. for checking various parametric models. Uniform confidence bands are therefore constructed for nonparametric quantile estimates of regression functions. The first method is based on strong approximations of the empirical process and extreme value theory. The strong uniform consistency rate is also established under general conditions. The second method is based on the bootstrap resampling method. It is proved that the bootstrap approximation provides a substantial improvement. The case of multidimensional and discrete regressor variables is dealt with using a partial linear model. A labor market analysis is provided to illustrate the method. High dimensional time series which reveal nonstationary and possibly periodic behavior occur frequently in many fields of science, e.g. macroeconomics, meteorology, medicine and financial engineering. One common approach is to separate the modeling of a high dimensional time series into the time propagation of low dimensional time series and high dimensional time-invariant functions via dynamic factor analysis. We propose a two-step estimation procedure. In the first step, we detrend the time series by incorporating a time basis selected by a group Lasso-type technique and choose the space basis based on smoothed functional principal component analysis. We show properties of this estimator under the dependent scenario. In the second step, we obtain the detrended low dimensional stochastic process (stationary).
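The bootstrap route to a uniform band can be illustrated with a paired-bootstrap sketch around a Nadaraya-Watson regression estimate: resample the data, track the sup-deviation of the resampled fit from the original fit, and widen the band by the 95% quantile of those sups. This is a generic construction under assumed kernel, bandwidth, and simulated data, not the thesis's procedure (which develops theory via strong approximations and a partial linear model).

```python
import numpy as np

def nw_estimate(x_grid, x, y, h=0.1):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, n)
grid = np.linspace(0.05, 0.95, 50)

m_hat = nw_estimate(grid, x, y)
# Paired bootstrap: resample (x, y) pairs, record sup-deviation from m_hat.
sups = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m_star = nw_estimate(grid, x[idx], y[idx])
    sups.append(np.max(np.abs(m_star - m_hat)))
width = np.quantile(sups, 0.95)
lower, upper = m_hat - width, m_hat + width
```

Using the sup over the whole grid (rather than a pointwise quantile at each grid point) is what makes the resulting band uniform in spirit; smoothing bias is ignored here, one of the issues the thesis's theory handles properly.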
APA, Harvard, Vancouver, ISO, and other styles
29

Júnior, José Maria Pires de Menezes. "Redes neurais dinâmicas para predição e modelagem não-linear de séries temporais." Universidade Federal do Ceará, 2006. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=2090.

Full text
Abstract:
In this work, dynamic neural networks are evaluated as efficient nonlinear models for the prediction of complex time series. The architectures evaluated include the FTDNN, Elman and NARX networks. The predictive ability of these networks is tested on one-step-ahead and multiple-steps-ahead prediction tasks. For this purpose, the following time series are used: the chaotic laser series, the chaotic Mackey-Glass series, and computer network traffic series with self-similar characteristics. The use of the NARX network for time series prediction is a contribution of this dissertation. This network has a recurrent neural architecture originally used for input-output identification of nonlinear systems. The input of the NARX network is formed by two sliding time windows, one sliding over the input signal and the other over the output signal. When applied to chaotic time series prediction, the NARX network is usually designed as a nonlinear autoregressive (NAR) model, eliminating the output delay window. In this work, a simple yet efficient strategy is proposed to allow the NARX network to fully exploit the input and output time windows in order to improve its predictive ability. The results obtained show that the proposed approach outperforms predictors based on the FTDNN and Elman networks.
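The two-sliding-window regressor at the heart of the NARX idea can be sketched as follows. A plain least-squares readout stands in for the actual neural network, and the toy system, the window lengths `du`/`dy`, and the function names are illustrative assumptions, not the dissertation's setup.

```python
import numpy as np

def build_narx_regressors(u, y, du=3, dy=3):
    """Stack the two windows [y_{t-1..t-dy}, u_{t-1..t-du}] as the regressor for y_t."""
    T = len(y)
    start = max(du, dy)
    X, target = [], []
    for t in range(start, T):
        X.append(np.concatenate([y[t - dy:t][::-1], u[t - du:t][::-1]]))
        target.append(y[t])
    return np.array(X), np.array(target)

# Toy nonlinear system driven by a known input signal.
rng = np.random.default_rng(2)
u = rng.uniform(-1.0, 1.0, 500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + 0.4 * u[t - 1] + 0.1 * np.tanh(y[t - 1] * u[t - 1])

X, target = build_narx_regressors(u, y)
# Linear least-squares stand-in for the network's learned mapping.
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ coef
```

Dropping the `y` window (setting `dy=0`) would give the NAR configuration the abstract mentions; keeping both windows is the strategy the dissertation argues improves predictive ability.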
APA, Harvard, Vancouver, ISO, and other styles
30

González, Barrameda José Andrés. "Novel Application Models and Efficient Algorithms for Offloading to Clouds." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36469.

Full text
Abstract:
The application offloading problem for Mobile Cloud Computing aims at improving the mobile user experience by leveraging the resources of the cloud. The execution of the mobile application is offloaded to the cloud, saving energy at the mobile device or speeding up the execution of the application. We improve the accuracy and performance of application offloading solutions in three main directions. First, we propose a novel fine-grained application model that supports complex module dependencies such as sequential, conditional and parallel module executions. The model also allows for multiple offloading decisions that are tailored towards the current application, network, or user context. As a result, the model is more precise in capturing the structure of the application and supports more complex offloading solutions. Second, we propose three cost models, namely average-based, statistics-based and interval-based, defined for the proposed application model. The average-based approach models each module cost by its expected value, and the expected cost of the entire application is estimated considering each of the three module dependencies. The novel statistics-based cost model employs Cumulative Distribution Functions (CDFs) to represent the costs of the modules and of the mobile application, which are estimated considering the costs and dependencies of the modules. This cost model opens the door to new statistics-based optimization functions and constraints, whereas the state of the art only supports optimizations based on the average running cost of the application. Furthermore, this cost model can be used to perform statistical analysis of the performance of the application in different scenarios, such as varying network data rates.
The last cost model, the interval-based one, represents the module costs as intervals in order to address cost uncertainty while having lower requirements and computational complexity than the statistics-based model. The cost of the application is estimated as an expected maximum cost via a linear optimization function. Finally, we present offloading decision algorithms for each cost model. For the average-based model, we present a fast optimal dynamic programming algorithm. For the statistics-based model, we present another fast optimal dynamic programming algorithm for the scenario where the optimization function meets specific properties. Finally, for the interval-based cost model, we present a robust formulation that solves a linear number of linear optimization problems. Our evaluations verify the accuracy of the models and show higher cost savings for our solutions when compared to the state of the art.
APA, Harvard, Vancouver, ISO, and other styles
31

Lülf, Fritz Adrian. "An integrated method for the transient solution of reduced order models of geometrically nonlinear structural dynamic systems." Phd thesis, Conservatoire national des arts et metiers - CNAM, 2013. http://tel.archives-ouvertes.fr/tel-00957455.

Full text
Abstract:
For repeated transient solutions of geometrically nonlinear structures, the numerical effort often poses a major obstacle. Thus, the introduction of a reduced order model, which takes the nonlinear effects into account and accelerates the calculations considerably, is often necessary. This work yields a method that allows for rapid, accurate and parameterisable solutions by means of a reduced model of the original structure. The structure is discretised and its dynamic equilibrium described by a matrix equation. A projection on a reduced basis is introduced to obtain the reduced model. A comprehensive numerical study of several common reduced bases shows that simply introducing a constant basis is not sufficient to account for the nonlinear behaviour. Three requirements for a rapid, accurate and parameterisable solution are derived: the solution algorithm has to take into account the nonlinear evolution of the solution, the solution has to be independent of the nonlinear finite element terms, and the basis has to be adapted to external parameters. Three approaches are provided, each responding to one requirement, and these approaches are assembled into the integrated method. The approaches are the update and augmentation of the basis, the polynomial formulation of the nonlinear terms, and the interpolation of the basis. A Newmark-type time-marching algorithm provides the frame of the integrated method. The application of the integrated method to test cases with geometrically nonlinear finite elements confirms that it achieves the initial aim of a rapid, accurate and parameterisable transient solution.
APA, Harvard, Vancouver, ISO, and other styles
32

Saavedra, Cayan Atreio Portela Bárcena. "Um aplicativo shiny para modelos lineares generalizados." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-22012019-174209/.

Full text
Abstract:
Recent technological and computational advances have brought alternatives that have led to changes in the way data analyses and visualizations are done. One of these changes is the use of interactive platforms and dynamic graphics to carry out such analyses. Data analyses and visualizations are no longer limited to a static environment, so exploring this interactivity can enable a wider range of data exploration and presentation. The present work proposes an interactive application, easy to use and with a user-friendly interface, which enables exploratory study, descriptive analysis, and the fitting of generalized linear models. The application is built using the shiny package in the R environment for statistical computing, and its purpose is to act as a support tool for statistical research and teaching. Users with no familiarity with programming can explore the data and fit generalized linear models without typing a single line of code. Regarding teaching, the dynamics and interactivity of the application give the student an uncomplicated way to investigate the methods involved, making it easier to assimilate concepts related to the subject.
APA, Harvard, Vancouver, ISO, and other styles
33

Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.

Full text
Abstract:
Network reconstruction is the process of recovering a unique structured representation of some dynamic system using input-output data and some additional knowledge about the structure of the system. Many network reconstruction algorithms have been proposed in recent years, most dealing with the reconstruction of strictly proper networks (i.e., networks that require delays in all dynamics between measured variables). However, no reconstruction technique presently exists capable of recovering both the structure and dynamics of networks where links are proper (delays in dynamics are not required) and not necessarily strictly proper. The ultimate objective of this dissertation is to develop algorithms capable of reconstructing proper networks, and this objective is addressed in three parts. The first part lays the foundation for the theory of mathematical representations of proper networks, including an exposition on when such networks are well-posed (i.e., physically realizable). The second part studies abstractions of a network, which are other networks that preserve certain properties of the original network but contain less structural information. As such, abstractions require less a priori information to reconstruct from data than the original network, which allows previously unsolvable problems to become solvable. The third part addresses the original objective and presents reconstruction algorithms to recover proper networks in both the time domain and the frequency domain.
APA, Harvard, Vancouver, ISO, and other styles
34

Miquelluti, Daniel Lima. "Métodos alternativos de previsão de safras agrícolas." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-06042015-153838/.

Full text
Abstract:
The agricultural sector is, historically, one of the pillars of the Brazilian economy, and despite having its importance diminished by the development of the industry and services sectors, it is still responsible for giving dynamism to the country's inland economy, ensuring food security, helping to control inflation and assisting in the formation of monetary reserves. In this context, agricultural crops exercise great influence on the behaviour of the sector and the balance of the agricultural market. Diverse crop forecast methods have been developed, most of them growth simulation models; recently, however, statistical models have been used because of their ability to forecast earlier than the other models. In the present thesis two of these methodologies were evaluated, ARIMA models and Dynamic Linear Models (DLMs), using both classical and Bayesian inference. Forecast accuracy, ease of implementation and the computational power required were among the characteristics used to assess model efficiency. The methodologies were applied to soy production data from Mamborê-PR over the 1980-2013 period, with planted area (ha) and cumulative precipitation (mm) as auxiliary variables in the dynamic regression. The ARIMA(2,1,0) model reparametrized in DLM form and fitted through maximum likelihood generated the best forecasts, followed by the ARIMA(2,1,0) without reparametrization.
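The reparametrization described in the abstract, an ARIMA(2,1,0) cast in dynamic linear model (state-space) form and run through the Kalman recursions, can be sketched as follows. This is a minimal illustration on simulated data with assumed AR coefficients, not the thesis's soy series or fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an ARIMA(2,1,0) series: AR(2) on first differences.
phi1, phi2, sigma = 0.5, -0.3, 1.0
n = 200
z = np.zeros(n)
for t in range(2, n):
    z[t] = phi1 * z[t - 1] + phi2 * z[t - 2] + sigma * rng.standard_normal()
y = 100 + np.cumsum(z)            # integrate once to get the level

# DLM form: state x_t = (z_t, z_{t-1}); the differenced series is observed.
F = np.array([[phi1, phi2], [1.0, 0.0]])     # state transition
H = np.array([[1.0, 0.0]])                   # observation matrix
Q = np.array([[sigma**2, 0.0], [0.0, 0.0]])  # state noise
R = 1e-8                                     # differences observed (almost) exactly

x = np.zeros(2)
P = np.eye(2)
one_step = []                                # one-step-ahead forecasts of z_t
for obs in np.diff(y):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    one_step.append(x[0])
    # Update
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    x = x + (K * (obs - H @ x)).ravel()
    P = P - K @ H @ P

rmse = np.sqrt(np.mean((np.array(one_step) - np.diff(y)) ** 2))
print(f"one-step RMSE on differences: {rmse:.3f}")  # close to sigma
```

With the true parameters known, the one-step forecast error is just the AR innovation, so the RMSE settles near sigma; in practice the coefficients would be estimated by maximum likelihood as in the thesis.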
APA, Harvard, Vancouver, ISO, and other styles
35

Heaton, Matthew J. "Temporally Correlated Dirichlet Processes in Pollution Receptor Modeling." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1861.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Nunes, Willian Ricardo Bispo Murbak. "A new dynamic model applied to electrically stimulated lower limbs and switched control design subject to actuator saturation and non-ideal conditions /." Ilha Solteira, 2019. http://hdl.handle.net/11449/183168.

Full text
Abstract:
Advisor: Aparecido Augusto de Carvalho
Abstract: Electrical stimulation is a promising technique for motor rehabilitation in cases of spinal cord injury. Stimulator saturation is an important consideration in control system designs applied to electrical stimulation: neglecting actuator saturation can lead to unwanted control results, which accentuates muscular fatigue effects. For the first time, a switched controller subject to actuator saturation is proposed for an electrically stimulated lower limb. The dynamic limb extension model is nonlinear and uncertain. The uncertain nonlinear system, described by Takagi-Sugeno fuzzy models operating within an operating region in the state space, is considered in this study. In addition, actuator fault, muscle activation uncertainty, and muscular non-idealities such as fatigue, spasms, and tremor were considered at three severity levels. The switched controller is compared to the parallel distributed compensation technique. Simulations show that the switched controller deals better with parametric uncertainties. On the other hand, a challenge for FES control systems is to monitor torque during muscle contractions; in isotonic contraction applications, measuring torque is difficult. The novelty of this study is the proposal of a new nonlinear model whose state variables are angular position, angular velocity and angular acceleration. In this new model the torque variable is replaced by the angular acceleration. Experimental tests list the ... (Complete abstract click electronic access below)
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
37

Pommellet, Adrien. "On model-checking pushdown systems models." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC207/document.

Full text
Abstract:
In this thesis, we propose different model-checking techniques for pushdown system models. Pushdown systems (PDSs) are known to be a natural model for sequential programs, as they feature an unbounded stack that can simulate the assembly stack of an actual program. Our first contribution is model-checking against PDSs the logic HyperLTL, which adds existential and universal quantifiers on path variables to LTL. The model-checking problem of HyperLTL has been shown to be decidable for finite-state systems. We prove that this result holds neither for pushdown systems nor for the subclass of visibly pushdown systems. Therefore, we introduce approximation algorithms for the model-checking problem, and show how they can be used to check security policies. In the second part of this thesis, since pushdown systems can fail to accurately represent the way an assembly stack actually operates, we introduce pushdown systems with an upper stack (UPDSs), a model in which symbols popped from the stack are not destroyed but instead remain just above its top, and may be overwritten by later push rules. We prove that the sets of successors post* and predecessors pre* of a regular set of configurations of such a system are not always regular, but that post* is context-sensitive; hence, we can decide whether a single configuration is forward reachable. We then present methods to over-approximate post* and under-approximate pre*, and show how these approximations can be used to detect stack overflows and stack pointer manipulations with malicious intent. Finally, in order to analyse multi-threaded programs, we introduce a model called synchronized dynamic pushdown networks (SDPNs) that can be seen as a network of pushdown processes executing synchronized transitions, spawning new pushdown processes, and performing internal pushdown actions. The reachability problem for this model is undecidable.
Therefore, we compute an abstraction of the execution paths between two regular sets of configurations. We then apply this abstraction framework to an iterative abstraction refinement scheme.
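Since exact reachability in these richer models is undecidable and must be approximated, a toy under-approximation for a single pushdown system can illustrate the configuration-space search: bound the stack depth and explore by breadth-first search. This is an illustrative sketch only, with assumed example rules; the thesis's automata-based post*/pre* constructions are not reproduced here.

```python
from collections import deque

# A pushdown system: rules map (state, top_symbol) -> set of (state', pushed_word).
# pushed_word replaces the top symbol ('' = pop, 'A' = swap, 'AA' = push).
rules = {
    ("p", "A"): {("p", "AA"),   # push another A
                 ("q", "")},    # pop and switch to state q
    ("q", "A"): {("q", "")},    # keep popping in q
}

def reachable(start, depth_bound=6):
    """Under-approximate reachability: BFS over configurations whose
    stack never exceeds depth_bound symbols."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        state, stack = frontier.popleft()
        if not stack:
            continue
        for nstate, word in rules.get((state, stack[0]), ()):
            nstack = word + stack[1:]
            if len(nstack) <= depth_bound and (nstate, nstack) not in seen:
                seen.add((nstate, nstack))
                frontier.append((nstate, nstack))
    return seen

confs = reachable(("p", "A"))
print(("q", "") in confs)   # the empty-stack configuration in q is reachable
```

Bounding the stack makes the search finite but misses configurations that require deeper stacks, which is exactly why the literature works with regular sets of configurations and saturation procedures instead.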
APA, Harvard, Vancouver, ISO, and other styles
38

Frühwirth-Schnatter, Sylvia. "Applied State Space Modelling of Non-Gaussian Time Series using Integration-based Kalman-filtering." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1993. http://epub.wu.ac.at/1558/1/document.pdf.

Full text
Abstract:
The main topic of the paper is on-line filtering for non-Gaussian dynamic (state space) models by approximate computation of the first two posterior moments using efficient numerical integration. Based on approximating the prior of the state vector by a normal density, we prove that the posterior moments of the state vector are related to the posterior moments of the linear predictor in a simple way. For the linear predictor Gauss-Hermite integration is carried out with automatic reparametrization based on an approximate posterior mode filter. We illustrate how further topics in applied state space modelling such as estimating hyperparameters, computing model likelihoods and predictive residuals, are managed by integration-based Kalman-filtering. The methodology derived in the paper is applied to on-line monitoring of ecological time series and filtering for small count data. (author's abstract)
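The core quadrature step, approximating the first two posterior moments of the linear predictor under a normal prior by Gauss-Hermite integration, can be sketched for a single Poisson count observation. The prior parameters and quadrature order below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def posterior_moments_gh(y, m, s, order=30):
    """First two posterior moments of a linear predictor eta with
    prior N(m, s^2) and likelihood y ~ Poisson(exp(eta)),
    computed by Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(order)
    eta = m + np.sqrt(2.0) * s * x            # nodes reparametrized to the prior
    loglik = y * eta - np.exp(eta)            # Poisson log-likelihood (y! dropped)
    lik = np.exp(loglik - loglik.max())       # stabilized likelihood values
    z = np.sum(w * lik)                       # normalizing constant
    mean = np.sum(w * lik * eta) / z
    var = np.sum(w * lik * (eta - mean) ** 2) / z
    return mean, var

mean, var = posterior_moments_gh(y=4, m=0.0, s=1.0)
print(f"posterior mean {mean:.3f}, variance {var:.3f}")
```

In the paper's filter the reparametrization is chosen adaptively around an approximate posterior mode; here the nodes are simply centred on the prior, which is adequate for this mild example.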
Series: Forschungsberichte / Institut für Statistik
APA, Harvard, Vancouver, ISO, and other styles
39

Leininger, Thomas J. "An Adaptive Bayesian Approach to Dose-Response Modeling." Diss., CLICK HERE for online access, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3325.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Calmon, Andre du Pin. "Variação do controle como fonte de incerteza." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259270.

Full text
Abstract:
Advisor: João Bosco Ribeiro do Val
Dissertation (Master's), Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This dissertation presents a theoretical framework and the control strategy for discrete-time stochastic systems for which control variations increase state uncertainty (CVIU systems). This type of system model can be useful in many practical situations, such as in monetary policy problems, medicine and biology, and, in general, in problems for which a complete dynamic model is too complex to be feasible. The optimal control strategy for a multidimensional CVIU system associated with a convex cost functional is devised using dynamic programming and tools from nonsmooth analysis. Furthermore, this strategy points to a region in the state space in which the optimal action is of no variation (the region of no variation), as expected from the cautionary nature of controlling underdetermined systems. Numerical strategies for obtaining the optimal policy in CVIU systems were developed, with focus on the single-input case evaluated through a quadratic cost functional. These results are illustrated through a numerical example in economics.
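The cautionary behaviour described above can be illustrated with a one-step lookahead for a scalar system in which varying the control inflates the state noise: the kink at zero variation produces a set of states where the optimal action is no variation. All parameter values below are assumptions, and the myopic cost is a simplification of the full dynamic program:

```python
import numpy as np

# One-step lookahead for a scalar CVIU-type system:
#   x+ = a*x + b*du + (sig0 + sigu*|du|) * w,  w ~ N(0,1)
# Varying the control (du != 0) inflates the state noise, so near the
# origin the cheapest action is to leave the control unchanged.
a, b, sig0, sigu, r = 0.9, 1.0, 0.5, 2.0, 0.1

def myopic_cost(x, du):
    mean_next = a * x + b * du                        # expected next state
    return mean_next**2 + (sig0 + sigu * abs(du))**2 + r * du**2

du_grid = np.linspace(-3, 3, 1201)                    # candidate control variations

def best_du(x):
    costs = [myopic_cost(x, du) for du in du_grid]
    return du_grid[int(np.argmin(costs))]

# States where the optimal variation is (numerically) zero:
no_variation = [x for x in np.linspace(-5, 5, 201) if abs(best_du(x)) < 1e-9]
print(f"no-variation region roughly [{min(no_variation):.2f}, {max(no_variation):.2f}]")
```

The non-smooth |du| term in the noise amplitude is what creates the interval around the origin where acting is more expensive than waiting, mirroring the region of no variation derived in the dissertation.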
Master's degree in Electrical Engineering (Automation)
APA, Harvard, Vancouver, ISO, and other styles
41

Cyriac, Praveen. "Tone mapping based on natural image statistics and visual perception models." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/402574.

Full text
Abstract:
High Dynamic Range (HDR) imaging techniques potentially allow for the capture and storage of the full information of light in a scene. However, common display devices are limited in terms of their contrast and brightness capabilities, thus HDR images must be tone mapped before presentation on a display device to ensure that the original appearance of the scene is reproduced. In this thesis, we take two approaches to the tone mapping problem. First, we develop a general framework for improving any tone mapped image by reducing the distance to the corresponding HDR image in terms of a non-local perceptual metric. The distance is minimized by means of a gradient descent algorithm. Second, we develop a real-time Tone Mapping Operator (TMO) that is well suited to the statistics of natural scenes, and is in keeping with new psychophysical findings and neurophysical data. We determine the adequate non-linear adjustments needed for our tone mapping results to look best in different viewing conditions through a psychophysical experiment and develop an automatic method that can predict the experimental data. Our TMO produces results that look natural, without any spatio-temporal artifacts. User preference tests show that our method outperforms state-of-the-art approaches. The TMO is fast and could be implemented on camera hardware. It can be used for on-set monitoring of HDR cameras on regular displays, as a substitute for gamma correction, and as a way of providing the colorist with content that is both natural looking and has a crisp and clear appearance.
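For orientation, a textbook global tone mapping operator (Reinhard-style compression followed by gamma encoding) shows the basic range reduction that any HDR-to-display pipeline performs; it is not the TMO developed in the thesis:

```python
import numpy as np

def simple_tmo(hdr, gamma=2.2, key=0.18):
    """Illustrative global tone mapping: scale by the log-average
    luminance, compress with L/(1+L), then gamma-encode for display.
    (A textbook Reinhard-style operator, not the thesis's TMO.)"""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(hdr + eps)))   # log-average luminance
    scaled = key * hdr / log_avg                   # map log-average to mid-grey
    compressed = scaled / (1.0 + scaled)           # [0, inf) -> [0, 1)
    return np.clip(compressed, 0, 1) ** (1.0 / gamma)

# A synthetic HDR luminance ramp spanning six orders of magnitude.
hdr = np.logspace(-2, 4, 64)
ldr = simple_tmo(hdr)
print(ldr.min(), ldr.max())   # everything lands in [0, 1]
```

The thesis's operator additionally adapts its non-linearity to natural-image statistics and viewing conditions; the fixed L/(1+L) curve here is only the simplest member of that family.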
APA, Harvard, Vancouver, ISO, and other styles
42

Canales, Cristian M. "Population structure and spatio-temporal modelling of biological attributes and population dynamic of nylon shrimp (Heterocarpus reedi) off central Chile (25°-36°S) = Estructura poblacional y modelamiento espacio-temporal de los atributos biológicos y la dinámica poblacional del camarón nailon Heterocarpus reedi (Decapoda, Caridea) frente a Chile central (25°-36°S)." Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/400612.

Full text
Abstract:
The population structure of fishery resources and the impact of environmental factors on their productivity are important processes to consider in fisheries management. Environmental factors can determine both the success of larval drift and the spatial structure of the population and its changes in biomass. Accordingly, two key elements in the adequate, sustained exploitation of any fishery should be considered: the biological attributes of the species and how these vary over time and space. Research is needed to obtain a more thorough understanding of these effects, how they vary, and how they relate to environmental factors. However, spatial processes rarely constitute an explicit consideration in the evaluation and management of marine invertebrate populations, and they are particularly important where larval drift acts as one of the main mechanisms of population expansion. The ecological concept of the metapopulation is widely used and accepted for understanding low-mobility marine populations, and its implications for fishery management should be considered. In this work we show the environmental effect on the distribution, abundance and spatial structure of the nylon shrimp population (Heterocarpus reedi) off central Chile (25°-37°S), based on trawl surveys carried out between 1996 and 2011. The environmental variables considered were sea-surface chlorophyll-a concentration and dissolved organic matter. Results show a geographical separation of the population at around 32°S. Shrimp density is higher in the southern zone, where concentrations of chlorophyll-a and dissolved organic matter are high due to the presence of river tributaries and coastal upwelling zones. This area concentrates the bulk of the adult population, which could act as a "source" population; its influence on larval drift could explain both the preponderance of juveniles in the northern area and the smaller size of that population (a "pseudo-sink" population).
In the southern area, a process of spatial and bathymetric expansion has driven the increase in population size over time, with colonization and individual somatic growth as the main mechanisms. We found that periods of good environmental conditions explain high densities of shrimp with a delay of two years, which might be related mainly to larval survival and enhanced recruitment and somatic growth. To cross-check this proposal, and based on a complementary information source, 17 years of biological data collected from the nylon shrimp fishery off central Chile were analyzed. We analyzed these data using generalized linear models to determine the factors responsible for changes in carapace length, body weight, maturity, and sex ratio. The better physical condition and reproductive attributes of H. reedi south of 32°S would be related to the better environmental and food conditions in this zone: individuals are larger, females are longer at first maturity (CL50%), and mature females are less prevalent. We outline a theoretical foundation that can guide future research on H. reedi, and we suggest that future conservation measures consider biological attributes within a spatial context. Finally, in order to contrast the different hypotheses of population structure and spatial connectivity proposed in this work, we propose a length-based population model and analyze the biological and fishery information available since 1945 under three hypotheses based on the connectivity rate of two subpopulations located to the north and south of 32°S. The results show that, statistically, several hypotheses can explain the data. The most likely hypothesis is that of a metapopulation in which the southern zone acts as a source population (reproductive refuge) and determines, partially or totally, the arrival of recruits in the northern zone, thereby explaining the population increase over the last decade.
According to our study, further empirical evidence would strengthen the hypothesis of spatial connectivity. Given the implications for managing this fishery, special attention should be paid to the biological-fishery conditions recorded south of 32°S, exploitation levels there should be treated with caution, and the zone should be considered a reproductive refuge for nylon shrimp.
APA, Harvard, Vancouver, ISO, and other styles
43

SANTOS, Watson Robert Macedo. "Metodos para Solução da Equação HJB-Riccati via Famíla de Estimadores Parametricos RLS Simplificados e Dependentes de Modelo." Universidade Federal do Maranhão, 2014. http://tedebc.ufma.br:8080/jspui/handle/tede/1892.

Full text
Abstract:
Due to the demand for high-performance equipment and the rising cost of energy, the industrial sector develops equipment designed to minimize its operational costs. These requirements generate a demand for the design and implementation of high-performance control systems. Optimal control theory is an alternative for solving this problem, because its design considers the normative specifications of the system as well as those related to operational costs. Motivated by these perspectives, we present the study of methods and the development of algorithms for the approximate solution of the Hamilton-Jacobi-Bellman equation, in the form of a discrete Riccati equation, both model-free and model-dependent for the dynamic system. The proposed solutions are developed in the context of adaptive dynamic programming and are based on methods for the online design of optimal control systems of the discrete linear quadratic regulator (DLQR) type. The proposed approach is evaluated on multivariable dynamic system models in order to assess the prospects of the optimal control law for online implementation.
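On the model-based side, the discrete Riccati equation underlying the DLQR can be solved by plain fixed-point iteration, the offline baseline that ADP schemes approximate online. A minimal sketch with an assumed second-order plant (the thesis's RLS-based estimator variants are not reproduced here):

```python
import numpy as np

def dlqr_riccati(A, B, Q, R, iters=500):
    """Solve the discrete algebraic Riccati equation by fixed-point
    iteration and return (P, K) for the DLQR law u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
    return P, K

# An open-loop-unstable second-order example (values assumed for illustration).
A = np.array([[1.1, 0.3], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P, K = dlqr_riccati(A, B, Q, R)
radius = max(abs(np.linalg.eigvals(A - B @ K)))
print("closed-loop spectral radius:", radius)   # < 1: stabilized
```

Each pass of the loop is one step of value iteration for the quadratic cost; the ADP/RLS methods studied in the thesis aim to reach the same fixed point from measured data, without (or with partial) knowledge of A and B.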
APA, Harvard, Vancouver, ISO, and other styles
44

Song, Brian Inhyok. "EXPERIMENTAL AND ANALYTICAL ASSESSMENT ON THE PROGRESSIVE COLLAPSE POTENTIAL OF EXISTING BUILDINGS." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1281712538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Niezen, Gerrit. "The optimization of gesture recognition techniques for resource-constrained devices." Diss., Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-01262009-125121/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Мокін, Б. І., and І. О. Чернова. "Еквівалентні моделі динамічних систем з операцією диференціювання у правій частині." Thesis, ВНТУ, 2016. http://conferences.vntu.edu.ua/index.php/all-feeem/all-feeem-2016/paper/view/525.

Full text
Abstract:
A method is proposed for constructing, at the critical frequency, equivalent third-order models of high-order linear dynamic systems that contain derivatives on the right-hand side of the mathematical model; the reduced models are suitable for stability analysis and optimization.
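The abstract does not detail the critical-frequency equivalencing method itself. As a generic illustration of the same goal, fitting a third-order model to samples of a higher-order system's frequency response, here is a linear least-squares sketch (Levy's linearization). This is an illustrative stand-in, not the thesis's method:

```python
import numpy as np

# Fit N(s)/D(s) with N(s) = n2 s^2 + n1 s + n0 and
# D(s) = s^3 + d2 s^2 + d1 s + d0 to frequency-response samples H(jw),
# by linearizing N(jw) - H * D(jw) = 0 (Levy's method).
def fit_third_order(w, H):
    jw = 1j * w
    # One row per frequency; unknowns: [n0, n1, n2, d0, d1, d2]
    A = np.column_stack([np.ones_like(jw), jw, jw**2,
                         -H, -H * jw, -H * jw**2])
    b = H * jw**3
    # Stack real and imaginary parts into a real least-squares problem
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([b.real, b.imag])
    x, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    n0, n1, n2, d0, d1, d2 = x
    return (n2, n1, n0), (1.0, d2, d1, d0)  # highest-degree coefficients first

# Sanity check: data generated from a known third-order model
# (poles at -1, -2, -3; numerator 2s + 1) should be recovered exactly.
w = np.linspace(0.1, 10.0, 200)
H = (2 * 1j * w + 1) / ((1j * w)**3 + 6 * (1j * w)**2 + 11 * (1j * w) + 6)
num, den = fit_third_order(w, H)
```

When the data come from a genuinely higher-order system, the same fit yields the best third-order approximation in the linearized least-squares sense.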
APA, Harvard, Vancouver, ISO, and other styles
47

Legrand, Nicolas. "Reconsidérer le modèle de stockage compétitif comme outil d’analyse empirique de la volatilité des prix des matières premières." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLA012.

Full text
Abstract:
This thesis proposes an empirical and theoretical analysis of commodity price volatility using the competitive storage model with rational expectations. In essence, the underlying storage theory states that commodity prices are likely to spike when inventory levels are low and cannot buffer the market from exogenous shocks. The prime objective pursued in this dissertation is to use statistical tools to confront the storage model with the data, in an attempt to gauge the empirical merit of the storage theory, identify its potential flaws, and provide possible remedies for improving its explanatory power. In this respect, the variety of econometric strategies employed so far to test the model itself or its theoretical predictions is reviewed in the opening survey (Ch. 2). The subsequent chapters explore three different routes for increasing the empirical relevance of the storage framework. Chapter 3 rests on the idea that there might exist long-term movements in the raw commodity price series which have nothing to do with the storage theory. This is supported by results obtained with a hybrid estimation method that recovers the model's deep parameters jointly with those characterizing the trend. In Chapter 4, the testing of the storage theory is pushed even further thanks to the development of an empirical strategy for taking the storage model to the data on both prices and quantities, for the first time in the literature. Another novelty is that Bayesian methods are used for inference, in contrast to the frequentist approaches employed thus far. Both these innovations should help pave the way for future research by allowing the estimation of more complex model set-ups. The last chapter is more theoretical, extending the storage model on the supply side to account for the dynamics of capital accumulation. The key finding is the crowding-out effect of storage on investment.
APA, Harvard, Vancouver, ISO, and other styles
48

Jmal, Hamdi. "Identification du comportement quasi-statique et dynamique de la mousse de polyuérathane au travers de modèles de mémoire." Phd thesis, Université de Haute Alsace - Mulhouse, 2012. http://tel.archives-ouvertes.fr/tel-01017088.

Full text
Abstract:
Polyurethane foam is a cellular material with an attractive spectrum of mechanical properties: low density, the capacity to absorb deformation energy, and low stiffness. It also offers excellent thermal and acoustic insulation, strong liquid absorption, and complex light diffusion. This spectrum of properties makes polyurethane foam one of the materials commonly used in many acoustic, thermal, and comfort applications. To control the vibration transmitted to seat occupants, several automatic regulation and control devices are currently under development, such as active and semi-active dampers. Their performance naturally depends on predicting the behavior of all seat components, in particular the foam. More generally, it is essential to model the complex mechanical behavior of polyurethane foam and to identify its quasi-static and dynamic properties in order to optimize the design of systems that include foam, particularly with respect to comfort. With this in mind, the main objective of this thesis is to implement reliable mechanical models of polyurethane foam capable of predicting its response under different test conditions. The literature contains various models, such as integer-order and fractional-order memory models. Their major drawback is that their parameters depend on the test conditions, which limits how generally they represent the quasi-static and dynamic behavior of polyurethane foam.
To overcome this drawback, we developed models that, through judicious choices of identification methods, represent the quasi-static and dynamic behavior of polyurethane foam more generally. Indeed, we showed that the dimensional parameters of the developed models can be expressed as the product of two independent parts: one grouping the test conditions and another defining the dimensionless, invariant parameters that characterize the material. These results were obtained from several experimental studies investigating the quasi-static behavior (through unidirectional compression tests) and the dynamic behavior (through sustained-vibration tests). Under large deformations, the foam exhibits both nonlinear elastic and viscoelastic behavior. In addition, the developed models, particularly the quasi-static ones, were compared against one another, and the advantages and limits of each were discussed.
APA, Harvard, Vancouver, ISO, and other styles
49

Pasin, Chloé. "Modélisation et optimisation de la réponse à des vaccins et à des interventions immunothérapeutiques : application au virus Ebola et au VIH." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0208/document.

Full text
Abstract:
Vaccines have been one of the most successful developments in public health in recent years. However, a major challenge still resides in developing effective vaccines against infectious diseases such as HIV or the Ebola virus. This can be attributed to our lack of deep knowledge in immunology and of the mode of action of immune memory. Mathematical models can help in understanding the mechanisms of the immune response, quantifying the underlying biological processes, and eventually developing vaccines based on a solid rationale. First, we present a mechanistic model, based on ordinary differential equations, for the dynamics of the humoral immune response following Ebola vaccine immunizations. The parameters of the model are estimated by likelihood maximization in a population approach, which quantifies the immune response process and its factors of variability. In particular, the vaccine regimen is found to impact the response only in the short term, while significant differences between subjects from different geographic locations appear over the longer term. This could have implications for the design of future clinical trials. Then, we develop a numerical tool based on dynamic programming for optimizing schedules of repeated injections. In particular, we focus on HIV-infected patients who are under treatment but unable to recover their immune system. Repeated injections of an immunotherapeutic product (IL-7) are considered for improving the health of these patients. The process is modeled by a piecewise deterministic Markov model, and recent results from impulse control theory allow the problem to be solved numerically with an iterative sequence. We show in a proof of concept that this method can be applied to a number of pseudo-patients. Altogether, these results are part of an effort to develop sophisticated methods for analyzing data from clinical trials in order to answer concrete clinical questions.
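A mechanistic ODE model of the humoral response of the kind described in the abstract might, in minimal form, look like the following sketch. The compartments, equations, and parameter values here are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

# Assumed model: short-lived (S) and long-lived (L) antibody-secreting
# cells decay linearly and produce antibodies (Ab), which also decay:
#   dS/dt  = -delta_S * S
#   dL/dt  = -delta_L * L
#   dAb/dt = theta_S * S + theta_L * L - delta_Ab * Ab
def simulate(s0, l0, ab0, theta_s, theta_l,
             delta_s, delta_l, delta_ab, t_end, dt=0.01):
    """Forward-Euler integration of the three ODEs above; returns Ab(t)."""
    n = int(t_end / dt)
    s, l, ab = s0, l0, ab0
    traj = np.empty(n + 1)
    traj[0] = ab
    for i in range(1, n + 1):
        ds = -delta_s * s
        dl = -delta_l * l
        dab = theta_s * s + theta_l * l - delta_ab * ab
        s, l, ab = s + dt * ds, l + dt * dl, ab + dt * dab
        traj[i] = ab
    return traj

# Short-lived cells drive an early antibody peak; long-lived cells
# sustain a slowly decaying long-term level (parameter values assumed).
ab = simulate(s0=1000, l0=50, ab0=0,
              theta_s=1.0, theta_l=1.0,
              delta_s=0.2, delta_l=0.005, delta_ab=0.05,
              t_end=365)
```

In the thesis such a model is fitted to trial data by population-based maximum likelihood; the sketch only reproduces the forward dynamics.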
APA, Harvard, Vancouver, ISO, and other styles
50

Vacher, Blandine. "Techniques d'optimisation appliquées au pilotage de la solution GTP X-PTS pour la préparation de commandes intégrant un ASRS." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2566.

Full text
Abstract:
The work presented in this PhD thesis deals with optimization problems in the context of internal warehouse logistics. The field is subject to strong competition and extensive growth, driven by the growing needs of the market and favored by automation. SAVOYE builds warehouse storage and handling equipment and offers its own GTP (Goods-To-Person) solution for order picking. The solution uses an Automated Storage and Retrieval System (ASRS) called the X-Picking Tray System (X-PTS) and automatically routes loads to workstations via carousels to perform sequenced operations. It is a highly complex system of systems with many applications for operations research techniques. All this defines the applicative and theoretical scope of the work carried out in this thesis. We first dealt with a Job Shop scheduling problem with precedence constraints. The particular context of this problem allowed us to solve it in polynomial time with exact algorithms. These algorithms made it possible to compute the injection schedule of the loads coming from the different storage output streams so that they aggregate on a carousel in a given order. Thus, the inter-aisle management of the X-PTS storage was improved and the throughput of the load flow, from storage to a workstation, was maximized. Next, the LSD (Least Significant Digit) radix sort algorithm was studied and a dedicated online sorting algorithm was developed; the latter is used to drive autonomous sorting systems called Buffer Sequencers (BS), which are placed upstream of each workstation in the GTP solution and delocalize the sorting function downstream of the storage, thereby increasing the throughput of the output streams. Finally, a sequencing problem was considered, consisting of finding a linear extension of a partial order that minimizes a distance to a given order. An integer linear programming approach, several variants of dynamic programming, and greedy algorithms were proposed to solve it.
An efficient heuristic was developed, based on iterative calls to dynamic programming routines, that reaches a solution close or equal to the optimum in a very short time. Applying this problem to the unordered output streams of the X-PTS storage enables pre-sorting at the carousel level. The various solutions developed have been validated by simulation, and some have been patented and/or already implemented in warehouses.
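The LSD radix sort named in the abstract can be sketched in a few lines. This is a generic textbook version for non-negative integers, not the thesis's dedicated online variant:

```python
def radix_sort_lsd(nums, base=10):
    """LSD radix sort for non-negative integers: a stable counting
    (bucket) pass per digit, least significant digit first."""
    if not nums:
        return []
    max_val = max(nums)
    out = list(nums)
    exp = 1
    while max_val // exp > 0:
        buckets = [[] for _ in range(base)]
        for n in out:
            buckets[(n // exp) % base].append(n)  # stable per-digit pass
        out = [n for b in buckets for n in b]
        exp *= base
    return out
```

Because each pass is stable, earlier (less significant) digit orderings are preserved, which is what makes the overall result sorted after the final pass.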
APA, Harvard, Vancouver, ISO, and other styles
