Dissertations / Theses on the topic 'Structural Equations Estimation Model'

Consult the top 50 dissertations / theses for your research on the topic 'Structural Equations Estimation Model.'

1

Zheng, Xueying, and 郑雪莹. "Robust joint mean-covariance model selection and time-varying correlation structure estimation for dependent data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50899703.

Full text
Abstract:
In longitudinal and spatio-temporal data analysis, repeated measurements on a subject can be dependent across regions or over time. Correct specification of the within-subject covariance matrix improves the efficiency of the estimated mean regression coefficients. This thesis develops robust joint estimation of the mean and covariance for regression models of longitudinal data within the framework of generalized estimating equations (GEE). The proposed approach integrates the robust method and joint mean-covariance regression modeling. Robust generalized estimating equations using bounded scores and leverage-based weights are employed for the mean and covariance to achieve robustness against outliers. The resulting estimators are shown to be consistent and asymptotically normally distributed. Robust variable selection in the joint mean and covariance model is then considered by proposing a set of penalized robust generalized estimating equations that simultaneously estimate the mean regression coefficients, the generalized autoregressive coefficients and the innovation variances introduced by the modified Cholesky decomposition. These estimating equations select the important covariates in both the mean and covariance models as part of the estimation procedure. Under some regularity conditions, the oracle property of the proposed robust variable selection method is established. For these two robust joint mean and covariance models, simulation studies and an analysis of a hormone data set are carried out to assess and illustrate small-sample performance; they show that the proposed methods perform favorably by combining robustification and penalized estimating techniques in the joint mean and covariance model. Capturing dynamic changes of a time-varying correlation structure is both interesting and scientifically important in spatio-temporal data analysis. The time-varying empirical estimator of the spatial correlation matrix is approximated by groups of selected basis matrices which represent substructures of the correlation matrix. After projecting the correlation structure onto the space spanned by the basis matrices, varying-coefficient model selection and estimation for the signals associated with the relevant basis matrices are incorporated. The unique feature of the proposed model and estimation is that time-dependent local regional signals can be detected by the proposed penalized objective function. In theory, model selection consistency for detecting local signals is provided. The proposed method is illustrated through simulation studies and a functional magnetic resonance imaging (fMRI) data set from an attention deficit hyperactivity disorder (ADHD) study.
Doctor of Philosophy, Statistics and Actuarial Science (published/final version).
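The modified Cholesky decomposition mentioned in this abstract reparameterizes a within-subject covariance matrix through generalized autoregressive coefficients and innovation variances. The sketch below is a minimal illustration of that reparameterization only (not the author's robust GEE procedure); the example covariance matrix and names are hypothetical.

```python
import numpy as np

# Modified Cholesky decomposition: for a covariance matrix Sigma, find a unit
# lower-triangular T and a diagonal D with T Sigma T' = D. The negatives of the
# sub-diagonal entries of T are the generalized autoregressive coefficients, and
# the diagonal of D holds the innovation variances referred to in the abstract.

def modified_cholesky(sigma):
    L = np.linalg.cholesky(sigma)          # Sigma = L L'
    d_half = np.diag(np.diag(L))           # scale factors
    C = L @ np.linalg.inv(d_half)          # unit lower-triangular, Sigma = C D C'
    T = np.linalg.inv(C)                   # T Sigma T' = D
    return T, np.diag(d_half) ** 2

# Hypothetical AR(1)-like covariance for 4 repeated measurements
rho = 0.6
sigma = rho ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
T, innov_var = modified_cholesky(sigma)
print(np.round(T, 3))          # -T[i, j] (j < i) are the autoregressive coefficients
print(np.round(innov_var, 3))  # innovation variances
np.testing.assert_allclose(T @ sigma @ T.T, np.diag(innov_var), atol=1e-10)
```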
2

Jonnavithula, Siva S. "Development of structural equations models of statewide freight flows." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000278.

Full text
3

Asgari, Hamidreza. "On the Impacts of Telecommuting over Daily Activity/Travel Behavior: A Comprehensive Investigation through Different Telecommuting Patterns." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2182.

Full text
Abstract:
The interest in telecommuting stems from the potential benefits in alleviating traffic congestion, decreasing vehicle miles traveled (VMT), and improving air quality by reducing the necessity for travel between home and the workplace. Despite the potential economic, environmental, and social benefits, telecommuting has not been widely adopted, and there is little consensus on the actual impacts of telecommuting. One of the major hurdles is lack of a sound instrument to quantify the impacts of telecommuting on individuals’ travel behavior. As a result, the telecommuting phenomenon has not received proper attention in most transportation planning and investment decisions, if not completely ignored. This dissertation addresses the knowledge gap in telecommuting studies by examining several factors. First, it proposes a comprehensive outline to reveal and represent the complexity in telecommuting patterns. There are various types of telecommuting engagement, with different impacts on travel outcomes. It is necessary to identify and distinguish between those people for whom telecommuting involves a substitution of work travel and those for whom telecommuting is an ancillary activity. Secondly, it enhances the current modeling framework by supplementing the choice/frequency approach with daily telework dimensions, since the traditional approach fails to recognize the randomness of telecommuting engagement in a daily context. A multi-stage modeling structure is developed, which incorporates choice, frequency, engagement, and commute, as the fundamental dimensions of telecommuting activity. One pioneering perspective of this methodology is that it identifies non-regular telecommuters, who represent a significant share of daily telecommuters. Lastly, advanced statistical modeling techniques are employed to measure the actual impacts of each telecommuting arrangement on travelers’ daily activity-travel behavior, focusing on time-use analysis and work trip departure times. This research provides a systematic and sound instrument that advances the understanding of the benefits and potentials of telecommuting and impacts on travel outcomes. It is expected to facilitate policy and decision makers with higher accuracy and contribute to the better design and analysis of transportation investment decisions.
4

Koch, Rainer, Ulrich Julius, Werner Jaross, and Hans-Egbert Schröder. "Estimation of the Heritability of Latent Variables Which Are Included in a Structural Model for Metabolic Syndrome." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-137470.

Full text
Abstract:
In a study looking for risk factors of atherosclerosis in families with combined hyperlipidemia and hypertension, clinical and biochemical data of 1,149 persons were analyzed to develop two hypothetical multivariate scores concerning the degree to which a patient is affected by the metabolic syndrome. The scores are based on a structural model for low-density cholesterol (LDL) and high-density cholesterol (HDL), triglycerides, uric acid, creatinine, glucose, insulin, systolic blood pressure and waist-to-hip ratio. Age, gender and body mass index were used for adjusting all variables. In segregation analyses of 42 pedigrees without using genotype information, estimations of the heritabilities and environmentally caused variance and covariance components were computed for the individual score values of the two latent factors. The first score shows a heritability of 42%; the environment component disappeared. The score mainly reflects the HDL, LDL and triglyceride levels. The second score shows a heritability of 16% with an environment component of 7%. It includes mainly insulin, uric acid and creatinine. In the search for genetic causes, both scores could be a basis for further phenotypic classification of the metabolic syndrome
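For readers unfamiliar with the variance-decomposition language used here, the reported heritability of each latent score is the share of its total variance attributed to genetic effects. A schematic decomposition (my notation, not the authors') is:

```latex
\sigma^2_{\text{total}} = \sigma^2_{\text{genetic}} + \sigma^2_{\text{environment}} + \sigma^2_{\text{residual}},
\qquad
h^2 = \frac{\sigma^2_{\text{genetic}}}{\sigma^2_{\text{total}}}
```

So the reported 42% and 16% heritabilities are the genetic shares of the total variance of the two latent scores, and the 7% environment component of the second score is the analogous environmental share.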
This article is freely accessible with the consent of the rights holder under a (DFG-funded) Alliance or National Licence.
5

Koch, Rainer, Ulrich Julius, Werner Jaross, and Hans-Egbert Schröder. "Estimation of the Heritability of Latent Variables Which Are Included in a Structural Model for Metabolic Syndrome." Karger, 2001. https://tud.qucosa.de/id/qucosa%3A27735.

Full text
Abstract:
In a study looking for risk factors of atherosclerosis in families with combined hyperlipidemia and hypertension, clinical and biochemical data of 1,149 persons were analyzed to develop two hypothetical multivariate scores concerning the degree to which a patient is affected by the metabolic syndrome. The scores are based on a structural model for low-density cholesterol (LDL) and high-density cholesterol (HDL), triglycerides, uric acid, creatinine, glucose, insulin, systolic blood pressure and waist-to-hip ratio. Age, gender and body mass index were used for adjusting all variables. In segregation analyses of 42 pedigrees without using genotype information, estimations of the heritabilities and environmentally caused variance and covariance components were computed for the individual score values of the two latent factors. The first score shows a heritability of 42%; the environment component disappeared. The score mainly reflects the HDL, LDL and triglyceride levels. The second score shows a heritability of 16% with an environment component of 7%. It includes mainly insulin, uric acid and creatinine. In the search for genetic causes, both scores could be a basis for further phenotypic classification of the metabolic syndrome.
This article is freely accessible with the consent of the rights holder under a (DFG-funded) Alliance or National Licence.
6

Katsikatsou, Myrsini. "Composite Likelihood Estimation for Latent Variable Models with Ordinal and Continuous, or Ranking Variables." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-188342.

Full text
Abstract:
The estimation of latent variable models with ordinal and continuous, or ranking variables is the research focus of this thesis. The existing estimation methods are discussed and a composite likelihood approach is developed. The main advantages of the new method are its low computational complexity which remains unchanged regardless of the model size, and that it yields an asymptotically unbiased, consistent, and normally distributed estimator. The thesis consists of four papers. The first one investigates the two main formulations of the unrestricted Thurstonian model for ranking data along with the corresponding identification constraints. It is found that the extra identifications constraints required in one of them lead to unreliable estimates unless the constraints coincide with the true values of the fixed parameters. In the second paper, a pairwise likelihood (PL) estimation is developed for factor analysis models with ordinal variables. The performance of PL is studied in terms of bias and mean squared error (MSE) and compared with that of the conventional estimation methods via a simulation study and through some real data examples. It is found that the PL estimates and standard errors have very small bias and MSE both decreasing with the sample size, and that the method is competitive to the conventional ones. The results of the first two papers lead to the next one where PL estimation is adjusted to the unrestricted Thurstonian ranking model. As before, the performance of the proposed approach is studied through a simulation study with respect to relative bias and relative MSE and in comparison with the conventional estimation methods. The conclusions are similar to those of the second paper. The last paper extends the PL estimation to the whole structural equation modeling framework where data may include both ordinal and continuous variables as well as covariates. The approach is demonstrated through an example run in R software. The code used has been incorporated in the R package lavaan (version 0.5-11).
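To make the pairwise likelihood idea concrete: for two ordinal indicators of a single factor, the probability of observing category pair (a, b) is a rectangle probability of a bivariate normal whose correlation is the product of the two standardized loadings. A minimal sketch of one pairwise log-likelihood term is shown below; thresholds and loadings are illustrative values, and this is not the estimator implemented in lavaan.

```python
import numpy as np
from scipy.stats import multivariate_normal

def bvn_cdf(x, y, rho):
    """P(X <= x, Y <= y) for a standard bivariate normal with correlation rho."""
    return multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).cdf([x, y])

def pairwise_loglik_term(a, b, tau1, tau2, lam1, lam2):
    """Log-probability that ordinal items 1 and 2 fall in categories a and b.

    tau1, tau2 : threshold vectors padded with -10 and +10 (standing in for +/- infinity).
    lam1, lam2 : standardized factor loadings; implied underlying correlation = lam1 * lam2.
    """
    rho = lam1 * lam2
    p = (bvn_cdf(tau1[a + 1], tau2[b + 1], rho)
         - bvn_cdf(tau1[a], tau2[b + 1], rho)
         - bvn_cdf(tau1[a + 1], tau2[b], rho)
         + bvn_cdf(tau1[a], tau2[b], rho))
    return np.log(max(p, 1e-300))

# Hypothetical 3-category items: common thresholds and two loadings
tau = np.array([-10.0, -0.5, 0.8, 10.0])
print(pairwise_loglik_term(a=1, b=2, tau1=tau, tau2=tau, lam1=0.7, lam2=0.6))
```

Summing such terms over all item pairs and observations gives the pairwise log-likelihood that is maximized in the papers described above.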
7

Bodine, Andrew James. "A Monte Carlo Investigation of Fit Statistic Behavior in Measurement Models Assessed Using Limited-and Full-Information Estimation." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1433412282.

Full text
8

Vieira, Catarina Augusto Pires. "Motivações rurais, autenticidade e intenções comportamentais dos turistas." Master's thesis, Instituto Superior de Economia e Gestão, 2021. http://hdl.handle.net/10400.5/23353.

Full text
Abstract:
Bologna Master's in Marketing
The challenge is to develop strategies that create positive perceptions in tourists, inducing stimuli to revisit and recommend rural tourism destinations, thereby contributing to the sustainability of the rural world, to the stability and personal well-being of host communities and tourists, and to greater territorial cohesion. This challenge is met by valuing the authenticity of rural destinations. The objective of this dissertation is to develop a model explaining the influence of existential authenticity on behavioural intentions, with authenticity proposed as a mediating construct in the relationship between rural tourism motivations and behavioural intentions. The dissertation followed a quantitative approach, based on a questionnaire survey applied to a non-probabilistic convenience sample of 399 respondents. In line with the proposed objectives, a structural equation model estimated with PLS (Partial Least Squares) was used, and it was possible to confirm the importance of existential authenticity in the intentions to revisit and recommend rural destinations. It was concluded, a posteriori, that existential authenticity plays a mediating role between tourism motivations and intentions, but only for tourists who are motivated by the pursuit of relaxation and learning.
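As a rough illustration of the mediation logic tested in the dissertation (authenticity mediating the motivation-intention link), the sketch below uses simple composite scores and OLS rather than full PLS-SEM; all variable names and data are hypothetical, so it only mirrors the structure of the analysis, not its results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 399  # sample size reported in the abstract

# Hypothetical composite scores (e.g., means of Likert-type indicators)
motivation = rng.normal(size=n)                                   # rural tourism motivations
authenticity = 0.5 * motivation + rng.normal(scale=0.8, size=n)   # existential authenticity
intention = 0.4 * authenticity + 0.1 * motivation + rng.normal(scale=0.8, size=n)

def ols(y, X):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, slopes...]

a = ols(authenticity, motivation)[1]                                      # motivation -> authenticity
b, c_prime = ols(intention, np.column_stack([authenticity, motivation]))[1:]
print("indirect effect a*b =", round(a * b, 3), " direct effect c' =", round(c_prime, 3))
```

In PLS-SEM the composites are weighted (not simple means) and significance is assessed by bootstrapping, but the mediation quantity of interest is the same indirect effect a*b.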
9

Codd, Casey L. "Nonlinear Structural Equation Models: Estimation and Applications." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1301409131.

Full text
10

Ciraki, Dario. "Dynamic structural equation models : estimation and interference." Thesis, London School of Economics and Political Science (University of London), 2007. http://etheses.lse.ac.uk/2937/.

Full text
Abstract:
The thesis focuses on estimation of dynamic structural equation models in which some or all variables might be unobservable (latent) or measured with error. Moreover, we consider the situation where latent variables can be measured with multiple observable indicators and where lagged values of latent variables might be included in the model. This situation leads to a dynamic structural equation model (DSEM), which can be viewed as dynamic generalisation of the structural equation model (SEM). Taking the mismeasurement problem into account aims at reducing or eliminating the errors-in-variables bias and hence at minimising the chance of obtaining incorrect coefficient estimates. Furthermore, such methods can be used to improve measurement of latent variables and to obtain more accurate forecasts. The thesis aims to make a contribution to the literature in four areas. Firstly, we propose a unifying theoretical framework for the analysis of dynamic structural equation models. Secondly, we provide analytical results for both panel and time series DSEM models along with the software implementation suggestions. Thirdly, we propose non-parametric estimation methods that can also be used for obtaining starting values in maximum likelihood estimation. Finally, we illustrate these methods on several real data examples demonstrating the capabilities of the currently available software as well as importance of good starting values.
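To fix ideas, a DSEM of the kind described here combines a latent autoregressive process with multiple error-prone indicators. The short simulation below generates data from such a structure (one latent variable, three indicators) and shows the errors-in-variables bias that motivates the approach; the parameter values are arbitrary illustrations, not estimates from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi = 200, 0.7                       # time points, latent AR(1) coefficient
lam = np.array([1.0, 0.8, 0.6])         # factor loadings of three observed indicators

eta = np.zeros(T)                       # latent variable, measured only indirectly
for t in range(1, T):
    eta[t] = phi * eta[t - 1] + rng.normal(scale=1.0)

# Observed indicators = loading * latent + measurement error
Y = eta[:, None] * lam + rng.normal(scale=0.5, size=(T, 3))

# Naive check: regressing y1_t on y1_{t-1} attenuates phi because of measurement error,
# which is the errors-in-variables bias a DSEM with multiple indicators is meant to remove.
y1 = Y[:, 0]
phi_naive = np.polyfit(y1[:-1], y1[1:], 1)[0]
print("true phi:", phi, " naive estimate from a single noisy indicator:", round(phi_naive, 3))
```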
11

Yuan, Yiyong Kolenikov Stanislav. "Empirical likelihood approach estimation of structural equation models." Diss., Columbia, Mo. : University of Missouri--Columbia, 2007. http://hdl.handle.net/10355/5029.

Full text
Abstract:
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on September 15, 2009). Thesis advisor: Dr. Stanislav Kolenikov. Includes bibliographical references.
12

Jin, Shaobo. "Essays on Estimation Methods for Factor Models and Structural Equation Models." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-247292.

Full text
Abstract:
This thesis which consists of four papers is concerned with estimation methods in factor analysis and structural equation models. New estimation methods are proposed and investigated. In paper I an approximation of the penalized maximum likelihood (ML) is introduced to fit an exploratory factor analysis model. Approximated penalized ML continuously and efficiently shrinks the factor loadings towards zero. It naturally factorizes a covariance matrix or a correlation matrix. It is also applicable to an orthogonal or an oblique structure. Paper II, a simulation study, investigates the properties of approximated penalized ML with an orthogonal factor model. Different combinations of penalty terms and tuning parameter selection methods are examined. Differences in factorizing a covariance matrix and factorizing a correlation matrix are also explored. It is shown that the approximated penalized ML frequently improves the traditional estimation-rotation procedure. In Paper III we focus on pseudo ML for multi-group data. Data from different groups are pooled and normal theory is used to fit the model. It is shown that pseudo ML produces consistent estimators of factor loadings and that it is numerically easier than multi-group ML. In addition, normal theory is not applicable to estimate standard errors. A sandwich-type estimator of standard errors is derived. Paper IV examines properties of the recently proposed polychoric instrumental variable (PIV) estimators for ordinal data through a simulation study. PIV is compared with conventional estimation methods (unweighted least squares and diagonally weighted least squares). PIV produces accurate estimates of factor loadings and factor covariances in the correctly specified confirmatory factor analysis model and accurate estimates of loadings and coefficient matrices in the correctly specified structure equation model. If the model is misspecified, robustness of PIV depends on model complexity, underlying distribution, and instrumental variables.
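Paper I's approximated penalized ML replaces the exact lasso penalty with a smooth surrogate so that the loadings can be shrunk continuously. The sketch below minimizes a normal-theory factor-analysis discrepancy plus a smoothed L1 penalty with SciPy; it is a schematic of the general idea under assumed dimensions and simulated data, not the authors' algorithm, rotation handling, or tuning procedure.

```python
import numpy as np
from scipy.optimize import minimize

p, k = 6, 2                      # observed variables, factors (hypothetical sizes)
rng = np.random.default_rng(2)
true_lam = np.array([[0.8, 0.0], [0.7, 0.0], [0.6, 0.0],
                     [0.0, 0.8], [0.0, 0.7], [0.0, 0.6]])
F = rng.normal(size=(500, k))
Y = F @ true_lam.T + rng.normal(scale=0.6, size=(500, p))
S = np.cov(Y, rowvar=False)      # sample covariance to be factorized

def unpack(theta):
    lam = theta[:p * k].reshape(p, k)
    psi = np.exp(theta[p * k:])          # positive unique variances
    return lam, psi

def objective(theta, gamma=0.1, eps=1e-4):
    lam, psi = unpack(theta)
    sigma = lam @ lam.T + np.diag(psi)
    sign, logdet = np.linalg.slogdet(sigma)
    if sign <= 0:
        return 1e10
    # ML discrepancy  log|Sigma| + tr(S Sigma^{-1})  plus smooth |x| ~ sqrt(x^2 + eps)
    fit = logdet + np.trace(S @ np.linalg.inv(sigma))
    penalty = gamma * np.sum(np.sqrt(lam ** 2 + eps))
    return fit + penalty

theta0 = np.concatenate([0.1 * rng.normal(size=p * k), np.zeros(p)])
res = minimize(objective, theta0, method="L-BFGS-B")
lam_hat, _ = unpack(res.x)
print(np.round(lam_hat, 2))   # shrunken loading matrix (rotational indeterminacy not resolved here)
```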
13

Cheevatanarak, Suchittra. "A Comparison of Multivariate Normal and Elliptical Estimation Methods in Structural Equation Models." Thesis, University of North Texas, 1999. https://digital.library.unt.edu/ark:/67531/metadc278401/.

Full text
Abstract:
In the present study, parameter estimates, standard errors and chi-square statistics were compared using normal and elliptical estimation methods given three research conditions: population data contamination (10%, 20%, and 30%), sample size (100, 400, and 1000), and kurtosis (kappa = 1, 10, and 20).
14

Jang, Mi Jin. "Working correlation selection in generalized estimating equations." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/2719.

Full text
Abstract:
Longitudinal data analysis is common in biomedical research area. Generalized estimating equations (GEE) approach is widely used for longitudinal marginal models. The GEE method is known to provide consistent regression parameter estimates regardless of the choice of working correlation structure, provided the square root of n consistent nuisance parameters are used. However, it is important to use the appropriate working correlation structure in small samples, since it improves the statistical efficiency of β estimate. Several working correlation selection criteria have been proposed (Rotnitzky and Jewell, 1990; Pan, 2001; Hin and Wang, 2009; Shults et. al, 2009). However, these selection criteria have the same limitation in that they perform poorly when over-parameterized structures are considered as candidates. In this dissertation, new working correlation selection criteria are developed based on generalized eigenvalues. A set of generalized eigenvalues is used to measure the disparity between the bias-corrected sandwich variance estimator under the hypothesized working correlation matrix and the model-based variance estimator under a working independence assumption. A summary measure based on the set of the generalized eigenvalues provides an indication of the disparity between the true correlation structure and the misspecified working correlation structure. Motivated by the test statistics in MANOVA, three working correlation selection criteria are proposed: PT (Pillai's trace type criterion),WR (Wilks' ratio type criterion) and RMR (Roy's Maximum Root type criterion). The relationship between these generalized eigenvalues and the CIC measure is revealed. In addition, this dissertation proposes a method to penalize for the over-parameterized working correlation structures. The over-parameterized structure converges to the true correlation structure, using extra parameters. Thus, the true correlation structure and the over-parameterized structure tend to provide similar variance estimate of the estimated β and similar working correlation selection criterion values. However, the over-parameterized structure is more likely to be chosen as the best working correlation structure by "the smaller the better" rule for criterion values. This is because the over-parameterization leads to the negatively biased sandwich variance estimator, hence smaller selection criterion value. In this dissertation, the over-parameterized structure is penalized through cluster detection and an optimization function. In order to find the group ("cluster") of the working correlation structures that are similar to each other, a cluster detection method is developed, based on spacings of the order statistics of the selection criterion measures. Once a cluster is found, the optimization function considering the trade-off between bias and variability provides the choice of the "best" approximating working correlation structure. The performance of our proposed criterion measures relative to other relevant criteria (QIC, RJ and CIC) is examined in a series of simulation studies.
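The criteria proposed here summarize the generalized eigenvalues of a sandwich covariance estimator relative to a model-based covariance under working independence. A schematic computation, with placeholder covariance matrices rather than actual GEE output, might look like the following; the exact definitions of PT, WR and RMR in the dissertation may differ in centering and bias correction.

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder covariance estimates of the regression coefficients beta:
# V_sand : bias-corrected sandwich estimator under a hypothesized working correlation
# V_ind  : model-based estimator under working independence
rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
V_ind = A @ A.T + 4 * np.eye(4)
V_sand = V_ind + 0.3 * np.eye(4)          # hypothetical disparity between the two

# Generalized eigenvalues lambda_i solving  V_sand v = lambda V_ind v
lam = eigh(V_sand, V_ind, eigvals_only=True)

pt = np.sum(lam / (1.0 + lam))            # Pillai's-trace-type summary
wr = np.prod(1.0 / (1.0 + lam))           # Wilks'-ratio-type summary
rmr = np.max(lam)                         # Roy's-maximum-root-type summary
print(dict(PT=round(pt, 3), WR=round(wr, 4), RMR=round(rmr, 3)))
```

In practice the candidate working correlation structure whose summary measure is closest to the value expected under correct specification would be selected.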
15

Picart, Delphine. "Modélisation et estimation des paramètres liés au succès reproducteur d'un ravageur de la vigne (Lobesia botrana DEN. & SCHIFF.)." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2009. http://tel.archives-ouvertes.fr/tel-00405686.

Full text
Abstract:
The objective of this thesis is to develop a mathematical model for studying and understanding the population dynamics of an insect pest, the European grapevine moth (Eudémis de la vigne), within its ecosystem. The proposed model is a system of hyperbolic partial differential equations (PDEs) that describes the variation of the population over time as a function of developmental stage, the sex of individuals, and environmental conditions. Food resources, temperature, humidity and predation are the main environmental factors in the model explaining fluctuations in the number of individuals over time. Developmental differences within a cohort of grapevine moths are also modelled to refine the model's predictions. The model parameters are estimated from experimental data obtained by INRA entomologists. The fitted model then allows us to study several biological and ecological aspects of the insect, such as the impact of climate scenarios on female egg-laying or on the dynamics of vine attack by young larvae. The mathematical and numerical analyses of the model and of the parameter estimation problems are developed in this thesis.
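The model described is a stage-structured transport (hyperbolic) PDE of McKendrick-von Foerster type. The sketch below discretizes the simplest version, dn/dt + dn/da = -mu(a) n with a renewal boundary condition, using an explicit upwind scheme; it illustrates the class of model only, with made-up mortality and fecundity, not the thesis's multi-stage, environment-driven system.

```python
import numpy as np

# n(t, a): population density over physiological age a.
A, T = 1.0, 2.0                 # age span and time horizon (arbitrary units)
da = dt = 0.005                 # dt = da keeps the explicit upwind scheme stable (CFL = 1)
ages = np.arange(0.0, A, da)
mu = 0.5 + 2.0 * ages           # hypothetical age-dependent mortality
beta = 5.0 * (ages > 0.7)       # hypothetical fecundity of the oldest stage

n = np.exp(-((ages - 0.2) / 0.05) ** 2)      # initial cohort
for _ in range(int(T / dt)):
    births = np.sum(beta * n) * da           # renewal condition at a = 0
    n[1:] = n[1:] - dt / da * (n[1:] - n[:-1]) - dt * mu[1:] * n[1:]
    n[0] = births
print("final total population:", round(np.sum(n) * da, 4))
```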
16

Rockwood, Nicholas John. "Estimating Multilevel Structural Equation Models with Random Slopes for Latent Covariates." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1554478681581538.

Full text
17

Akanda, Md Abdus Salam. "A generalized estimating equations approach to capture-recapture closed population models: methods." Doctoral thesis, Universidade de Évora, 2014. http://hdl.handle.net/10174/18297.

Full text
Abstract:
Wildlife population parameters, such as capture or detection probabilities, and density or population size, can be estimated from capture-recapture data. These estimates are of particular interest to ecologists and biologists who rely on accurate inferences for management and conservation of the population of interest. However, researchers face many challenges in making accurate inferences about population parameters. For instance, capture-recapture data can be considered as binary longitudinal observations, since repeated measurements are collected on the same individuals across successive points in time, and these observations are often correlated over time. If these correlations are not taken into account when estimating capture probabilities, then parameter estimates will be biased, possibly producing misleading results. Also, an estimator of population size is generally biased in the presence of heterogeneity in capture probabilities. The use of covariates (or auxiliary variables), when available, has been proposed as an alternative way to cope with the problem of heterogeneous capture probabilities. In this dissertation, we are interested in tackling these two main problems: (i) when capture probabilities are dependent among capture occasions in closed population capture-recapture models, and (ii) when capture probabilities are heterogeneous among individuals. Hence, the capture-recapture literature can be improved if an approach is proposed that jointly accounts for these problems. In summary, this dissertation proposes: (i) a generalized estimating equations (GEE) approach to model possible effects in capture-recapture closed population studies due to correlation over time and individual heterogeneity; (ii) the corresponding estimating equations for each closed population capture-recapture model; (iii) a comprehensive analysis of various real capture-recapture data sets using classical, GEE and generalized linear mixed model (GLMM) approaches; (iv) an evaluation of the effect of accounting for correlation structures on capture-recapture model selection based on the Quasi-likelihood Information Criterion (QIC); (v) a comparison of the performance of population size estimators using GEE and GLMM approaches in the analysis of capture-recapture data. The performance of these approaches is evaluated by Monte Carlo (MC) simulation studies resembling real capture-recapture data. The proposed GEE approach provides a useful inference procedure for estimating population parameters, particularly when a large proportion of individuals are captured. For a low capture proportion, it is difficult to obtain reliable estimates with any of the approaches, but the GEE approach outperforms the other methods. Simulation results show that quasi-likelihood GEE provides lower standard errors than partial likelihood based on generalized linear models (GLM) and GLMM approaches. The estimated population sizes vary with the nature of the existing correlation among capture occasions.
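A stripped-down version of the GEE idea for closed-population capture-recapture data can be written with statsmodels: repeated binary capture indicators per animal, an exchangeable working correlation, and a population-size estimate derived from the fitted capture probability. The data are simulated and the model is intercept-only, so this is only a schematic of the approach, not the estimating equations derived in the dissertation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
N_true, T = 400, 5                       # true population size, capture occasions
p_true = 0.3
hist = rng.binomial(1, p_true, size=(N_true, T))
seen = hist[hist.sum(axis=1) > 0]        # only ever-captured animals are observed
n_obs = seen.shape[0]

# Long format: one row per animal x occasion
y = seen.ravel()
exog = np.ones((y.size, 1))              # intercept-only capture probability
groups = np.repeat(np.arange(n_obs), T)

model = sm.GEE(y, exog, groups=groups,
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
p_hat = 1.0 / (1.0 + np.exp(-res.params[0]))          # inverse logit of the intercept
# Naive plug-in population estimate; it ignores the zero-truncation of never-captured
# animals, which the dissertation's estimating equations handle more carefully.
N_hat = n_obs / (1.0 - (1.0 - p_hat) ** T)
print(f"p_hat = {p_hat:.3f}, N_hat = {N_hat:.1f} (true N = {N_true})")
```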
18

Rankin, John C. "Development of Cost Estimation Equations for Forging." Ohio : Ohio University, 2005. http://www.ohiolink.edu/etd/view.cgi?ohiou1129661983.

Full text
19

Zaeva, Maria. "Maximum likelihood estimators for circular structural model." Birmingham, Ala. : University of Alabama at Birmingham, 2009. https://www.mhsl.uab.edu/dt/2009m/zaeva.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Alabama at Birmingham, 2009.
Title from PDF title page (viewed Jan. 21, 2010). Additional advisors: Yulia Karpeshina, Ian Knowles, Rudi Weikard. Includes bibliographical references (p. 19).
20

SHARFMAN, MARK PHILLIP. "ENVIRONMENTAL PRESSURE, ORGANIZATIONAL BUFFERS AND ORGANIZATIONAL PERFORMANCE: A STRUCTURAL EQUATIONS MODEL (SLACK)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188130.

Full text
Abstract:
This dissertation addresses questions concerning slack's nature and its relationships with the environment and performance. The research investigates which view of slack (the operations or behavioral approach) best predicts performance. It examines the relationship of environment and slack using both interaction and mediation models. The PIMS database was used for 610 assembly manufacturing firms. The results support both the behavioral and the operations perspectives. This combined view suggests that slack capacity is optimized to improve sales while being minimized to improve profits. Excess inventory is minimized to improve sales but optimized to improve average ROS. In all cases, excess cash is minimized. In all equations, the slack variables entered the equations as costs. These results also support the argument that slack interacts with the environment rather than being in a functional relationship with it. Interaction terms of the slack types and the environment were significant in predicting sales. A mediation model was also tested but had a poorer fit with the data. Slack was found to be a multi-dimensional concept. The slack variables did not all intercorrelate positively. The negative relationships suggest that management makes decisions as when to use each slack resource. The slack variables (when lagged) had significant effects on each other, but not on performance. This indicates that the time horizon for slack may be shorter than was investigated in this research. The research demonstrated that slack inventory and non-slack supply buffers were negatively related. The conditions under which the firm trades slack for other buffering mechanisms were not clear. Predicted positive relationships between size and slack were found except that excess capacity and size were negatively related. This suggests that larger firms were holding slack in ways that are more discretionary and less obvious to their control systems. What is not clear from this research are the conditions under which management will choose a specific type of slack. In one case (excess working capital), technology predicts the level of this variable. Additional research is suggested to determine how, when and where these decisions are made.
21

MOMESSO, ROBERTA G. R. A. P. "Desenvolvimento e validação de um referencial metodológico para avaliação da cultura de segurança de organizações nucleares." reponame:Repositório Institucional do IPEN, 2017. http://repositorio.ipen.br:8080/xmlui/handle/123456789/28035.

Full text
Abstract:
Safety culture in the nuclear field is defined as the set of characteristics and attitudes of organizations and individuals which ensures that, as an overriding priority, issues related to nuclear protection and safety receive the attention warranted by their significance. To date, there are no validated instruments for assessing safety culture in the nuclear field. As a consequence, the outcomes of strategies designed to strengthen safety culture, and the performance of improvement actions, are difficult to evaluate. The main objective of this work was to develop and validate an instrument for assessing the safety culture of nuclear organizations, using the Instituto de Pesquisas Energéticas e Nucleares as the research and data-collection unit. The indicators and latent variables of the instrument were defined with reference to safety culture assessment models from the health and nuclear fields. The initially proposed data-collection instrument was evaluated by nuclear-field experts and subsequently pre-tested with individuals from the surveyed population. The model was validated through structural equation modeling using the partial least squares method (PLS-SEM) in the SmartPLS software. The final version of the instrument comprised forty indicators distributed across nine latent variables. The measurement model showed convergent validity, discriminant validity and reliability, and the structural model was statistically significant, demonstrating that the instrument adequately fulfilled all validation steps.
Thesis (Doctorate in Nuclear Technology). Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP (IPEN/T).
22

Bhikkaji, Bharath. "Model Reduction and Parameter Estimation for Diffusion Systems." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4252.

Full text
23

Plassmann, Vandana Shah. "Ethnicity and Clothing Expenditures of U.S. Households: A Structural Equations Model with Latent Quality Variables." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/29221.

Full text
Abstract:
The main objective of this study was to determine the relationship between household characteristics and the expenditure shares allocated among various categories of women's clothing for U.S. households belonging to different ethnic groups. The study also estimated unobserved latent quality variables based on household characteristics, and examined the effects of the latent quality variables on the expenditure shares for the various apparel categories. A Multiple Indicator-Multiple Cause Model, which is a special case of the general Structural Equations Model, was used to estimate separate Engel equations for 15 expenditure shares for women's clothing categories, for four different ethnic groups. The results of the study showed that household characteristics had a significant impact on the latent quality variables associated with different categories of women's clothing, and the latent quality variables themselves impacted the clothing expenditure shares. Also, for different ethnic groups, household characteristics had differing effects on women's clothing expenditure shares. Of all the characteristics examined, annual total household expenditures and numbers of children and adults in the household had significant effects on the largest numbers of latent quality variables associated with the clothing categories for the four ethnic groups. The socio-economic variables also significantly affected several clothing expenditure shares for the four ethnic groups. These results imply that socio-economic variables impact consumers' quality choices, and presumably prices paid, for women's clothing. The results support the conclusions of Paulin (1998), and Wagner and Soberon-Ferrer (1990), in that different ethnic groups have distinct expenditure patterns possibly due to differences in socio-economic characteristics; such characteristics may signify resources and constraints faced by a household. The distinct expenditure patterns and tastes of the four ethnic groups are reflected in the significantly different effects of annual total expenditures on the expenditure shares for each category of women's clothing, as well as in the significantly different effects of the latent quality variables on several expenditure shares, for the four ethnic groups.
Ph. D.
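A Working-Leser style Engel share equation underlies this kind of clothing-expenditure analysis: the budget share of a clothing category is regressed on log total expenditure and household characteristics. The toy sketch below shows only that reduced form with made-up data, not the latent-quality MIMIC structure estimated in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
log_totexp = rng.normal(8.0, 0.5, size=n)     # log annual total household expenditures
n_children = rng.integers(0, 4, size=n)
n_adults = rng.integers(1, 4, size=n)

# Hypothetical budget share of one women's clothing category (Working-Leser form)
share = 0.08 - 0.005 * log_totexp + 0.002 * n_children + rng.normal(scale=0.01, size=n)

X = np.column_stack([np.ones(n), log_totexp, n_children, n_adults])
beta, *_ = np.linalg.lstsq(X, share, rcond=None)
print(dict(zip(["const", "log_totexp", "children", "adults"], np.round(beta, 4))))
# A negative log-expenditure coefficient marks the category as a necessity; ethnic-group
# differences would be captured by estimating this equation separately for each group.
```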
24

Zweber, Jeffrey Vincent. "A method for structural dynamic model updating via the estimation of damping parameters." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/12447.

Full text
25

Taddei, Tommaso. "Model order reduction methods for data assimilation : state estimation and structural health monitoring." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108942.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 243-258).
The objective of this thesis is to develop and analyze model order reduction approaches for the efficient integration of parametrized mathematical models and experimental measurements. Model Order Reduction (MOR) techniques for parameterized Partial Differential Equations (PDEs) offer new opportunities for the integration of models and experimental data. First, MOR techniques speed up computations allowing better explorations of the parameter space. Second, MOR provides actionable tools to compress our prior knowledge about the system coming from the parameterized best-knowledge model into low-dimensional and more manageable forms. In this thesis, we demonstrate how to take advantage of MOR to design computational methods for two classes of problems in data assimilation. In the first part of the thesis, we discuss and extend the Parametrized-Background Data-Weak (PBDW) approach for state estimation. PBDW combines a parameterized best knowledge mathematical model and experimental data to rapidly estimate the system state over the domain of interest using a small number of local measurements. The approach relies on projection-by-data, and exploits model reduction techniques to encode the knowledge of the parametrized model into a linear space appropriate for real-time evaluation. In this work, we extend the PBDW formulation in three ways. First, we develop an experimental a posteriori estimator for the error in the state. Second, we develop computational procedures to construct local approximation spaces in subregions of the computational domain in which the best-knowledge model is defined. Third, we present an adaptive strategy to handle experimental noise in the observations. We apply our approach to a companioni heat transfer experiment to prove the effectiveness of our technique. In the second part of the thesis, we present a model-order reduction approach to simulation based classification, with particular application to Structural Health Monitoring (SHM). The approach exploits (i) synthetic results obtained by repeated solution of a parametrized PDE for different values of the parameters, (ii) machine-learning algorithms to generate a classifier that monitors the state of damage of the system, and (iii) a reduced basis method to reduce the computational burden associated with the model evaluations. The approach is based on an offline/online computational decomposition. In the offline stage, the fields associated with many different system configurations, corresponding to different states of damage, are computed and then employed to teach a classifier. Model reduction techniques, ideal for this many-query context, are employed to reduce the computational burden associated with the parameter exploration. In the online stage, the classifier is used to associate measured data to the relevant diagnostic class. In developing our approach for SHM, we focus on two specific aspects. First, we develop a mathematical formulation which properly integrates the parameterized PDE model within the classification problem. Second, we present a sensitivity analysis to take into account the error in the model. We illustrate our method and we demonstrate its effectiveness through the vehicle of a particular companion experiment, a harmonically excited microtruss.
by Tommaso Taddei.
Ph. D.
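At its core, the PBDW state estimate projects a handful of local measurements onto a low-dimensional background space built by model reduction. The toy sketch below shows that least-squares projection for a 1-D field with a sinusoidal reduced basis and pointwise sensors; the basis, sensor locations and noise level are all hypothetical, and PBDW's additional update space for model bias is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 200)
basis = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(4)])  # reduced background space

true_coef = np.array([1.0, -0.4, 0.2, 0.05])
true_field = basis @ true_coef

sensors = np.array([10, 60, 110, 150, 180])           # indices of pointwise observations
y = true_field[sensors] + rng.normal(scale=0.02, size=sensors.size)

# Least-squares fit of the reduced-basis coefficients to the few local measurements
coef_hat, *_ = np.linalg.lstsq(basis[sensors, :], y, rcond=None)
state_estimate = basis @ coef_hat
print("max reconstruction error:", round(np.max(np.abs(state_estimate - true_field)), 4))
```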
26

MENICHINI, AMILCAR ARMANDO. "Financial Frictions and Capital Structure Choice: A Structural Dynamic Estimation." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145397.

Full text
Abstract:
This thesis studies different aspects of firm decisions by using a dynamic model. I estimate a dynamic model of the firm based on the trade-off theory of capital structure that endogenizes investment, leverage, and payout decisions. For the estimation of the model I use Efficient Method of Moments (EMM), which allows me to recover the structural parameters that best replicate the characteristics of the data. I start analyzing the question of whether target leverage varies over time. While this is a central issue in finance, there is no consensus in the literature on this point. I propose an explanation that reconciles some of the seemingly contradictory empirical evidence. The dynamic model generates a target leverage that changes over time and consistently reproduces the results of Lemmon, Roberts, and Zender (2008). These findings suggest that the time-varying target leverage assumption of the big bulk of the previous literature is not incompatible with the evidence presented by Lemmon, Roberts, and Zender (2008). Then I study how corporate income tax payments vary with the corporate income tax rate. The dynamic model produces a bell-shaped relationship between tax revenue and the tax rate that is consistent with the notion of the Laffer curve. The dynamic model generates the maximum tax revenue for a tax rate between 36% and 41%. Finally, I investigate the impact of financial constraints on investment decisions by firms. Model results show that investment-cash flow sensitivity is higher for less financially constrained firms. This result is consistent with Kaplan and Zingales (1997). The dynamic model also rationalizes why large and mature firms have a positive and significant investment-cash flow sensitivity.
27

Kröhne, Joachim Ulf [Verfasser], Rolf [Akademischer Betreuer] Steyer, and Matthias [Akademischer Betreuer] Reitzle. "Estimation of average total effects in quasi-experimental designs : nonlinear constraints in structural equation models / Joachim Ulf Kröhne. Gutachter: Rolf Steyer ; Matthias Reitzle." Jena : Thüringer Universitäts- und Landesbibliothek Jena, 2011. http://d-nb.info/1016368070/34.

Full text
28

Oberhofer, Harald, and Michael Pfaffermayr. "Estimating the Trade and Welfare Effects of Brexit: A Panel Data Structural Gravity Model." WU Vienna University of Economics and Business, 2018. http://epub.wu.ac.at/6020/1/wp259.pdf.

Full text
Abstract:
This paper proposes a new panel data structural gravity approach for estimating the trade and welfare effects of Brexit. The suggested Constrained Poisson Pseudo Maximum Likelihood Estimator exhibits some useful properties for trade policy analysis and allows one to obtain estimates and confidence intervals which are consistent with structural trade theory. Assuming different counterfactual post-Brexit scenarios, our main findings suggest that the UK's (EU's) exports of goods to the EU (UK) are likely to decline within a range between 7.2% and 45.7% (5.9% and 38.2%) six years after Brexit has taken place. For the UK, the negative trade effects are only partially offset by an increase in domestic goods trade and trade with third countries, inducing a decline in the UK's real income between 1.4% and 5.7% under the hard Brexit scenario. The estimated welfare effects for the EU are negligible in magnitude and statistically not different from zero.
Series: Department of Economics Working Paper Series
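For readers new to structural gravity, the workhorse estimator the paper builds on is Poisson pseudo-maximum likelihood (PPML) with exporter and importer fixed effects. A generic, unconstrained PPML specification can be run with statsmodels as below; the constrained estimator proposed in the paper and its general-equilibrium Brexit counterfactuals go beyond this sketch, and the data frame here is assumed to exist with the listed columns.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed panel of bilateral goods trade with columns:
# exporter, importer, year, trade, log_dist, eu_pair (1 if both partners are EU members)
df = pd.read_csv("gravity_panel.csv")          # hypothetical file name

ppml = smf.glm(
    "trade ~ C(exporter) + C(importer) + C(year) + log_dist + eu_pair",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")                          # heteroskedasticity-robust standard errors

print(ppml.params[["log_dist", "eu_pair"]])
# A Brexit counterfactual would set eu_pair to 0 for UK-EU pairs and compare predicted
# trade flows, which is conceptually what the paper's scenarios do with its constrained
# estimator and general-equilibrium adjustments.
```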
29

Rippe, Christian M. "Burnthrough Modeling of Marine Grade Aluminum Alloy Structural Plates Exposed to Fire." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/64154.

Full text
Abstract:
Current fire induced burnthrough models of aluminum typically rely solely on temperature thresholds and cannot accurately capture either the occurrence or the time to burnthrough. This research experimentally explores the fire induced burnthrough phenomenon of AA6061-T651 plates under multiple sized exposures and introduces a new burnthrough model based on the near melting creep rupture properties of the material. Fire experiments to induce burnthrough on aluminum plates were conducted using localized exposure from a propane jet burner and broader exposure from a propane sand burner. A material melting mechanism was observed for all localized exposures while a material rupture mechanism was observed for horizontally oriented plates exposed to the broader heat flux. Numerical burnthrough models were developed for each of the observed burnthrough mechanisms. Material melting was captured using a temperature threshold model of 633 deg C. Material rupture was captured using a Larson-Miller based creep rupture model. To implement the material rupture model, a characterization of the creep rupture properties was conducted at temperatures between 500 and 590 deg C. The Larson-Miller curve was subsequently developed to capture rupture behavior. Additionally, the secondary and tertiary creep behavior of the material was modeled using a modified Kachanov-Rabotnov creep model. Thermal finite element model accuracy was increased by adapting a methodology for using infrared thermography to measure spatially and temporally varying full-field heat flux maps. Once validated and implemented, thermal models of the aluminum burnthrough experiments were accurate to 20 deg C in the transient and 10 deg C in the steady state regions. Using thermo-mechanical finite element analyses, the burnthrough models were benchmarked against experimental data. Utilizing the melting and rupture mechanism models, burnthrough occurrence was accurately modeled for over 90% of experiments and modeled burnthrough times were within 20% for the melting mechanism and 50% for the rupture mechanism. Simplified burnthrough equations were also developed to facilitate the use of the burnthrough models in a design setting. Equations were benchmarked against models of flat and stiffened plates and the burnthrough experiments. Melting mechanism burnthrough time results were within 25% of benchmark values suggesting accurate capture of the mechanism. Rupture mechanism burnthrough results were within 60% of benchmark values.
Ph. D.
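The rupture model in this work relies on the Larson-Miller parameter, which collapses temperature and time-to-rupture into a single stress-dependent quantity, LMP = T (C + log10 t_r). The few lines below show how a rupture time would be recovered from such a correlation; the constant and the stress function are placeholders, not the calibrated values from the dissertation.

```python
import numpy as np

C = 20.0   # typical order of magnitude for the Larson-Miller constant (placeholder value)

def lmp_from_stress(stress_mpa):
    """Hypothetical calibrated correlation LMP(sigma); a real fit would come from creep tests."""
    return 16000.0 - 1500.0 * np.log10(stress_mpa)

def rupture_time_hours(temp_kelvin, stress_mpa):
    # LMP = T * (C + log10 t_r)  =>  t_r = 10 ** (LMP / T - C)
    return 10.0 ** (lmp_from_stress(stress_mpa) / temp_kelvin - C)

print(rupture_time_hours(temp_kelvin=850.0, stress_mpa=10.0))
```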
30

Khodabandeloo, Babak, Dyan Melvin, and Hongki Jo. "Model-Based Heterogeneous Data Fusion for Reliable Force Estimation in Dynamic Structures under Uncertainties." MDPI AG, 2017. http://hdl.handle.net/10150/626477.

Full text
Abstract:
Direct measurements of external forces acting on a structure are infeasible in many cases. The Augmented Kalman Filter (AKF) has several attractive features that can be utilized to solve the inverse problem of identifying applied forces, as it requires the dynamic model and the measured responses of structure at only a few locations. But, the AKF intrinsically suffers from numerical instabilities when accelerations, which are the most common response measurements in structural dynamics, are the only measured responses. Although displacement measurements can be used to overcome the instability issue, the absolute displacement measurements are challenging and expensive for full-scale dynamic structures. In this paper, a reliable model-based data fusion approach to reconstruct dynamic forces applied to structures using heterogeneous structural measurements (i.e., strains and accelerations) in combination with AKF is investigated. The way of incorporating multi-sensor measurements in the AKF is formulated. Then the formulation is implemented and validated through numerical examples considering possible uncertainties in numerical modeling and sensor measurement. A planar truss example was chosen to clearly explain the formulation, while the method and formulation are applicable to other structures as well.
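The Augmented Kalman Filter idea is to append the unknown force to the state vector (here with a random-walk model) and estimate state and force jointly from response measurements. The sketch below does this for a single-degree-of-freedom oscillator with one acceleration and one displacement-like (e.g., strain-derived) measurement, which is the kind of heterogeneous fusion the paper motivates; the matrices, noise levels and discretization are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)
m, c, k = 1.0, 0.4, 50.0              # SDOF mass, damping, stiffness
dt, n_steps = 0.005, 2000

# Augmented state z = [displacement, velocity, force]; force modeled as a random walk
Ac = np.array([[0.0, 1.0, 0.0],
               [-k / m, -c / m, 1.0 / m],
               [0.0, 0.0, 0.0]])
A = np.eye(3) + Ac * dt               # simple forward-Euler discretization
Q = np.diag([1e-10, 1e-10, 5e-2])     # process noise; last entry drives the force random walk

# Measurements: acceleration and a displacement-proportional (strain-like) response
H = np.array([[-k / m, -c / m, 1.0 / m],
              [1.0, 0.0, 0.0]])
R = np.diag([1e-2, 1e-6])

z_true = np.zeros(3)
z_hat, P = np.zeros(3), np.eye(3)
force_true, force_est = [], []
for i in range(n_steps):
    f = np.sin(2.0 * np.pi * 1.5 * i * dt)               # "true" applied force
    z_true = A @ np.array([z_true[0], z_true[1], f])
    y = H @ z_true + rng.multivariate_normal(np.zeros(2), R)

    # AKF predict/update
    z_hat = A @ z_hat
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z_hat = z_hat + K @ (y - H @ z_hat)
    P = (np.eye(3) - K @ H) @ P

    force_true.append(f)
    force_est.append(z_hat[2])

err = np.sqrt(np.mean((np.array(force_true) - np.array(force_est)) ** 2))
print("RMS force reconstruction error:", round(err, 3))
```

With acceleration-only measurements the force random walk makes the augmented system unobservable in a low-frequency sense, which is the drift problem the paper addresses by fusing strain-type measurements as in the second row of H above.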
31

Romahn, André. "Beers and Bonds : Essays in Structural Empirical Economics." Doctoral thesis, Handelshögskolan i Stockholm, Institutionen för Nationalekonomi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-2238.

Full text
Abstract:
This dissertation consists of four papers in structural empirics that can be broadly categorized into two areas. The first three papers revolve around the structural estimation of demand for differentiated products and several applications thereof (Berry (1994), Berry, Levinsohn and Pakes (1995), Nevo (2000)), while the fourth paper examines the U.S. Treasury yield curve by estimating yields as linear functions of observable state variables (Ang and Piazzesi (2003), Ang et al. (2006)). The central focus of each paper are the underlying economics. Nevertheless, all papers share a common empirical approach. Be it prices of beers in Sweden or yields of U.S. Treasury bonds, it is assumed throughout that the economic variables of interest can be modeled by imposing specific parametric functional forms. The underlying structural parameters are then consistently estimated based on the variation in available data. Consistent estimation naturally hinges on the assumption that the assumed functional forms are correct. Another way of viewing this is that the imposed functions are flexible enough not to impose restrictive patterns on the data that ultimately lead to biased estimates of the structural parameters and thereby produce misleading conclusions regarding the underlying economics. In principle, the danger of misspecification could therefore be avoided by adopting sufficiently flexible functional forms. This, however, typically requires the estimation of a growing number of structural parameters that determine the underlying economic relationships. As an example, we can think of the estimation of differentiated product demand. The key object of interest here is the substitution patterns between the products. That is, we are interested in what happens to the demand of good X and all its rival products, as the price of good X increases. With N products in total, we could collect the product-specific changes in demand in a vector with N entries. It is also possible, however, that the price of any other good Y changes and thereby alters the demands for the remaining varieties. Thus, in total, we are interested in N2 price effects on product-specific demand. With few products, these effects could be estimated directly and the risk of functional misspecification could be excluded (Goolsbee and Petrin (2004)). With 100 products, however, we are required to estimate 10,000 parameters, which rarely, if ever, is feasible. This is the curse of dimensionality. Each estimation method employed in the four papers breaks this curse by imposing functions that depend on relatively few parameters and thereby tries to strike a balance between the necessity to rely on parsimonious structural frameworks and the risk of misspecification. This is a fundamental feature of empirical research in economics that makes it both interesting and challenging.

Diss. Stockholm: Stockholm School of Economics, 2012. Introduction together with 4 papers.

APA, Harvard, Vancouver, ISO, and other styles
32

Johansson, Magnus, and Johan Kingstedt. "Methods for Residual Generation Using Mixed Causality in Model Based Diagnosis." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12062.

Full text
Abstract:

Several air pollutants are produced during combustion in a diesel engine, for example nitrogen oxides (NOx), which can be harmful to humans. This has led to stricter emission legislation for heavy duty trucks. The law requires both lower emissions and an On-Board Diagnostics (OBD) system for all manufactured heavy duty trucks. The OBD system supervises the engine in order to keep the emissions below the legislated limits and shall detect malfunctions which may lead to increased emissions. To design the OBD system, an automatic model based diagnosis approach has been developed at Scania CV AB in which residual generators are derived from an engine model.

The main objective of this thesis is to improve the existing methods at Scania CV AB for extracting residual generators from a model, so that more residual generators can be found. The focus lies on methods for finding possible residual generators given an overdetermined subsystem, which includes methods for estimating derivatives of noisy signals.

A method that uses both integral and derivative causality, called mixed causality, has been developed. With this method it has been shown that more residual generators can be found when designing a model based diagnosis system, which improves fault isolation. To use mixed causality, derivatives are estimated with smoothing spline approximation.
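A hedged sketch of the derivative-estimation step mentioned above (Python/SciPy), using a smoothing spline to differentiate a noisy signal; the signal, noise level and smoothing factor are made-up examples, not Scania's data.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    t = np.linspace(0.0, 10.0, 500)
    y = np.sin(t) + 0.05 * np.random.randn(t.size)   # noisy measurement (synthetic)

    spline = UnivariateSpline(t, y, s=t.size * 0.05**2)  # s sets the smoothing trade-off
    dy_dt = spline.derivative()(t)                       # estimated derivative, usable in a residual generator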

APA, Harvard, Vancouver, ISO, and other styles
33

Li, Daoji. "Empirical likelihood and mean-variance models for longitudinal data." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/empirical-likelihood-and-meanvariance-models-for-longitudinal-data(98e3c7ef-fc88-4384-8a06-2c76107a9134).html.

Full text
Abstract:
Improving estimation efficiency has always been one of the important aspects of statistical modelling. Our goal is to develop new statistical methodologies yielding more efficient estimators in the analysis of longitudinal data. In this thesis, we consider two different approaches, empirical likelihood and joint modelling of the mean and variance, to improve estimation efficiency. In part I of this thesis, empirical likelihood-based inference for longitudinal data within the framework of the generalized linear model is investigated. The proposed procedure takes the within-subject correlation into account without involving direct estimation of nuisance parameters in the correlation matrix, and retains optimality even if the working correlation structure is misspecified. The proposed approach yields more efficient estimators than conventional generalized estimating equations and achieves the same asymptotic variance as methods based on quadratic inference functions. The second part of this thesis focuses on joint mean-variance models. We propose a data-driven approach to modelling the mean and variance simultaneously, yielding more efficient estimates of the mean regression parameters than the conventional generalized estimating equations approach even if the within-subject correlation structure is misspecified in our joint mean-variance models. Joint mean-variance models in parametric as well as semiparametric form have been investigated. Extensive simulation studies are conducted to assess the performance of our proposed approaches. Three longitudinal data sets, Ohio Children's wheeze status data (Ware et al., 1984), cattle data (Kenward, 1987) and CD4+ data (Kaslow et al., 1987), are used to demonstrate our models and approaches.
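For readers unfamiliar with the conventional baseline the thesis improves upon, a minimal generalized estimating equations fit might look as follows (Python/statsmodels); the synthetic data, variable names and the exchangeable working correlation are illustrative assumptions only.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Synthetic longitudinal data: 50 subjects, 4 visits each (illustrative only).
    rng = np.random.default_rng(0)
    n_subj, n_visits = 50, 4
    subject = np.repeat(np.arange(n_subj), n_visits)
    x1 = rng.normal(size=n_subj * n_visits)
    b = np.repeat(rng.normal(scale=0.5, size=n_subj), n_visits)  # subject effect -> within-subject correlation
    y = 1.0 + 2.0 * x1 + b + rng.normal(size=n_subj * n_visits)
    df = pd.DataFrame({"id": subject, "x1": x1, "y": y})

    # GEE with an exchangeable working correlation, the conventional benchmark.
    model = smf.gee("y ~ x1", groups="id", data=df,
                    cov_struct=sm.cov_struct.Exchangeable(),
                    family=sm.families.Gaussian())
    print(model.fit().summary())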
APA, Harvard, Vancouver, ISO, and other styles
34

Muhammad, Ruqiah. "A new dynamic model for non-viral multi-treatment gene delivery systems for bone regeneration: parameter extraction, estimation, and sensitivity." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6996.

Full text
Abstract:
In this thesis we develop new mathematical models, using dynamical systems, to represent localized gene delivery of bone morphogenetic protein 2 into bone marrow-derived mesenchymal stem cells and rat calvarial defects. We examine two approaches, using pDNA or cmRNA treatments, respectively, towards the production of calcium deposition and bone regeneration in in vitro and in vivo experiments. We first review the relevant scientific literature and survey existing mathematical representations of similar treatment approaches. We then motivate and develop our new models and determine model parameters from the literature, from heuristic approaches, and by estimation using sparse data. We next conduct a qualitative analysis using dynamical systems theory. Due to the nature of the parameter estimation, it was important to perform local and global sensitivity analyses of model outputs with respect to changes in model inputs. Finally, we compare results from different treatment protocols. Our model suggests that cmRNA treatments may perform better than pDNA treatments towards bone fracture healing. This work is intended to be a foundation for predictive models of non-viral local gene delivery systems.
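As a schematic of the kind of dynamical-system workflow described above, here is a hedged sketch (Python/SciPy) that integrates a toy two-state delivery model and computes a local sensitivity of an output to one parameter by finite differences; the equations and parameter values are placeholders, not the thesis's calibrated model.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, state, k_uptake, k_prod, k_deg):
        # Toy model: transfected-cell fraction c drives protein production p.
        c, p = state
        dc = k_uptake * (1.0 - c) - 0.1 * c
        dp = k_prod * c - k_deg * p
        return [dc, dp]

    def final_protein(k_prod):
        sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0], args=(0.2, k_prod, 0.05))
        return sol.y[1, -1]

    # Local sensitivity of the output to k_prod via a centered finite difference.
    k0, h = 1.0, 1e-4
    sensitivity = (final_protein(k0 + h) - final_protein(k0 - h)) / (2 * h)
    print(sensitivity)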
APA, Harvard, Vancouver, ISO, and other styles
35

Karimli, Nigar. "Parameter Estimation and Optimal Design Techniques to Analyze a Mathematical Model in Wound Healing." TopSCHOLAR®, 2019. https://digitalcommons.wku.edu/theses/3114.

Full text
Abstract:
For this project, we use a modified version of a previously developed mathematical model, which describes the relationships among matrix metalloproteinases (MMPs), their tissue inhibitors (TIMPs), and extracellular matrix (ECM). Our ultimate goal is to quantify and understand differences in parameter estimates between patients in order to predict future responses and individualize treatment for each patient. By analyzing parameter confidence intervals and confidence and prediction intervals for the state variables, we develop a parameter space reduction algorithm that results in better future response predictions for each individual patient. Another subset selection method, Structured Covariance Analysis, which takes the identifiability of parameters into account, is also included in this work. Furthermore, to estimate parameters more efficiently and accurately, the standard error (SE-)optimal design method is employed, which calculates optimal observation times at which clinical data should be collected. Finally, by combining different parameter subset selection methods with the optimal design problem, different cases for finding both optimal time points and optimal intervals have been investigated.
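A minimal sketch of the parameter-estimation step (Python/SciPy), fitting one parameter of a toy ODE to observations at a handful of time points by least squares; the model, data and observation times are fabricated for illustration and are not the wound-healing model of the thesis.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    t_obs = np.array([0.0, 2.0, 5.0, 10.0, 20.0])      # candidate observation times
    y_obs = np.array([1.0, 0.72, 0.45, 0.22, 0.06])    # synthetic measurements

    def model(theta, t):
        decay, = theta
        sol = solve_ivp(lambda s, y: -decay * y, (t[0], t[-1]), [1.0], t_eval=t)
        return sol.y[0]

    fit = least_squares(lambda th: model(th, t_obs) - y_obs, x0=[0.1])
    print(fit.x)   # estimated parameter; an SE-optimal design would choose t_obs to sharpen this estimate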
APA, Harvard, Vancouver, ISO, and other styles
36

Tami, Myriam. "Approche EM pour modèles multi-blocs à facteurs à une équation structurelle." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT303/document.

Full text
Abstract:
Structural equation models with latent variables allow relationships between observed and unobserved variables to be modeled. The two leading estimation paradigms are component-based partial least squares (PLS) and covariance-structure analysis (LISREL). In this work, after describing these two main estimation methods, we propose an estimation approach based on maximizing, via the EM algorithm, the full likelihood of a model with latent factors and a single structural equation. We study its performance on simulated data and show, through an application to real environmental data, how to construct a model in practice and assess its quality. Finally, we apply the approach in the context of an oncology clinical trial to analyze longitudinal health-related quality-of-life data. We show that, by efficiently reducing the dimension of the data, the EM approach simplifies the longitudinal analysis of quality of life by avoiding multiple testing, and thus helps to evaluate the clinical benefit of a treatment.
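To make the estimation idea concrete, here is a hedged sketch (Python/NumPy) of one EM iteration for a plain one-factor Gaussian model; the full method of the thesis adds a structural equation linking latent factors, which this toy version omits.

    import numpy as np

    def em_step(Y, Lam, Psi):
        """One EM update for Y_i = Lam f_i + eps_i, f_i ~ N(0, I), eps_i ~ N(0, diag(Psi))."""
        n, p = Y.shape
        k = Lam.shape[1]
        Sigma = Lam @ Lam.T + np.diag(Psi)
        G = Lam.T @ np.linalg.inv(Sigma)            # k x p posterior weights
        EZ = Y @ G.T                                # E[f_i | y_i], n x k
        sum_Eff = n * (np.eye(k) - G @ Lam) + EZ.T @ EZ
        sum_yEf = Y.T @ EZ                          # p x k
        Lam_new = sum_yEf @ np.linalg.inv(sum_Eff)
        S = (Y.T @ Y) / n
        Psi_new = np.diag(S - Lam_new @ sum_yEf.T / n)
        return Lam_new, Psi_new

    # Iterate from a rough start until the loadings and residual variances stabilize.
    rng = np.random.default_rng(4)
    Y = rng.normal(size=(200, 6))                   # synthetic data, 6 indicators
    Lam, Psi = rng.normal(size=(6, 1)), np.ones(6)
    for _ in range(100):
        Lam, Psi = em_step(Y, Lam, Psi)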
APA, Harvard, Vancouver, ISO, and other styles
37

Elango, Vetri Venthan. "Methodology to model activity participation using longitudinal travel variability and spatial extent of activity." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54290.

Full text
Abstract:
Macroscopic changes in the urban environment and in the built transportation infrastructure, as well as changes in household demographics and socio-economics, can lead to spatio-temporal variations in household travel patterns and therefore in regional travel demand. Dynamics in travel behavior may also simply arise from the randomness associated with the values, perceptions, attitudes, needs, preferences and decision-making processes of individual travelers. Most urban travel behavior models and analyses seek to explain variations in travel behavior in terms of characteristics of the individuals and their environment. The spatial extent and temporal variation of an individual's travel pattern may represent a measure of the individual's spatial appetite for activity and of the variability-seeking nature of his or her travel behavior. The objective of this dissertation is to develop a methodology to predict activity participation using revealed spatial extents and temporal variability as variables that represent the spatial appetite and variability-seeking nature associated with each household. Activity participation is defined as the set of activities in which an individual or household takes part to satisfy the sustenance, maintenance and discretionary needs of the household. To accomplish the goals of the dissertation, longitudinal travel data collected from the Commute Atlanta Study are used. The raw Global Positioning System (GPS) data are processed to summarize trip data by household travel day and by individual travel day. A methodology was developed to automatically identify the activity at the end of each trip. Methods were then developed to estimate travel behavior variability that can represent the variability-seeking nature of the individual. Existing methods to estimate activity space were reviewed and a new Modified Kernel Density area method was developed to address issues with current methods. Finally, activity participation models using structural equation modeling methods were developed, and the effects of the variability-seeking nature and spatial extent of activities were incorporated into the models. The variability-seeking nature was represented in the activity participation model as a latent variable with the coefficients of variation of trips and distance as indicator variables. The dissertation research found that the inclusion of activity space variables can improve the activity participation modeling process to better explain travel behavior.
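A hedged sketch of a kernel-density activity-space area (Python/SciPy): trip-end coordinates are smoothed with a Gaussian kernel and the area of the smallest region containing 95% of the density mass is taken as the activity-space measure. The coordinates, grid and bandwidth below are synthetic, and the thesis's Modified Kernel Density method differs in its details.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    xy = rng.normal(loc=[0.0, 0.0], scale=[2.0, 1.0], size=(300, 2))  # synthetic trip ends (km)

    kde = gaussian_kde(xy.T)
    xg, yg = np.meshgrid(np.linspace(-8, 8, 200), np.linspace(-8, 8, 200))
    dens = kde(np.vstack([xg.ravel(), yg.ravel()]))
    cell_area = (xg[0, 1] - xg[0, 0]) * (yg[1, 0] - yg[0, 0])

    # Area of the highest-density region holding 95% of the probability mass.
    order = np.argsort(dens)[::-1]
    cum_mass = np.cumsum(dens[order]) * cell_area
    n_cells = np.searchsorted(cum_mass, 0.95) + 1
    print(n_cells * cell_area)   # activity-space area in km^2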
APA, Harvard, Vancouver, ISO, and other styles
38

Segovia, Castillo Pablo. "Model-based control and diagnosis of inland navigation networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/671004.

Full text
Abstract:
This thesis addresses the problem of optimal management of water resources in inland navigation networks from a control theory perspective. In particular, the main objective to be attained consists in guaranteeing the navigability condition of the network, i.e., ensuring that the water levels are such that vessels can travel safely. More specifically, the water levels must be kept within an interval around the setpoint. Other common objectives include minimizing the operational cost and ensuring a long lifespan of the equipment. However, inland navigation networks are large-scale systems characterized by a number of features that complicate their management, namely complex dynamics, large time delays and negligible bottom slopes. In order to achieve the optimal management, the efficient control of the hydraulic structures, e.g., gates, weirs and locks, must be ensured. To this end, a control-oriented modeling approach is derived based on an existing simplified model obtained from the Saint-Venant equations. This representation reduces the complexity of the original model, provides flexibility and allows current and delayed information to be coordinated in a systematic manner. However, the resulting model formulation belongs to the class of delayed descriptor systems, for which standard control and state estimation tools would need to be extended. Instead, model predictive control and moving horizon estimation can be easily adapted for this formulation, as well as being able to deal with physical and operational constraints in a natural manner. Due to the large dimensionality of inland navigation networks, a centralized implementation is often neither possible nor desirable. In this regard, non-centralized approaches are considered, decomposing the overall system into subsystems and distributing the computational burden among the local agents, each of them in charge of meeting the local objectives. Given the fact that inland navigation networks are strongly coupled systems, a distributed approach is followed, featuring a communication protocol among local agents. Despite the optimality of the computed solutions, state estimation will only be effective provided that the sensors acquire reliable data. Likewise, the control actions will only be applied correctly if the actuators are not impacted by faults. Indeed, any error can lead to an inefficient management of the system. Therefore, the last part of the thesis is concerned with the design of supervisory strategies that allow faults in inland navigation networks to be detected and isolated. All the presented modeling, centralized and distributed control and state estimation and fault diagnosis approaches are applied to a realistic case study based on the inland navigation network in the north of France to validate their effectiveness.
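To illustrate the receding-horizon idea in the control part, here is a hedged sketch (Python/cvxpy) of a single model predictive control step for one navigation reach modeled as a simple storage element; the dynamics, bounds and weights are invented placeholders and ignore the delays and couplings handled by the thesis's distributed formulation.

    import cvxpy as cp
    import numpy as np

    horizon, dt, area = 24, 0.5, 1.0e5          # steps, hours, m^2 (hypothetical reach)
    level_ref, level_now = 4.0, 3.8             # m
    inflow = 2.0 * np.ones(horizon)             # forecast lateral inflow, m^3/s

    u = cp.Variable(horizon)                    # controlled gate outflow, m^3/s
    h = cp.Variable(horizon + 1)                # water level, m

    constraints = [h[0] == level_now, u >= 0.0, u <= 10.0,
                   h >= level_ref - 0.2, h <= level_ref + 0.2]   # navigability band
    for t in range(horizon):
        constraints.append(h[t + 1] == h[t] + dt * 3600.0 * (inflow[t] - u[t]) / area)

    cost = cp.sum_squares(h - level_ref) + 0.01 * cp.sum_squares(u[1:] - u[:-1])
    cp.Problem(cp.Minimize(cost), constraints).solve()
    print(u.value[0])   # apply only the first move, then re-solve at the next step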
APA, Harvard, Vancouver, ISO, and other styles
39

Asgharzadeh, Shishavan Reza. "Nonlinear Estimation and Control with Application to Upstream Processes." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5291.

Full text
Abstract:
Subsea development and production of hydrocarbons is challenging due to remote and harsh conditions. Recent technology development with high speed communication to subsea and downhole equipment has created a new opportunity to both monitor and control abnormal or undesirable events with a proactive and preventative approach rather than a reactive approach. Two specific technology developments are high speed, long-distance fiber optic sensing for production and completion systems and wired pipe for drilling communications. Both of these communication systems offer unprecedented high speed and accurate sensing of equipment and processes that are susceptible to uncontrolled well situations, leaks, issues with flow assurance, structural integrity, and platform stability, as well as other critical monitoring and control issues. The scope of this dissertation is to design monitoring and control systems with new theoretical developments and practical applications. For estimators, a novel l1-norm method is proposed that is less sensitive to data with outliers, noise, and drift in recovering the true value of unmeasured parameters. For controllers, a similar l1-norm strategy is used to design optimal control strategies that utilize a comprehensive design with multivariate control and nonlinear dynamic optimization. A framework for solving large scale dynamic optimization problems with differential and algebraic equations is detailed for estimation and control. A first area of application is in fiber optic sensing and automation for subsea equipment. A post-installable fiber optic clamp is used to transmit structural information for a tension leg platform. A proposed controller automatically performs ballast operations that both stabilize the floating structure and minimize fatigue damage to the tendons that hold the structure in place. A second area of application is with managed pressure drilling with moving horizon estimation and nonlinear model predictive control. The purpose of this application is to maximize rate of drilling penetration, maintain pressure in the borehole, respond to unexpected gas influx, detect cuttings loading and pack-off, and better manage abnormal events with the drilling process through automation. The benefit of high speed data accessibility is quantified as well as the potential benefit from a combined control strategy versus separate controllers.
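A hedged sketch of the l1-norm estimation idea (Python/SciPy): fitting a parameter by minimizing the sum of absolute residuals, which is less sensitive to outliers than least squares. The data and model are synthetic stand-ins, not the dissertation's drilling or platform models.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 10.0, 200)
    y = 3.0 * t + rng.normal(scale=0.5, size=t.size)
    y[::25] += 20.0                              # inject gross outliers

    def l1_cost(theta):
        return np.sum(np.abs(y - theta[0] * t))  # l1-norm objective

    theta_l1 = minimize(l1_cost, x0=[1.0], method="Nelder-Mead").x
    theta_l2 = np.polyfit(t, y, 1)[0]
    print(theta_l1[0], theta_l2)                 # the l1 estimate stays closer to the true slope 3.0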
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Guojun. "Some Bayesian Methods in the Estimation of Parameters in the Measurement Error Models and Crossover Trial." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1076852153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Jeunesse, Paulien. "Estimation non paramétrique du taux de mort dans un modèle de population générale : Théorie et applications. A new inference strategy for general population mortality tables Nonparametric adaptive inference of birth and death models in a large population limit Nonparametric inference of age-structured models in a large population limit with interactions, immigration and characteristics Nonparametric test of time dependance of age-structured models in a large population limit." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED013.

Full text
Abstract:
In this thesis, we study the mortality rate in different population models, with applications to demography and biology. The mathematical framework combines statistics of processes, nonparametric estimation and analysis. In a first part, motivated by an actuarial problem, an algorithm is proposed to estimate mortality tables, which are useful in insurance. This algorithm is based on a deterministic population model. The new estimates improve on current results by taking the global population dynamics into account: births are incorporated into the model when computing the mortality rate. These estimates are also linked with previous work, ensuring the theoretical continuity of our approach, a point of great importance in actuarial science. In a second part, we are interested in estimating the mortality rate in a stochastic population model, using tools from statistics of processes and nonparametric estimation. Since the mortality rate is a function of two variables, time and age, we propose minimax-optimal and adaptive estimators, in an anisotropic setting, for the mortality and the population density, as well as non-asymptotic concentration inequalities quantifying the distance between the stochastic model and the deterministic limit used in the first part. We show that these estimators remain optimal in a model where the mortality rate depends on interactions, as in the logistic population case. In a third part, we consider a test to detect the presence of interactions in the mortality rate; in effect, the test assesses the time dependence of this rate. Under the assumption that the time dependence comes only from the interactions, we show that the presence of interactions can be detected, and we propose a practical algorithm to perform this test.
APA, Harvard, Vancouver, ISO, and other styles
42

Lusivika, Nzinga Clovis. "Estimation d’effets individuels de traitements pris en combinaison dans les études observationnelles." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS218.

Full text
Abstract:
Randomized controlled trials cannot always be implemented to estimate the causal effect of a therapeutic strategy without bias; observational studies then constitute an alternative for evaluating treatment effects. Four types of methodological difficulties are of particular interest in such studies: 1) confounding by indication; 2) the presence of time-dependent (TD) confounders; 3) a relationship between a TD treatment and its effect that may vary over time; and 4) in real life, patients often receive several treatments, sequentially or simultaneously. In this context, evaluating the individual effect of each treatment is a methodological challenge. The objective of this thesis is to propose a methodological framework that accommodates these difficulties and allows the individual effects of treatments to be estimated correctly in a multi-treatment setting within an observational study. We evaluated the performance of the marginal structural Cox model for estimating the individual and joint effects of two treatments and showed that it performs well in the presence of TD confounding and of an interaction between the two treatments, underlining the importance of estimating the interaction term when exploring the effect of combination therapy. We also compared the performance of the weighted cumulative exposure (WCE) marginal structural Cox model with that of a conventional TD WCE Cox model for estimating time-varying treatment effects in the presence of time-dependent confounding, and showed that the former performs better and can be applied to real data whatever the strength of the time-dependent confounding.
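As a hedged sketch of the weighting machinery underlying a marginal structural Cox model (Python/scikit-learn), the snippet below computes stabilized inverse-probability-of-treatment weights from a propensity model; the data frame and column names are hypothetical, and fitting the weighted Cox model itself (with any survival library that accepts case weights) is only indicated in a comment.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # df: one row per subject-interval with treatment A (0/1) and confounders L1, L2 (synthetic).
    rng = np.random.default_rng(3)
    n = 1000
    df = pd.DataFrame({"L1": rng.normal(size=n), "L2": rng.normal(size=n)})
    p_treat = 1.0 / (1.0 + np.exp(-(0.5 * df["L1"] - 0.3 * df["L2"])))
    df["A"] = rng.binomial(1, p_treat.to_numpy())

    ps = LogisticRegression().fit(df[["L1", "L2"]], df["A"]).predict_proba(df[["L1", "L2"]])[:, 1]
    p_marg = df["A"].mean()
    df["sw"] = np.where(df["A"] == 1, p_marg / ps, (1 - p_marg) / (1 - ps))  # stabilized weights

    # A Cox regression of the outcome on A, weighted by df["sw"], would then
    # estimate the marginal treatment effect free of the measured confounding.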
APA, Harvard, Vancouver, ISO, and other styles
43

Zhu, Shousheng. "Modeling, identifiability analysis and parameter estimation of a spatial-transmission model of chikungunya in a spatially continuous domain." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2341/document.

Full text
Abstract:
In different fields of research, modeling has become an effective tool for studying and predicting the possible evolution of a system, particularly in epidemiology. Owing to globalization and to the genetic mutation of certain diseases or transmission vectors, several epidemics have appeared in recent years in regions not previously affected. In this thesis, a model describing the transmission of the chikungunya epidemic to the human population is studied. As a novelty, this model incorporates the spatial mobility of humans, an interesting factor that has influenced the re-emergence of several epidemic diseases; the displacement of mosquitoes is omitted since it is limited to a few meters. The complete model (ODE-PDE model) is thus composed of a reaction-diffusion system (taking the form of semi-linear parabolic partial differential equations, PDEs) coupled with ordinary differential equations (ODEs). We first prove the existence, uniqueness, positivity and boundedness of a global solution of this model and then give some numerical simulations. In such a model, some parameters are not directly accessible from experiments and have to be estimated numerically. Before searching for their values, however, it is essential to verify the identifiability of the parameters, in order to assess whether the set of unknown parameters can be uniquely determined from the data; this study ensures that numerical procedures can be successful, and if identifiability is not ensured, supplementary data have to be added. A first identifiability study had been done for the ODE model under the assumption that the number of eggs can easily be counted; however, after discussion with epidemiology researchers, it appears that it is the number of larvae that can be estimated week by week. We therefore carry out an identifiability study for the novel ODE-PDE model under this assumption. Thanks to an integration of one of the model equations, simpler equations linking the inputs, outputs and parameters are obtained, which greatly simplifies the identifiability study. Based on this study, a method and a numerical procedure are proposed for estimating the parameters without any prior knowledge of them.
APA, Harvard, Vancouver, ISO, and other styles
44

Hou, Chuanchuan. "Vibration-based damage identification with enhanced frequency dataset and a cracked beam element model." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20434.

Full text
Abstract:
Damage identification is an important topic in structural assessment and structural health monitoring (SHM). Vibration-based identification techniques use modal data to identify the existence, location and severity of possible damages in structures, often via a numerical model updating procedure. Among other factors influencing the practicality and reliability of a damage identification approach, two are of primary interest to this study. The first one concerns the amount and quality of modal data that can be used as ‘response’ data for the model updating. It is generally recognised that natural frequencies can be measured with relatively high accuracy; however, their number is limited. Mode shapes, on the other hand, are susceptible to larger measurement errors. Seeking additional modal frequency data is therefore of significant value. The second one concerns the errors at the numerical (finite element) model level, particularly in the representation of the effect of damage on the dynamic properties of the structure. An inadequate damage model can lead to inaccurate and even false damage identification. The first part of the thesis is devoted to enhancing the modal dataset by extracting the so called ‘artificial boundary condition’ (ABC) frequencies in a real measurement environment. The ABC frequencies correspond to the natural frequencies of the structure with a perturbed boundary condition, but can be generated without the need of actually altering the physical support condition. A comprehensive experimental study on the extraction of such frequencies has been conducted. The test specimens included steel beams of relatively flexible nature, as well as thick and stiffer beams made from metal material and reinforced concrete, to cover the typical variation of the dynamic characteristics of real-life structures in a laboratory condition. The extracted ABC frequencies are subsequently applied in the damage identification in beams. Results demonstrate that it is possible to extract the first few ABC frequencies from the modal testing in different beam settings for a variety of ABC incorporating one or two virtual pin supports. The inclusion of ABC frequencies enables the identification of structural damages satisfactorily without the necessity to involve the mode shape information. The second part of the thesis is devoted to developing a robust model updating and damage identification approach for beam cracks, with a special focus on thick beams which present a more challenging problem in terms of the effect of a crack than slender beams. The priority task has been to establish a crack model which comprehensively describes the effect of a crack to reduce the modelling errors. A cracked Timoshenko beam element model is introduced for explicit beam crack identification. The cracked beam element model is formulated by incorporating an additional flexibility due to a crack using the fracture mechanics principles. Complex effects in cracked thick beams, including shear deformation and coupling between transverse and longitudinal vibrations, are represented in the model. The accuracy of the cracked beam element model for predicting modal data of cracked thick beams is first verified against numerically simulated examples. The consistency of predictions across different modes is examined in comparison with the conventional stiffness reduction approach. 
Upon satisfactory verification, a tailored model updating procedure incorporating an adaptive discretisation approach is developed for the implementation of the cracked beam element model for crack identification. The updating procedure is robust in that it has no restriction on the location, severity and number of cracks to be identified. Example updating results demonstrate that satisfactory identification can be achieved for practically any configurations of cracks in a beam. Experimental study with five solid beam specimens is then carried out to further verify the developed cracked beam element model. Both forward verification and crack damage identification with the tested beams show similar level of accuracy to that with the numerically simulated examples. The cracked beam element model can be extended to crack identification of beams with complex cross sections. To do so the additional flexibility matrix for a specific cross-section type needs to be re-formulated. In the present study this is done for box sections. The stress intensity factors (SIF) for a box section as required for the establishment of the additional flexibility matrix are formulated with an empirical approach combining FE simulation, parametric analysis and regression analysis. The extended cracked beam element model is verified against both FE simulated and experimentally measured modal data. The model is subsequently incorporated in the crack identification for box beams. The successful extension of the cracked beam element model to the box beams paves the way for similar extension to the crack identification of other types of sections in real-life engineering applications.
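To make the modal-analysis side concrete, the following is a hedged sketch (Python/NumPy/SciPy) that assembles Euler-Bernoulli beam elements for a cantilever, represents a crack by a simple local stiffness reduction (the conventional approach the thesis compares against, not its fracture-mechanics cracked element), and extracts natural frequencies from the generalized eigenproblem; all section and material values are illustrative.

    import numpy as np
    from scipy.linalg import eigh

    E, rho, b, h, L, n_el = 210e9, 7850.0, 0.05, 0.02, 1.0, 20
    A, I, le = b * h, b * h**3 / 12.0, L / n_el

    def k_e(EI, l):
        return EI / l**3 * np.array([[12, 6*l, -12, 6*l],
                                     [6*l, 4*l**2, -6*l, 2*l**2],
                                     [-12, -6*l, 12, -6*l],
                                     [6*l, 2*l**2, -6*l, 4*l**2]])

    def m_e(rhoA, l):
        return rhoA * l / 420.0 * np.array([[156, 22*l, 54, -13*l],
                                            [22*l, 4*l**2, 13*l, -3*l**2],
                                            [54, 13*l, 156, -22*l],
                                            [-13*l, -3*l**2, -22*l, 4*l**2]])

    ndof = 2 * (n_el + 1)
    K, M = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
    for e in range(n_el):
        stiff = 0.5 if e == 5 else 1.0            # "crack": 50% stiffness loss in element 6
        dofs = slice(2 * e, 2 * e + 4)
        K[dofs, dofs] += stiff * k_e(E * I, le)
        M[dofs, dofs] += m_e(rho * A, le)

    K, M = K[2:, 2:], M[2:, 2:]                   # clamp the left end (cantilever)
    freqs = np.sqrt(eigh(K, M, eigvals_only=True)[:5]) / (2 * np.pi)
    print(freqs)                                   # first few natural frequencies in Hz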
APA, Harvard, Vancouver, ISO, and other styles
45

Pinjari, Abdul Rawoof. "An analysis of household vehicle ownership and utilization patterns in the United States using the 2001 National Household Travel Survey." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Resende, Rafael Tassinari. "Arquitetura genética de componentes periódicos de crescimento de Hevea brasiliensis." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/11/11137/tde-25022014-104512/.

Full text
Abstract:
In traditional QTL mapping methodologies, the causal relationships between phenotypic traits and QTLs are usually not considered. This work used longitudinal growth data from progenies of the cross between the parents PB217 and PR255 in a rubber tree (Hevea brasiliensis) plantation located in an area with two well-defined periods throughout the year (high and medium temperatures; high and low precipitation rates). The experiment comprises four measurements of diameter and height increment, which are periodic components of total crop growth, recorded over a two-year interval (from 18 to 52 months of plant age), alternating two periods in a climatically favorable season and two in an unfavorable season. Phenotypic and genetic relationship parameters were studied in order to construct a genetic-architecture diagram that accounts for causal relationships. The phenotypic data were modeled with an elaborate multi-trait model accounting for the spatial variation of the experimental plots and the variation among measurement periods; variance-covariance (VCOV) matrices appropriate to the data were fitted, and meteorological data describing each period were incorporated. The adjusted genotypic values from these models were then used to detect QTLs, and phenotypes and genotypes were subsequently linked in a structural causal diagram capable of inferring genetic patterns of growth behavior. A total of 13 QTLs were mapped, two of which coincided for periodic diameter components in the unfavorable season. It was possible to identify additive and dominance effects relevant to development during periods of lower temperatures, to point to the parent PR255 as a carrier of alleles important for development under adverse climate, to estimate indirect effects of QTLs not mapped for certain traits, and to explain the growth behavior pattern over the period in which the progenies were evaluated. This approach proved useful for marker-assisted breeding programs, as it aggregates information pertinent to the selection of the best genetic material.
APA, Harvard, Vancouver, ISO, and other styles
47

Banerjee, Amlan. "Understanding activity engagement and time use patterns in a developing country context." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Cidade, Julio Cezar de Mello. "Imagem de um Conselho Profissional: um estudo utilizando equações estruturais." Universidade do Estado do Rio de Janeiro, 2010. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=6220.

Full text
Abstract:
The study and measurement of corporate image, especially that of a professional council, are essential to support managerial decisions in these institutions. Since no valid and reliable scale was identified, in the bibliographical survey completed for this research, for measuring the corporate image of a professional council, this work uses structural equation modeling (SEM) to confirm the hypothetical model proposed by Peres (2004) and Carvalho (2009), which builds on the study of Folland, Peacock and Pelfrey (1991) and holds that corporate image is composed of two factors whose perception impacts the evaluation of organizational performance. The confirmatory analysis is the first step of a structural equation model that additionally shows the influence of image on organizational performance as perceived by a sample of potential members of a council of professional accountants in the state of Rio de Janeiro, Brazil. The results provide significant statistical support for the proposed model: it is consistent, shows a very good fit, and can be applied to future similar samples. The scale therefore deserves further attention as a reliable tool for measuring the image of professional councils, and, since its components impact performance significantly, the measured image may also be useful for the management of such councils.
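In LaTeX form, a two-factor measurement model with a structural regression of perceived performance on the image factors can be written as the following hedged sketch (generic SEM notation, not the exact parameterization of the dissertation):

    \begin{aligned}
    x_i &= \lambda_{i1}\,\xi_1 + \delta_i, \quad i = 1,\dots,p_1, \\
    x_j &= \lambda_{j2}\,\xi_2 + \delta_j, \quad j = p_1+1,\dots,p_1+p_2, \\
    \eta &= \gamma_1\,\xi_1 + \gamma_2\,\xi_2 + \zeta,
    \end{aligned}

where \xi_1 and \xi_2 are the two image factors, \eta is perceived performance, and the loadings \lambda and structural coefficients \gamma are estimated jointly by SEM.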
APA, Harvard, Vancouver, ISO, and other styles
49

Kang, Boo-Sung. "Empirical study on the Korean treasury auction focusing on the revenue comparison in multiple versus single price auction." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3051.

Full text
Abstract:
This dissertation seeks to answer empirically the question of the revenue ranking between the multiple price auction and the single price auction, and also to obtain empirical evidence on the efficiency ranking between the two. Under the assumptions of symmetric bidders and private independent values (PIV), I derive the optimal bidding conditions for both auction formats. Following the structural model estimation approach, I estimate the underlying distribution of the market clearing price using a nonparametric resampling strategy and recover the bidders' unknown true valuations corresponding to each observed bid point. With these estimated valuations, I calculate what the upper bound of the revenue would have been under the Vickrey auction in order to perform a counterfactual revenue comparison with the actual revenue. I find that, ex post, the multiple price auction yields more revenue to the Korean Treasury than the alternative. I also investigate the efficiency ranking by comparing the number of bids switched and the amount of surplus change that would occur if the bidders were assumed to report their true valuations as their bids. I find that the multiple price auction is also superior to the alternative in efficiency, which supports the current theoretical prediction. Finally, I investigate the robustness of my model and empirical results by relaxing the previous assumptions. I first extend the model and estimation to the case of asymmetric bidders, where the bidders are divided into two groups based on their size. This shows that the model and estimation framework remain valid and that the empirical findings are very similar to the symmetric case. I also test for the presence of a common value (CV) component in the bidders' valuation function, proposing a simple regression model that adopts the idea of a policy-experiment approach. The results are generally inconclusive, but I find some evidence supporting PIV for relatively high bid prices and CV for lower bid prices.
APA, Harvard, Vancouver, ISO, and other styles
50

Mancino, Antonio. "On the structural and dynamical properties of a new class of galaxy models with a central BH." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18722/.

Full text
Abstract:
This thesis work focuses on the dynamical properties of two-component galaxy models characterized by a stellar density distribution described by a Jaffe profile, and a galaxy (stars plus dark matter) density distribution following an r^(-3) shape at large radii. The dark matter (hereafter, DM) density profile is defined by the difference between the galaxy and the stellar profiles. The orbital structure of the stellar component is described by Osipkov-Merritt (OM) radial anisotropy, while that of the DM halo is assumed isotropic; a black hole (BH) is also added at the center of the galaxy. The thesis is organized as follows. In Chapter 2 the main structural properties of the models are presented, and the conditions required to have a nowhere negative and monotonically decreasing DM halo density profile are derived; a discussion is also given of how the DM component can be built in order to have the same asymptotic behaviour, in the outer regions and near the center, as the Navarro-Frenk-White (NFW) profile. In Chapter 3 an investigation of the phase-space properties of the models is carried out, both from the point of view of the necessary and sufficient conditions for consistency and from direct inspection of the distribution function; the minimum value of the anisotropy radius for consistency is derived in terms of the galaxy parameters. In Chapter 4 the analytical solution of the Jeans equations with OM anisotropy is presented, together with the projection of the velocity dispersion profile at small and large radii. Finally, in Chapter 5 the global quantities entering the Virial theorem are explicitly calculated; these can be used for energetic considerations that are briefly mentioned, and allow us to determine the fiducial anisotropy limit required to prevent the onset of Radial Orbit Instability as a function of the galaxy parameters. The main results are summarized in Chapter 6, and some technical details are given in the Appendices.
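For reference, the stellar density profile and anisotropy law named above take the standard forms (a sketch in the usual notation; the two-component generalizations of the thesis are built on top of these):

    \rho_*(r) = \frac{M_*\, r_*}{4\pi\, r^2 (r + r_*)^2},
    \qquad
    \beta(r) = \frac{r^2}{r^2 + r_a^2},

where M_* is the total stellar mass, r_* the Jaffe scale radius, and r_a the Osipkov-Merritt anisotropy radius.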
APA, Harvard, Vancouver, ISO, and other styles
