
Dissertations / Theses on the topic 'Semiparametric dynamics'


Consult the top 15 dissertations / theses for your research on the topic 'Semiparametric dynamics.'


1

Silveira, Neto Paulo Corrêa da. "Utilização de cópulas com dinâmica semiparamétrica para estimação de medidas de risco de mercado." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/147464.

Abstract:
Market risk management, i.e. managing the risk associated with financial loss resulting from market price fluctuations, is fundamental to financial institutions and portfolio managers. Asset allocation involves efficient risk/return decisions, often restricted by an investment policy statement. Many traditional models simplify risk estimation by imposing several assumptions, such as symmetrical distributions, purely linear correlations, and normality. The dependence structure of these time series can be modelled flexibly using copulas. This approach handles a complex multivariate time series in two blocks: marginal distribution estimation and dependence estimation. The dynamic structure of these copulas allows a dependence parameter that changes over time, and the semiparametric option makes it possible to model any functional form the dynamic structure may present. We compare the dynamic semiparametric model of Hafner and Reznikova (2010) with the dynamic but fully parametric model of Patton (2006). All copulas in this work are bivariate. The data consist of four Brazilian stock market time series. For each pair, ARMA-GARCH models are used for the marginals, while the dependence between the series is modeled with the two dynamic copula methods above. To compare the methodologies, we estimate the Value at Risk and Expected Shortfall of the portfolios built for each pair of assets. Hypothesis tests are implemented to verify the quality of the risk estimates.
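The two-step estimation described above (marginals first, then the dependence) can be sketched in a few lines. This is a minimal illustration under stated assumptions: a static Gaussian copula stands in for the dynamic semiparametric copulas of Hafner and Reznikova, simulated residuals stand in for ARMA-GARCH-filtered returns, and all parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-ins for ARMA-GARCH-filtered residuals of two assets (simulated).
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=2000)

# Step 1 (marginals): probability-integral transform via empirical CDFs.
u = np.column_stack([stats.rankdata(z[:, j]) / (len(z) + 1) for j in range(2)])

# Step 2 (dependence): fit a static Gaussian copula by correlating the
# normal scores (a simplification of the dynamic copulas in the abstract).
rho = float(np.corrcoef(stats.norm.ppf(u).T)[0, 1])

# Risk measures for an equally weighted portfolio, by copula simulation.
sim = rng.multivariate_normal([0, 0], [[1.0, rho], [rho, 1.0]], size=100_000)
port = sim.mean(axis=1)
var_95 = float(-np.quantile(port, 0.05))      # Value at Risk
es_95 = float(-port[port <= -var_95].mean())  # Expected Shortfall
print(round(rho, 2), round(var_95, 2), round(es_95, 2))
```

The rank transform in step 1 makes step 2 insensitive to the marginal specification, which is the main practical appeal of the copula decomposition.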
2

Borak, Szymon. "Dynamic semiparametric factor models." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2008. http://dx.doi.org/10.18452/15802.

Abstract:
High-dimensional regression problems with dynamic behavior occur frequently in many fields of science. The dynamics of the whole complex system is typically analyzed through the time propagation of a small number of factors, which are loaded with time-invariant functions of explanatory variables. This thesis considers a dynamic semiparametric factor model with nonparametric loading functions. We start with a short discussion of related statistical techniques and present the properties of the model. Real-data applications are then discussed, with particular focus on implied volatility dynamics and the resulting factor hedging of barrier options.
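As rough intuition for the factor structure described above, a rank-truncated SVD recovers the span of time-invariant loading functions and the factor time series from a panel of curves. This toy sketch uses synthetic data and skips the nonparametric smoothing that defines the actual DSFM; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy panel Y[t, x]: daily curves observed on a fixed grid (illustrative).
grid = np.linspace(0, 1, 50)
m1, m2 = np.sin(np.pi * grid), np.cos(np.pi * grid)   # "loading functions"
Z = rng.standard_normal((200, 2))                     # latent factor series
Y = Z @ np.vstack([m1, m2]) + 0.01 * rng.standard_normal((200, 50))

# A crude stand-in for the DSFM fit: truncated SVD recovers the signal
# subspace (loadings and factors, up to rotation).
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
Y_hat = U[:, :2] * s[:2] @ Vt[:2]
rel_err = float(np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y))
print(round(rel_err, 3))
```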
3

Song, Song. "Confidence bands in quantile regression and generalized dynamic semiparametric factor models." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2010. http://dx.doi.org/10.18452/16341.

Abstract:
In many applications it is necessary to know the stochastic fluctuation of the maximal deviations of nonparametric quantile estimates, e.g. to check various parametric models. Uniform confidence bands are therefore constructed for nonparametric quantile estimates of regression functions. The first method is based on strong approximations of the empirical process and extreme value theory; the strong uniform consistency rate is also established under general conditions. The second method is based on bootstrap resampling, and the bootstrap approximation is proved to provide a substantial improvement. The case of multidimensional and discrete regressor variables is handled with a partial linear model, and a labor market analysis illustrates the method. High-dimensional time series which reveal nonstationary and possibly periodic behavior occur frequently in many fields of science, e.g. macroeconomics, meteorology, medicine and financial engineering. A common approach is to split the modeling of a high-dimensional time series into the time propagation of low-dimensional time series and high-dimensional time-invariant functions via dynamic factor analysis. We propose a two-step estimation procedure. In the first step, we detrend the time series by incorporating a time basis selected by a group Lasso-type technique and choose the space basis via smoothed functional principal component analysis; we show the properties of these estimators under dependence. In the second step, we obtain the detrended low-dimensional (stationary) stochastic process.
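The bootstrap route to uniform bands can be illustrated schematically: estimate a conditional median curve, bootstrap its maximal deviation over a grid, and use the upper quantile of those deviations as the band half-width. The moving-window median and naive pairs bootstrap below are simplified stand-ins for the kernel quantile smoother and the resampling scheme analyzed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sample: we band the conditional median with a moving-window median
# (a crude stand-in for a kernel quantile smoother).
x = np.sort(rng.uniform(0, 1, 400))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(400)

grid = np.linspace(0.1, 0.9, 30)
h = 0.08  # window half-width (illustrative choice)

def med_curve(xs, ys):
    return np.array([np.median(ys[np.abs(xs - g) < h]) for g in grid])

fit = med_curve(x, y)

# Bootstrap the maximal deviation over the grid to calibrate a uniform band.
sups = []
for _ in range(200):
    idx = rng.integers(0, 400, 400)
    sups.append(np.max(np.abs(med_curve(x[idx], y[idx]) - fit)))
c = float(np.quantile(sups, 0.95))  # uniform band half-width
lower, upper = fit - c, fit + c
print(round(c, 3))
```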
4

Fritz, Marlon [Verfasser]. "Empirical analysis of dynamic macroeconomic growth and business cycle processes - using modern non- and semiparametric approaches - / Marlon Fritz." Paderborn : Universitätsbibliothek, 2019. http://d-nb.info/1191831043/34.

5

Song, Song [Verfasser], Wolfgang [Akademischer Betreuer] Härdle, and Ya'acov [Akademischer Betreuer] Ritov. "Confidence bands in quantile regression and generalized dynamic semiparametric factor models / Song Song. Gutachter: Wolfgang Karl Härdle ; Ya’acov Ritov." Berlin : Humboldt Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2010. http://d-nb.info/1015129803/34.

6

Tencaliec, Patricia. "Developments in statistics applied to hydrometeorology : imputation of streamflow data and semiparametric precipitation modeling." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM006/document.

Abstract:
Precipitation and streamflow are the two most important hydrometeorological variables when analyzing river watersheds. They provide fundamental insights for water resources management, design and planning, such as urban water supply, hydropower, forecasting of flood or drought events, and irrigation systems for agriculture. In this PhD thesis we approach two different problems. The first originates from the study of observed streamflow data. To properly characterize the overall behavior of a watershed, long datasets spanning tens of years are needed. However, the quality of the measurements decreases the further back in time we go, and blocks of data of different lengths are missing. These missing intervals represent a loss of information and can cause erroneous interpretation of summary statistics or unreliable analysis. The method we propose for streamflow imputation is based on dynamic regression models (DRMs), more specifically a multiple linear regression with ARIMA residual modeling. Unlike previous studies that address either the inclusion of multiple explanatory variables or the modeling of the residuals from a simple linear regression, DRMs take both aspects into account. We apply this method to reconstruct the daily streamflow data of eight stations in the Durance watershed in the south-east of France over a period of 107 years. The proposed method reconstructs the data without requiring additional explanatory variables. We compare the results of our model with those obtained from a complex approach based on analogs coupled to a hydrological model and from a nearest-neighbor approach. In the majority of cases, DRMs show increased performance when reconstructing missing blocks of various lengths, in some cases ranging up to 20 years. The second problem addresses the statistical modeling of precipitation amounts. Research in this area is currently very active, as the distribution of precipitation is heavy-tailed and, at the start of this thesis, no general method modeled the entire range of data with high performance. Recently, a new class of distributions called the extended generalized Pareto distribution (EGPD) was introduced to model the full range of precipitation amounts, with a focus on parametric families. These models improve on previously proposed distributions but lack flexibility in modeling the bulk of the distribution. We improve on this aspect by proposing, in the second part of the thesis, two new models relying on semiparametric methods. The first is a transformed kernel estimator based on the EGPD: the data are first transformed with the EGPD cdf, and the density of the transformed data is then estimated with a nonparametric kernel density estimator. We compare the proposed method with the parametric EGPD on several simulated scenarios, as well as on two precipitation datasets from the south-east of France. The proposed method behaves better than the parametric EGPD, the mean integrated absolute error (MIAE) of the density being in all cases almost twice as small. The second approach is a new model from the general EGPD class: a semiparametric EGPD based on Bernstein polynomials, more specifically a sparse mixture of beta densities. Again, we compare our results with the parametric EGPD on both simulated and real datasets. As before, the MIAE of the density is considerably reduced, an effect that becomes even more obvious as the sample size increases.
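The transformed kernel estimator described above can be sketched as follows. Since the EGPD family is not available in SciPy, a fitted gamma distribution stands in for the parametric family; the point is the mechanics of transform, kernel-smooth, and back-transform via the change-of-variables formula f(x) = g(F(x)) F'(x). All parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Positive "precipitation amounts"; a gamma family stands in for the EGPD.
data = stats.gamma.rvs(a=0.8, scale=5.0, size=1500, random_state=rng)

# Step 1: fit the parametric family and map the data to (0, 1) via its cdf.
a_hat, loc_hat, sc_hat = stats.gamma.fit(data, floc=0)

# Step 2: kernel density estimate on the transformed scale, then
# back-transform by the change-of-variables formula f(x) = g(F(x)) F'(x).
kde = stats.gaussian_kde(stats.gamma.cdf(data, a_hat, loc=loc_hat, scale=sc_hat))

def density(x):
    F = stats.gamma.cdf(x, a_hat, loc=loc_hat, scale=sc_hat)
    f = stats.gamma.pdf(x, a_hat, loc=loc_hat, scale=sc_hat)
    return kde(F) * f

xs = np.linspace(0.1, 30.0, 300)
mass = float(np.sum(density(xs)) * (xs[1] - xs[0]))  # ~1 if the fit is sane
print(round(mass, 2))
```

The kernel step frees the bulk of the density from the parametric shape while the cdf transform preserves the fitted tail behavior.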
7

Hobert, Anne [Verfasser], Axel [Akademischer Betreuer] Munk, Axel [Gutachter] Munk, and Tatyana [Gutachter] Krivobokova. "Semiparametric Estimation of Drift, Rotation and Scaling in Sparse Sequential Dynamic Imaging: Asymptotic theory and an application in nanoscale fluorescence microscopy / Anne Hobert ; Gutachter: Axel Munk, Tatyana Krivobokova ; Betreuer: Axel Munk." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2019. http://d-nb.info/1203875312/34.

8

Kolar, Mladen. "Uncovering Structure in High-Dimensions: Networks and Multi-task Learning Problems." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/229.

Abstract:
Extracting knowledge and providing insights into the complex mechanisms underlying noisy high-dimensional data sets is of utmost importance in many scientific domains. Statistical modeling has become ubiquitous in the analysis of high-dimensional functional data in search of a better understanding of cognition mechanisms, in the exploration of large-scale gene regulatory networks in the hope of developing drugs for lethal diseases, and in the prediction of stock market volatility in the hope of beating the market. Statistical analysis of such high-dimensional data sets is possible only if the estimation procedure exploits the hidden structures underlying the data. This thesis develops flexible estimation procedures with provable theoretical guarantees for uncovering unknown hidden structures in the data-generating process. Of particular interest are procedures that can be used on high-dimensional data sets where the number of samples n is much smaller than the ambient dimension p. Learning in high dimensions is difficult due to the curse of dimensionality; however, special problem structure makes inference possible. Due to its importance for scientific discovery, we emphasize consistent structure recovery throughout the thesis, with particular focus on two important problems: semiparametric estimation of networks and feature selection in multi-task learning.
9

Borak, Szymon [Verfasser]. "Dynamic semiparametric factor models / von Szymon Borak." 2008. http://d-nb.info/990911543/34.

10

Huang, Shih-Feng, and 黃士峰. "Financial Derivatives Pricing and Hedging - A Dynamic Semiparametric Approach." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/yuh4k2.

Abstract:
Ph.D. dissertation, National Sun Yat-sen University, Department of Applied Mathematics, 2008 (ROC academic year 96).
A dynamic semiparametric pricing method is proposed for financial derivatives, including European and American options and convertible bonds. The proposed method is an iterative procedure that uses nonparametric regression to approximate derivative values and parametric asset models to derive the continuation values. An extension to higher-dimensional option pricing is also developed, in which the dependence structure of the financial time series is modeled by copula functions. In the simulation study, we price one-dimensional American options and convertible bonds, and multi-dimensional American geometric-average options and max options. The one-dimensional underlying asset models considered include the Black-Scholes, jump-diffusion, and nonlinear asymmetric GARCH models; for the multivariate case we study copula models such as the Gaussian, Clayton and Gumbel copulae. Convergence of the method is proved under a continuity assumption on the transition densities of the underlying asset models, and the orders of the sup-norm errors are derived. Both the theoretical findings and the simulation results show that the proposed approach is tractable for numerical implementation and provides a unified and accurate technique for financial derivative pricing. The second part of this thesis studies option pricing and hedging for conditionally leptokurtic returns, an important feature of financial data. Risk-neutral models for log and simple return series with heavy-tailed innovations are derived by an extended Girsanov change of measure. The result is applicable to option pricing under the GARCH model with t innovations (GARCH-t) for simple return series, and the dynamic semiparametric approach is extended to compute the corresponding option prices. The hedging strategy consistent with the extended Girsanov change of measure is constructed and is shown to have smaller cost variation than the commonly used delta hedging under the risk-neutral measure. Simulation studies show the effect of using GARCH-normal models to compute the option prices and delta hedges of the GARCH-t model for plain vanilla and exotic options. The results indicate little pricing or hedging difference between the normal and t innovations for plain vanilla and Asian options, yet significant disparities arise for barrier and lookback options due to an improper distributional setting of the GARCH innovations.
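The "regress continuation values under a parametric asset model" idea is close in spirit to least-squares Monte Carlo. Below is a minimal sketch for an American put under Black-Scholes dynamics, with a cubic polynomial regression standing in for the nonparametric step; all parameters are illustrative and this is not the thesis's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# Black-Scholes paths (all parameters illustrative).
S0, K, r, sigma, T, steps, n = 100.0, 100.0, 0.03, 0.2, 1.0, 50, 20_000
dt = T / steps
z = rng.standard_normal((n, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z,
                          axis=1))

# Backward induction: regress discounted continuation values on the asset
# price, exercising where the immediate payoff beats the continuation value.
cash = np.maximum(K - S[:, -1], 0.0)
for t in range(steps - 2, -1, -1):
    cash *= np.exp(-r * dt)
    itm = K - S[:, t] > 0
    coef = np.polyfit(S[itm, t] / K, cash[itm], 3)
    cont = np.polyval(coef, S[itm, t] / K)
    payoff = K - S[itm, t]
    cash[itm] = np.where(payoff > cont, payoff, cash[itm])
price = float(np.exp(-r * dt) * cash.mean())
print(round(price, 2))
```

The price should sit slightly above the European Black-Scholes put value (about 6.46 for these parameters), reflecting the early-exercise premium.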
11

AbdelAziz, Salamh Mustafa. "Second-order least squares estimation in dynamic regression models." 2014. http://hdl.handle.net/1993/23512.

Abstract:
In this dissertation we propose two generalizations of the second-order least squares (SLS) approach in two popular dynamic econometric models: the regression model with time-varying nonlinear mean function and autoregressive conditionally heteroskedastic (ARCH) disturbances, and the linear dynamic panel data model. We use a semiparametric framework in both models, where the SLS approach is based only on the first two conditional moments of the response variable given the explanatory variables; there is no need to specify the distribution of the error components. For the ARCH model, under the assumption of a strong-mixing process with finite moments of some order, we establish the strong consistency and asymptotic normality of the SLS estimator. The optimal SLS estimator, which makes use of the additional information inherent in the conditional skewness and kurtosis of the process, is shown to be superior to the commonly used quasi-MLE, and the efficiency gain is significant when the underlying distribution is asymmetric. Moreover, large-scale simulation studies show that the optimal SLSE behaves better than the corresponding estimating-function estimator in finite samples. Its practical usefulness is illustrated with an empirical example on U.K. inflation. For the linear dynamic panel data model, we show that the SLS estimator is consistent and asymptotically normal for large N and finite T under fairly general regularity conditions, and that the optimal SLS estimator reaches a semiparametric efficiency bound. A specification test is developed for use whenever SLS is applied to real data. Monte Carlo simulations show that the optimal SLS estimator performs satisfactorily in finite samples compared to the first-differenced GMM and random-effects pseudo-ML estimators. The results apply under stationary/nonstationary processes, with or without exogenous regressors, and the performance of the optimal SLS is robust in the near-unit-root case. Finally, its practical usefulness is examined in an empirical study of U.S. airfares.
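The SLS idea of matching the first two conditional moments can be shown in a stripped-down static nonlinear regression rather than the ARCH or panel models of the thesis. This sketch uses an identity weight matrix (not the optimally weighted estimator discussed above); the model y = exp(b·x) + e and all parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulated data from y = exp(b*x) + e with homoskedastic noise.
b_true, sigma_true = 0.7, 0.5
x = rng.uniform(0, 2, 800)
y = np.exp(b_true * x) + sigma_true * rng.standard_normal(800)

def sls_loss(theta):
    b, s2 = theta
    m1 = np.exp(b * x)        # first conditional moment E[y | x]
    m2 = m1**2 + s2           # second conditional moment E[y^2 | x]
    # Identity-weighted SLS criterion over both moment conditions.
    return np.sum((y - m1) ** 2 + (y**2 - m2) ** 2)

res = minimize(sls_loss, x0=[0.1, 1.0], method="Nelder-Mead")
b_hat, s2_hat = (float(v) for v in res.x)
print(round(b_hat, 2), round(s2_hat, 2))
```

Using both moments simultaneously is what lets the optimally weighted version exploit conditional skewness and kurtosis for an efficiency gain over quasi-MLE.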
12

Kudela, Maria Aleksandra. "Statistical methods for high-dimensional data with complex correlation structure applied to the brain dynamic functional connectivity study." Diss., 2017. http://hdl.handle.net/1805/12835.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
A popular non-invasive brain activity measurement method is based on functional magnetic resonance imaging (fMRI). Such data are frequently used to study functional connectivity (FC), defined as statistical association among two or more anatomically distinct fMRI signals (Friston, 1994). FC has emerged in recent years as a valuable tool for a deeper understanding of neurodegenerative diseases and neuropsychiatric disorders, such as Alzheimer's disease and autism. Information about the complex association structure in high-dimensional fMRI data is often discarded by calculating an average across complex spatiotemporal processes without providing an uncertainty measure around it. First, we propose a nonparametric approach to estimate the uncertainty of dynamic FC (dFC) estimates. Our method is based on three components: an extension of a bootstrapping method for multivariate time series recently introduced by Jentsch and Politis (2015); sliding-window correlation estimation; and kernel smoothing. Second, we propose a two-step approach to analyze and summarize dFC estimates from a task-based fMRI study of social-to-heavy alcohol drinkers during stimulation with flavors. In the first step, we apply the method from the first paper to estimate dFC for each region-subject combination. In the second step, we use semiparametric additive mixed models to account for the complex correlation structure and to model dFC at the population level following the study's experimental design. Third, we propose to utilize the estimated dFC to study the system's modularity, defined as the mutually exclusive division of brain regions into blocks with intra-connectivity greater than that obtained by chance. As a result, we obtain a brain partition suggesting the existence of a common functionally-based brain organization. The main contribution of our work stems from the combination of methods from statistics, machine learning and network theory to provide statistical tools for studying brain connectivity from a holistic, multi-disciplinary perspective.
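The sliding-window correlation component of the dFC estimator is easy to illustrate on synthetic signals whose coupling switches halfway through the series; the bootstrap uncertainty and kernel-smoothing components are omitted here, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two synthetic "fMRI" signals: coupled in the first half, independent
# in the second (stand-ins for preprocessed BOLD series).
T = 400
common = rng.standard_normal(T)
a = common + 0.5 * rng.standard_normal(T)
b = np.where(np.arange(T) < T // 2,
             common + 0.5 * rng.standard_normal(T),   # coupled half
             rng.standard_normal(T))                  # uncoupled half

# Sliding-window correlation: the core of a dynamic FC estimate.
w = 60
dfc = np.array([np.corrcoef(a[t:t + w], b[t:t + w])[0, 1]
                for t in range(T - w)])
early, late = float(dfc[:100].mean()), float(dfc[-100:].mean())
print(round(early, 2), round(late, 2))
```

The estimated correlation should be high in early windows and near zero in late windows, which is exactly the kind of time-varying association the bootstrap in the abstract puts uncertainty bands around.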
13

Hobert, Anne. "Semiparametric Estimation of Drift, Rotation and Scaling in Sparse Sequential Dynamic Imaging: Asymptotic theory and an application in nanoscale fluorescence microscopy." Doctoral thesis, 2019. http://hdl.handle.net/11858/00-1735-0000-002E-E5B3-9.

14

Hartmann, Alexander. "Estimating rigid motion in sparse sequential dynamic imaging: with application to nanoscale fluorescence microscopy." Doctoral thesis, 2016. http://hdl.handle.net/11858/00-1735-0000-0023-3E08-0.

15

Shen, Hua. "Statistical Methods for Life History Analysis Involving Latent Processes." Thesis, 2014. http://hdl.handle.net/10012/8496.

Abstract:
Incomplete data often arise in the study of life history processes. Examples include missing responses, missing covariates, and unobservable latent processes, in addition to right censoring. This thesis develops statistical models and methods to address these problems as they arise in oncology and chronic disease, investigating estimation and inference in parametric, weakly parametric and semiparametric settings. Studies of chronic diseases routinely sample individuals subject to conditions on an event time of interest. In epidemiology, for example, prevalent cohort studies aiming to evaluate risk factors for survival following onset of dementia require subjects to have survived to the point of screening. In clinical trials designed to assess the effect of experimental cancer treatments on survival, patients are required to survive from the time of cancer diagnosis to recruitment. Such conditions yield samples featuring left-truncated event time distributions. Incomplete covariate data often arise in such settings, but standard methods do not account for the fact that the covariate distribution is also affected by left truncation. We develop a likelihood and an estimation algorithm for incomplete covariate data in these settings: an expectation-maximization algorithm handles the left truncation by using the covariate distribution conditional on the selection criterion. An extension to sub-group analyses in clinical trials is described for the case in which the stratification variable is incompletely observed. In studies of affective disorder, individuals are often observed to experience recurrent symptomatic exacerbations warranting hospitalization. Interest lies in modeling the occurrence of such exacerbations over time and identifying associated risk factors to better understand the disease process. In some patients, recurrent exacerbations are temporally clustered following disease onset but cease to occur after a period of time. We develop a dynamic mover-stayer model in which a canonical binary variable associated with each event indicates whether the underlying disease has resolved. An individual whose disease process has not resolved experiences events following a standard point process model governed by a latent intensity; if and when the disease process resolves, the complete-data intensity becomes zero and no further events arise. An expectation-maximization algorithm is developed for parametric and semiparametric model fitting based on a discrete-time dynamic mover-stayer model and a latent intensity-based model of the underlying point process. The method is applied to a motivating dataset from a cohort of individuals with affective disorder experiencing recurrent hospitalization for their mental health disorder. Interval-censored recurrent event data arise when the event of interest is not readily observed but the cumulative event count can be recorded at periodic assessment times. Extensions of the model-fitting techniques for the dynamic mover-stayer model that incorporate interval censoring are discussed. The likelihood and estimation algorithm are developed for piecewise-constant baseline rate functions and are shown to yield estimators with small empirical bias in simulation studies. Data on the cumulative number of damaged joints in patients with psoriatic arthritis are analysed as an illustrative application.
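The dynamic mover-stayer mechanism (events follow a latent intensity until the process resolves, after which the intensity is zero) can be illustrated with a small discrete-time simulation. Parameters are illustrative and the EM fitting step described in the abstract is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

# Discrete-time mover-stayer simulation: while "active", an event occurs
# each period with probability lam; after each event the disease resolves
# with probability p, and a resolved process generates no further events.
n, T, lam, p = 5000, 50, 0.3, 0.15
counts = np.zeros(n, dtype=int)
for i in range(n):
    active = True
    for t in range(T):
        if active and rng.random() < lam:   # event this period
            counts[i] += 1
            if rng.random() < p:            # process resolves
                active = False
mean_events = float(counts.mean())
print(round(mean_events, 2))
```

With these parameters the expected event count per subject works out to 0.3·(1 − 0.955^50)/0.045 ≈ 6, and the simulated mean should be close to that.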