Dissertations / Theses on the topic 'Pseudo maximum likelihood'
Hu, Huilin. "Large sample theory for pseudo-maximum likelihood estimates in semiparametric models /." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/8936.
IANNACE, MAURO. "COGARCH processes: theory and asymptotics for the pseudo-maximum likelihood estimator." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/55528.
Fauske, Johannes. "An empirical study of the maximum pseudo-likelihood for discrete Markov random fields." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9949.
In this text we will look at two parameter estimation methods for Markov random fields on a lattice. They are maximum pseudo-likelihood estimation and maximum general pseudo-likelihood estimation, which we abbreviate MPLE and MGPLE. The idea behind them is that by maximizing an approximation of the likelihood function, we avoid computing cumbersome normalising constants. In MPLE we maximize the product of the conditional distributions of each variable given all the other variables. In MGPLE we use a compromise between the pseudo-likelihood and the likelihood function as the approximation. We evaluate and compare the performance of MPLE and MGPLE on three different spatial models, from which we have generated observations. We are especially interested in what happens to the quality of the estimates when the number of observations increases. The models we use are the Ising model, the extended Ising model and the Sisim model. All the random variables in the models have two possible states, black or white. For the Ising and extended Ising models we have one and three parameters respectively. For Sisim we have 13 parameters. The quality of both methods improves as the number of observations grows, and MGPLE gives better results than MPLE. However, certain parameter combinations of the extended Ising model give worse results.
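To make the MPLE idea above concrete, here is a minimal illustrative sketch (not code from the thesis) for a single-parameter Ising model with states coded as -1/+1; the 4-neighbour lattice, free boundaries and the optimisation bounds are assumptions chosen only for the example.

```python
# A minimal sketch of maximum pseudo-likelihood estimation for a one-parameter
# Ising model on a lattice with states coded as -1/+1 (illustrative assumptions only).
import numpy as np
from scipy.optimize import minimize_scalar

def neighbor_sum(x):
    """Sum of the four nearest neighbors at every site (free boundaries)."""
    s = np.zeros_like(x, dtype=float)
    s[1:, :] += x[:-1, :]
    s[:-1, :] += x[1:, :]
    s[:, 1:] += x[:, :-1]
    s[:, :-1] += x[:, 1:]
    return s

def neg_log_pseudo_likelihood(beta, x):
    """-log prod_s P(x_s | neighbors), with P(x_s = +1 | .) = logistic(2*beta*neighbor_sum)."""
    eta = 2.0 * beta * neighbor_sum(x) * x       # x_s times its local field
    return np.sum(np.logaddexp(0.0, -eta))       # sum of -log(sigmoid(eta))

# x: an observed -1/+1 lattice, e.g. x = np.sign(np.random.rand(64, 64) - 0.5)
# beta_hat = minimize_scalar(neg_log_pseudo_likelihood, args=(x,),
#                            bounds=(0.0, 2.0), method="bounded").x
```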
Campos, Fábio Alexandre. "Estimação de elasticidades constantes : deveremos logaritmizar?" Master's thesis, Instituto Superior de Economia e Gestão, 2011. http://hdl.handle.net/10400.5/10297.
Economists have long ignored the implications of Jensen's inequality. In the estimation of non-linear economic models, the usual practice is to log-linearize the model. For this procedure to be valid it is necessary to adopt a set of assumptions which turn out to be very restrictive. This work follows closely the approach of Santos Silva and Tenreyro (2006) and seeks to analyze the implications of estimating constant elasticities from the non-linear model and from its linear equivalent. These implications are considered from both a theoretical and an empirical point of view. From the theoretical point of view, it is demonstrated that the practice of estimating linearized models can lead to biased estimates. The empirical application, on the other hand, does not lead to such an assertive conclusion. Moreover, the complexity of the estimation methods for non-linear models makes their use less attractive compared with OLS. Nevertheless, the theoretical reasons are strong enough to conclude that the model should not be taken in logarithmic form. Ultimately this decision belongs to the user; should the user decide to apply the logarithmic form, the respective implications should be taken into account, all available specification tests should be performed, and the estimates obtained should be interpreted and analyzed with caution.
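As a hedged illustration of the point made by Santos Silva and Tenreyro (2006) and discussed above, the following sketch (entirely simulated, not part of the dissertation) shows how log-linearized OLS can be biased under heteroskedastic multiplicative errors while a Poisson pseudo-maximum-likelihood (PPML) regression recovers the constant elasticity; the data-generating process is an assumption made only for the example.

```python
# Simulated comparison of log-linearized OLS and PPML for a constant-elasticity model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, beta = 50_000, 1.0                                   # true elasticity = 1
x = rng.lognormal(size=n)
mu = np.exp(0.5 + beta * np.log(x))
# multiplicative error with E[error | x] = 1 but variance increasing in x
err = rng.lognormal(mean=-0.5 * np.log(1 + x), sigma=np.sqrt(np.log(1 + x)))
y = mu * err

X = sm.add_constant(np.log(x))
ols = sm.OLS(np.log(y), X).fit()                        # log-linearized model
ppml = sm.GLM(y, X, family=sm.families.Poisson()).fit() # PPML on levels
print(ols.params[1], ppml.params[1])                    # OLS drifts from 1, PPML stays near 1
```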
Jin, Shaobo. "Essays on Estimation Methods for Factor Models and Structural Equation Models." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-247292.
GHOLAMI, MAHDI. "Essays in Applied Economics: Disease Outbreaks and Gravity Model Approach to Bovines movement network in Italy." Doctoral thesis, Università di Siena, 2017. http://hdl.handle.net/11365/1005912.
Nora, Elisabete da Conceição Pires de Almeida. "Sistema de Bonus-Malus para frotas de veículos." Master's thesis, Instituto Superior de Economia e Gestão, 2004. http://hdl.handle.net/10400.5/686.
The purpose of this thesis is to provide a Bonus-Malus system for fleets of vehicles based on the history of claims, using the individual characteristics of both the vehicles and the carriers. Bonus-malus coefficients are obtained from vehicle-specific and fleet-specific credibilities. The coefficients take into account an expected turnover for the vehicles within the fleets; the term "turnover" denotes the percentage of vehicles within the fleet that, by assumption, may rotate, that is, we allow for the possibility of vehicles entering and leaving the fleet. Indexing the fleets by f = 1,...,F and the vehicles by i = 1,...,m_f, where m_f is the size (the number of vehicles) of fleet f, and assuming that the number of claims N_fi ~ P(λ_fi) follows a Poisson distribution with parameter λ_fi = d_fi·exp(x_f′β + z_fi′γ), the parameter is a function of rating factors observed at the fleet level (x_f) and at the vehicle level (z_fi), with d_fi the duration of the observation period for vehicle i in fleet f. We obtain the estimators of β and γ using pseudo maximum likelihood and the method proposed by Mexia/Corte Real, which is based on extremal estimators, for a set of Portuguese data covering the period from November 1997 to January 2003. Some conclusions are drawn regarding the data analyzed.
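A minimal sketch of the kind of Poisson claim-count regression described above, with the observation duration entering as an exposure; the data, factor names and coefficient values are hypothetical, and the plain Poisson GLM stands in for the thesis' pseudo-maximum-likelihood and extremal estimators.

```python
# Hypothetical claim counts N_fi ~ Poisson(d_fi * exp(x_f*beta + z_fi*gamma)),
# estimated by Poisson pseudo-maximum likelihood with duration as exposure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
x_f = rng.normal(size=n)            # fleet-level rating factor (hypothetical)
z_fi = rng.normal(size=n)           # vehicle-level rating factor (hypothetical)
d_fi = rng.uniform(0.25, 1.0, n)    # duration of observation, in years
lam = d_fi * np.exp(-2.0 + 0.3 * x_f + 0.5 * z_fi)
claims = rng.poisson(lam)

X = sm.add_constant(np.column_stack([x_f, z_fi]))
fit = sm.GLM(claims, X, family=sm.families.Poisson(), exposure=d_fi).fit()
print(fit.params)                   # estimates of (intercept, beta, gamma)
```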
Ribeiro, Patrick de Matos [Verfasser], Martin [Akademischer Betreuer] Wagner, and Walter [Gutachter] Krämer. "Pseudo maximum likelihood estimation of cointegrated multiple frequency I(1) VARMA processes using the state space framework / Patrick de Matos Ribeiro ; Gutachter: Walter Krämer ; Betreuer: Martin Wagner." Dortmund : Universitätsbibliothek Dortmund, 2020. http://d-nb.info/1229193693/34.
Carrasco, Jalmar Manuel Farfan. "Modelos de regressão beta com erro nas variáveis." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-15082012-093632/.
In this thesis, we propose a beta regression model with measurement error. Among nonlinear models with measurement error, such a model has not been studied extensively. Here, we discuss estimation methods such as maximum likelihood, pseudo-maximum likelihood, and regression calibration. The maximum likelihood method estimates parameters by directly maximizing the logarithm of the likelihood function. The pseudo-maximum likelihood method is used when the inference in a given model involves only some but not all parameters; hence, we say that the model under study presents parameters of interest as well as nuisance parameters. When we replace the unobserved true covariate with conditional estimates of it given the observed variable, the method is known as regression calibration. We compare the aforementioned estimation methods through a Monte Carlo simulation study. This simulation study shows that the maximum likelihood and pseudo-maximum likelihood methods perform better than the regression calibration method and the naïve approach. We use the programming language Ox (Doornik, 2011) as a computational tool. We derive the asymptotic distribution of the estimators in order to calculate confidence intervals and test hypotheses, as proposed by Carroll et al. (2006, Section A.6.6), Guolo (2011) and Gong and Samaniego (1981). Moreover, we use the likelihood ratio and gradient statistics to test hypotheses, and we carry out a simulation study to evaluate the performance of these tests. We develop diagnostic tools for the beta regression model with measurement error. We propose weighted standardized residuals as defined by Espinheira (2008) to verify the assumptions made for the model and to detect outliers. Measures of global influence, such as the generalized Cook's distance and the likelihood distance, are used to detect influential points. In addition, we use the conformal approach for evaluating local influence under three perturbation schemes: case-weight perturbation, response variable perturbation, and perturbation in the covariate with and without measurement error. We apply our results to two sets of real data to illustrate the theory developed. Finally, we present our conclusions and possible future work.
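The regression-calibration idea mentioned above can be sketched as follows; this is not the thesis' Ox code, and the normality assumption, the known error variance and the logit mean link are simplifications made only for the example.

```python
# Regression calibration for a beta regression with a covariate measured with error:
# the noisy measurement w is replaced by E[x | w] before maximizing the beta likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def beta_reg_nll(theta, y, x):
    """Negative log-likelihood of a beta regression with logit mean link and precision phi."""
    b0, b1, log_phi = theta
    mu, phi = expit(b0 + b1 * x), np.exp(log_phi)
    a, b = mu * phi, (1 - mu) * phi
    return -np.sum(gammaln(a + b) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log1p(-y))

rng = np.random.default_rng(2)
n, s2x, s2u = 2_000, 1.0, 0.5
x = rng.normal(0.0, np.sqrt(s2x), n)            # true (unobserved) covariate
w = x + rng.normal(0.0, np.sqrt(s2u), n)        # observed covariate with measurement error
mu = expit(-0.5 + 1.0 * x)
y = rng.beta(mu * 30, (1 - mu) * 30)

x_cal = (s2x / (s2x + s2u)) * w                 # E[x | w] under normality, known variances
naive = minimize(beta_reg_nll, (0.0, 0.0, 1.0), args=(y, w)).x
calib = minimize(beta_reg_nll, (0.0, 0.0, 1.0), args=(y, x_cal)).x
print(naive[:2], calib[:2])                     # naive slope attenuated, calibrated closer to 1
```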
Obara, Tiphaine. "Modélisation de l’hétérogénéité tumorale par processus de branchement : cas du glioblastome." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0186/document.
The latest advances in cancer research are paving the way to better treatments. However, some tumors such as glioblastomas remain among the most aggressive and difficult to treat. The cause of this resistance could be a sub-population of cells with characteristics common to stem cells. Many mathematical and numerical models of tumor growth already exist, but few take tumor heterogeneity into account; it is now a real challenge. This thesis focuses on the dynamics of the different cell subpopulations in glioblastoma. It involves the development of a mathematical model of tumor growth based on a multitype, age-dependent branching process. This model makes it possible to integrate cellular heterogeneity. Numerical simulations reproduce the evolution of the different types of cells and simulate the action of several therapeutic strategies. A parameter estimation method based on pseudo-maximum likelihood has been developed. This approach is an alternative to maximum likelihood in the case where the sample distribution is unknown. Finally, we present the biological experiments that have been carried out in order to validate the numerical model.
Oberhofer, Harald, and Michael Pfaffermayr. "Estimating the Trade and Welfare Effects of Brexit: A Panel Data Structural Gravity Model." WU Vienna University of Economics and Business, 2018. http://epub.wu.ac.at/6020/1/wp259.pdf.
Series: Department of Economics Working Paper Series
Marchildon, Miguel. "An Application of the Gravity Model to International Trade in Narcotics." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37258.
Full textLevada, Alexandre Luis Magalhães. "Combinação de modelos de campos aleatórios markovianos para classificação contextual de imagens multiespectrais." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-11052010-165642/.
This work presents a novel MAP-MRF approach for multispectral image contextual classification by combining higher-order Markov random field models. The statistical modeling follows the Bayesian paradigm, with the definition of a multispectral Gaussian Markov random field model for the observations and a Potts MRF model to represent the a priori knowledge. In this scenario, the Potts MRF model parameter (β) plays the role of a regularization parameter by controlling the tradeoff between the likelihood and the prior knowledge, so that a suitable tuning of this parameter is required for good performance in contextual classification. The introduction of higher-order MRF models requires the specification of novel parameter estimation methods. One of the contributions of this work is the definition of novel pseudo-likelihood equations for the estimation of these MRF parameters in second- and third-order neighborhood systems. Despite its wide use in practical MRF applications, little is known about the accuracy of the maximum pseudo-likelihood approach. Approximations for the asymptotic variance of the proposed MPL estimators were derived, completely characterizing their behavior in the limiting case and allowing statistical inference and quantitative analysis. With the statistical modeling in place and the model parameters estimated, the next step is the multispectral image classification. The solution to this Bayesian inference problem is given by the MAP criterion, where the optimal solution is obtained by maximizing the a posteriori distribution, defining an optimization problem. As there is no analytical solution to this problem in the case of Markovian priors, combinatorial optimization algorithms are required to approximate the optimal solution. In this work, we use three suboptimal methods: Iterated Conditional Modes, Maximizer of the Posterior Marginals and the Game Strategy Approach, a variant based on non-cooperative game theory. However, it has been shown that these methods converge to local maxima, since they are extremely dependent on the initial condition. This fact motivated the development of a novel approach for the combination of contextual classifiers that makes use of multiple initializations at the same time, where each of these initial conditions is provided by a different pointwise pattern classifier. The proposed methodology defines a robust MAP-MRF framework for the solution of general inverse problems, since it allows the use and integration of several initial conditions in a variety of applications such as image classification, denoising and restoration. To evaluate the classification results, two statistical measures are used to verify the agreement between the classifier output and the ground truth: Cohen's Kappa and Kendall's Tau coefficient. The obtained results show that the use of higher-order neighborhood systems can significantly improve not only the classification performance but also the MRF parameter estimation, by reducing both the estimation error and the asymptotic variance. Additionally, the combination of contextual classifiers through the use of multiple initializations also improves the classification performance when compared with the traditional single-initialization approach.
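One of the suboptimal optimizers mentioned above, Iterated Conditional Modes, can be sketched as follows under simplifying assumptions (a fixed, known β, a 4-neighbourhood Potts prior and precomputed pointwise class log-likelihoods); this is an illustrative sketch, not the thesis implementation.

```python
# Iterated Conditional Modes (ICM) with a Potts prior on a 4-neighbourhood lattice.
import numpy as np

def icm(log_lik, beta, n_iter=10):
    """log_lik: (H, W, K) pointwise class log-likelihoods; returns an (H, W) label map."""
    labels = log_lik.argmax(axis=2)                  # pointwise (non-contextual) initialization
    H, W, K = log_lik.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                nbrs = np.array([labels[a, b]
                                 for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                                 if 0 <= a < H and 0 <= b < W])
                # Potts prior: reward agreement with each neighbour by beta
                score = log_lik[i, j] + beta * np.array([np.sum(nbrs == k) for k in range(K)])
                labels[i, j] = int(score.argmax())
    return labels
```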
Inês, Mónica Sofia Inácio Duarte. "Econometric analysis of private medicines expenditure in Portugal." Master's thesis, Instituto Superior de Economia e Gestão, 2007. http://hdl.handle.net/10400.5/653.
The Portuguese National Health Service states that access to health care should depend mainly on need. Conditional on need, access to pharmaceuticals should not depend on socio-economic factors such as income, social class or education, or on geographical factors such as access to pharmacies. This study uses data from the last two waves of the National Health Survey (1995/1996 and 1998/1999) and focuses on equity issues, testing for the existence of insurance-related, income-related and pharmacy-density-related inequalities. A two-part model was adopted. To model the probability of private medicines expenditure occurring, a modified logit model was specified, accounting for the double nature of the zeros of the dependent variable and for asymmetry. In the second part a Poisson pseudo maximum likelihood estimator was adopted. No misspecification was detected in the two-part model. The main results show inequity in Portuguese private medicines expenditure with respect to supplementary health insurance (private and job-related), income and pharmacy density.
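A hedged sketch of a two-part model of this kind on simulated data: a plain logit stands in for the modified logit of the study, and a Poisson pseudo-maximum-likelihood regression models expenditure among spenders; all variable names and coefficients are hypothetical.

```python
# Two-part model: logit for P(expenditure > 0), Poisson PML for positive expenditure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 10_000
income = rng.lognormal(size=n)
X = sm.add_constant(np.log(income))

p_any = 1 / (1 + np.exp(-(-0.5 + 0.4 * np.log(income))))     # P(any expenditure)
any_spend = rng.binomial(1, p_any)
spend = any_spend * rng.gamma(2.0, np.exp(1.0 + 0.6 * np.log(income)) / 2.0)

part1 = sm.Logit(any_spend, X).fit(disp=0)                    # participation equation
pos = spend > 0
part2 = sm.GLM(spend[pos], X[pos], family=sm.families.Poisson()).fit()  # PPML on spenders
print(part1.params, part2.params)
```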
Krisztin, Tamás, and Manfred M. Fischer. "The gravity model for international trade: Specification and estimation issues in the prevalence of zero flows." WU Vienna University of Economics and Business, 2014. http://epub.wu.ac.at/4453/3/TheGravityModelForInternationalTrade2.pdf.
Series: Working Papers in Regional Science
Lovreta, Lidija. "Structural Credit Risk Models: Estimation and Applications." Doctoral thesis, Universitat Ramon Llull, 2010. http://hdl.handle.net/10803/9180.
Full textEl primer capítol, estudia la velocitat distinta amb què el mercat d'accions i el mercat de CDS incorporen nova informació sobre el risc de crèdit. L'anàlisi se centra a respondre dues preguntes clau: quin d'aquests mercats genera una informació més precisa sobre el risc de crèdit i quins factors determinen el diferent contingut informatiu dels indicadors respectius de risc, és a dir, les primes de crèdit implícites en el mercat d'accions enfront del de CDS. La base de dades utilitzada inclou 94 empreses (40 d'europees, 32 de nordamericanes i 22 de japoneses) durant el període 2002-2004. Entre les conclusions principals destaquen la naturalesa dinàmica del procés de price discovery, una interconnexió més gran entre ambdós mercats i un major domini informatiu del mercat d'accions, associat a uns nivells més elevats del risc de crèdit, i, finalment, una probabilitat més gran de lideratge informatiu del mercat de CDS en els períodes d'estrès creditici.
El segon capítol se centra en el problema de l'estimació de les variables latents en els models estructurals. Es proposa una nova metodologia, que consisteix en un algoritme iteratiu aplicat a la funció de versemblança per a la sèrie temporal del preu de les accions. El mètode genera estimadors de pseudomàxima versemblança per al valor, la volatilitat i el retorn que s'espera obtenir dels actius de l'empresa. Es demostra empíricament que aquest nou mètode produeix, en tots els casos, valors raonables del punt de fallida. A més, aquest mètode és contrastat d'acord amb les primes de CDS generades. S'observa que, en comparació amb altres alternatives per fixar el punt de fallida (màxima versemblança estàndard, barrera endògena, punt d'impagament de KMV i nominal del deute), l'estimació per pseudomàxima versemblança proporciona menys divergències.
El tercer i darrer capítol de la tesi tracta la qüestió relativa a components distints del risc de crèdit a la prima dels CDS. Més concretament, estudia l'efecte del desequilibri entre l'oferta i la demanda, un aspecte important en un mercat on el nombre de compradors (de protecció) supera habitualment el de venedors. La base de dades cobreix, en aquest cas, 163 empreses en total (92 d'europees i 71 de nord-americanes) per al període 2002- 2008. Es demostra que el desequilibri entre l'oferta i la demanda té, efectivament, un paper important a l'hora d'explicar els moviments a curt termini en els CDS. La influència d'aquest desequilibri es detecta després de controlar l'efecte de variables fonamentals vinculades al risc de crèdit, i és més gran durant els períodes d'estrès creditici. Aquests resultats il·lustren que les primes dels CDS reflecteixen no tan sols el cost de la protecció, sinó també el cost anticipat per part dels venedors d'aquesta protecció per tancar la posició adquirida.
El riesgo de crédito se asocia al potencial incumplimiento por parte de los acreedores respecto de sus obligaciones de pago. En este sentido, el principal interés de las instituciones financieras es medir y gestionar con precisión dicho riesgo desde un punto de vista cuantitativo. Con objeto de responder a este interés, la presente tesis doctoral titulada "Structural Credit Risk Models: Estimation and Applications", se centra en el uso práctico de los denominados "Modelos Estructurales de Riesgo de Crédito". Estos modelos se caracterizan por establecer una conexión explícita entre el riesgo de crédito y diversas variables fundamentales, permitiendo de este modo un amplio abanico de aplicaciones. Para ser más explícitos, la presente tesis explora el contenido informativo tanto del mercado de acciones como del mercado de CDS sobre la base de los mencionados modelos estructurales.
El primer capítulo de la tesis estudia la distinta velocidad con la que el mercado de acciones y el mercado de CDS incorporan nueva información sobre el riesgo de crédito. El análisis se centra en contestar dos preguntas clave: cuál de estos mercados genera información más precisa sobre el riesgo de crédito, y qué factores determinan en distinto contenido informativo de los respectivos indicadores de riesgo, esto es, primas de crédito implícitas en el mercado de acciones frente a CDS. La base de datos utilizada engloba a 94 compañías (40 europeas, 32 Norteamericanas y 22 japonesas) durante el periodo 2002-2004. Entre las principales conclusiones destacan la naturaleza dinámica del proceso de price discovery, la mayor interconexión entre ambos mercados y el mayor dominio informativo del mercado de acciones asociados a mayores niveles del riesgo de crédito, y finalmente la mayor probabilidad de liderazgo informativo del mercado de CDS en los periodos de estrés crediticio.
El segundo capítulo se centra en el problema de estimación de variables latentes en modelos estructurales. Se propone una nueva metodología consistente en un algoritmo iterativo aplicado a la función de verosimilitud para la serie temporal del precio de las acciones. El método genera estimadores pseudo máximo verosímiles para el valor, volatilidad y retorno esperado de los activos de la compañía. Se demuestra empíricamente que este nuevo método produce en todos los casos valores razonables del punto de quiebra. El método es además contrastado en base a las primas de CDS generadas. Se observa que, en comparación con otras alternativas para fijar el punto de quiebra (máxima verosimilitud estándar, barrera endógena, punto de impago de KMV, y nominal de la deuda), la estimación por pseudo máxima verosimilitud da lugar a las menores divergencias.
El tercer y último capítulo de la tesis aborda la cuestión relativa a componentes distintos al riesgo de crédito en la prima de los CDS. Se estudia más concretamente el efecto del desequilibrio entre oferta y demanda, un aspecto importante en un mercado donde el número de compradores (de protección) supera habitualmente al de vendedores. La base de datos cubre en este caso un total de 163 compañías (92 europeas y 71 norteamericanas) para el periodo 2002-2008. Se demuestra que el desequilibrio entre oferta y demanda tiene efectivamente un papel importante a la hora de explicar los movimientos de corto plazo en los CDS. La influencia de este desequilibrio se detecta una vez controlado el efecto de variables fundamentales ligadas al riesgo de crédito, y es mayor durante los periodos de estrés crediticio. Estos resultados ilustran que las primas de los CDS reflejan no sólo el coste de la protección, sino el coste anticipado por parte de los vendedores de tal protección de cerrar la posición adquirida.
Credit risk is associated with the potential failure of borrowers to fulfil their obligations. In that sense, the main interest of financial institutions is to measure and manage credit risk accurately on a quantitative basis. With the intention of responding to this task, this doctoral thesis, entitled "Structural Credit Risk Models: Estimation and Applications", focuses on the practical usefulness of structural credit risk models, which are characterized by an explicit link with economic fundamentals and consequently allow for a broad range of applications. More specifically, the thesis explores the information on credit risk embodied in the stock market and in the market for credit derivatives (the CDS market) on the basis of structural credit risk models. The issue addressed in the first chapter is the relative informational content of the stock and CDS markets in terms of credit risk. The analysis focuses on answering two crucial questions: which of these markets provides more timely information regarding credit risk, and what factors influence the informational content of credit risk indicators (i.e. stock market implied credit spreads and CDS spreads). The data set encompasses an international sample of 94 companies (40 European, 32 US and 22 Japanese) during the period 2002-2004. The main conclusions uncover the time-varying behaviour of credit risk discovery, a stronger cross-market relationship and stock market leadership at higher levels of credit risk, as well as a positive relationship between the frequency of severe credit deterioration shocks and the probability of CDS market leadership.
The second chapter concentrates on the problem of estimating the latent parameters of structural models. It proposes a new, maximum likelihood based iterative algorithm which, on the basis of the log-likelihood function for the time series of equity prices, provides pseudo maximum likelihood estimates of the default barrier and of the value, volatility, and expected return on the firm's assets. The procedure allows for credit risk estimation based only on readily available information from the stock market and is empirically tested in terms of CDS spread estimation. It is demonstrated empirically that, contrary to the standard ML approach, the proposed method ensures that the default barrier always falls within reasonable bounds. Moreover, theoretical credit spreads based on pseudo ML estimates offer the lowest credit default swap pricing errors when compared with the other options usually considered when determining the default barrier: the standard ML estimate, the endogenous value, KMV's default point, and the principal value of debt.
The final, third chapter of the thesis provides further evidence on the performance of the proposed pseudo maximum likelihood procedure and addresses the presence of a non-default component in CDS spreads. Specifically, it analyzes the effect of demand-supply imbalance, an important aspect of liquidity in a market where the number of buyers frequently outstrips the number of sellers. The data set is largely extended, covering 163 non-financial companies (92 European and 71 North American) and the period 2002-2008. In a nutshell, after controlling for the fundamentals reflected in theoretical, stock market implied credit spreads, demand-supply imbalance factors turn out to be important in explaining short-run CDS movements, especially during structural breaks. The results illustrate that CDS spreads reflect not only the price of credit protection, but also a premium for the anticipated cost of unwinding the position of protection sellers.
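The equity-based estimation problem described in the second chapter can be illustrated with the standard iterative scheme on which such pseudo-ML procedures build; in this sketch the default barrier is fixed at the face value of debt D (an assumption, whereas the thesis estimates the barrier), and the inputs D, r and T are taken as given.

```python
# Standard iterative estimation of Merton-model asset value and volatility from equity prices.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_equity(V, D, r, T, sigma):
    """Merton: equity as a call on firm assets V with strike D and maturity T."""
    d1 = (np.log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)

def iterate_asset_vol(equity, D, r, T, dt=1 / 252, tol=1e-6, max_iter=100):
    sigma = np.std(np.diff(np.log(equity))) / np.sqrt(dt)   # start from equity volatility
    for _ in range(max_iter):
        # invert the pricing equation period by period to recover implied asset values
        V = np.array([brentq(lambda v: bs_equity(v, D, r, T, sigma) - e, e, e + 50 * D)
                      for e in equity])
        new_sigma = np.std(np.diff(np.log(V))) / np.sqrt(dt)
        if abs(new_sigma - sigma) < tol:
            break
        sigma = new_sigma
    return V, sigma

# usage (hypothetical inputs): V, sigma_V = iterate_asset_vol(equity_prices, D=1.0e9, r=0.02, T=1.0)
```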
Bosquet, Clément. "Commerce international et économie de la science : distances, agglomération, effets de pairs et discrimination." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM1097/document.
The core of this thesis lies in the field of the economics of science, to which the first two parts are devoted. The first part questions the impact of methodological choices on the measurement of research productivity and studies the channels of knowledge diffusion. The second part studies the impact of both individual and departmental characteristics on individual publication records and analyses the gender gap in occupations on the academic labour market. The main results are the following: methodological choices in the measurement of research productivity do not affect the estimated hierarchy of research institutions; citations and journal quality weights measure the same dimension of publication productivity; and location matters in academic research activity, with some departments generating more externalities than others. Externalities are higher where academics are homogeneous in terms of publication performance and have diverse research fields, and, to a lesser extent, if the department is large, with more women, older academics, stars and co-author connections to foreign departments. If women are less likely to be full Professors (relative to Assistant Professors) than men, this is neither because they are discriminated against in the promotion process, nor because the promotion cost (mobility) is higher for them, nor because they have different preferences for salaries versus department prestige. A possible, but not tested, explanation is that women self-select by participating less in, or exerting lower effort during, the promotion process.
Coucke, Alice. "Statistical modeling of protein sequences beyond structural prediction : high dimensional inference with correlated data." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEE034/document.
Over the last decades, genomic databases have grown exponentially in size thanks to the constant progress of modern DNA sequencing. A large variety of statistical tools have been developed, at the interface between bioinformatics, machine learning, and statistical physics, to extract information from these ever-increasing datasets. In the specific context of protein sequence data, several approaches have recently been introduced by statistical physicists, such as direct-coupling analysis, a global statistical inference method based on the maximum-entropy principle, which has proven extremely effective in predicting the three-dimensional structure of proteins from purely statistical considerations. In this dissertation, we review the relevant inference methods and, encouraged by their success, discuss their extension to other challenging fields, such as sequence folding prediction and homology detection. Contrary to residue-residue contact prediction, which relies on intrinsically topological information about the network of interactions, these fields require global energetic considerations and therefore a more quantitative and detailed model. Through an extensive study on both artificial and biological data, we provide a better interpretation of the central inferred parameters, up to now poorly understood, especially in the limited-sampling regime. Finally, we present a new and more precise procedure for the inference of generative models, which leads to further improvements on real, finitely sampled data.
Koch, Erwan. "Outils et modèles pour l'étude de quelques risques spatiaux et en réseaux : application aux extrêmes climatiques et à la contagion en finance." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10138/document.
This thesis aims at developing tools and models that are relevant for the study of some spatial risks and risks in networks. The thesis is divided into five chapters. The first one is a general introduction containing the state of the art related to each study as well as the main results. Chapter 2 develops a new multi-site precipitation generator. It is crucial to have models able to produce statistically realistic precipitation series. Whereas models previously introduced in the literature deal with daily precipitation, we develop an hourly model. The latter involves only one equation and thus introduces dependence between occurrence and intensity, whereas the aforementioned literature assumes that these processes are independent. Our model contains a common factor taking large-scale atmospheric conditions into account and a multivariate autoregressive contagion term accounting for the local propagation of rainfall. Despite its relative simplicity, this model shows an impressive ability to reproduce real intensities, lengths of dry periods as well as the spatial dependence structure. In Chapter 3, we propose an estimation method for max-stable processes based on simulated likelihood techniques. Max-stable processes are ideally suited for the statistical modeling of spatial extremes, but their inference is difficult: the multivariate density function is not available, and thus standard likelihood-based estimation methods cannot be applied. Under appropriate assumptions, our estimator is efficient as both the temporal dimension and the number of simulation draws tend towards infinity. This simulation-based approach can be used for many classes of max-stable processes and can provide better results than composite-likelihood methods, especially in the case where only a few temporal observations are available and the spatial dependence is high.
Rodrigues, Agatha Sacramento. "Regressão logística com erro de medida: comparação de métodos de estimação." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-23082013-172348/.
We study the logistic model when explanatory variables are measured with error. Three estimation methods are presented, namely maximum pseudo-likelihood obtained through a Monte Carlo expectation-maximization type algorithm, regression calibration, and SIMEX, together with the naïve approach, which ignores the measurement error. These methods are compared through simulation. From the estimation point of view, we compare the different methods by evaluating their biases and root mean square errors. The predictive quality of the methods is evaluated based on sensitivity, specificity, positive and negative predictive values, accuracy and the Kolmogorov-Smirnov statistic. The simulation studies show that the best performing method is maximum pseudo-likelihood when the objective is to estimate the parameters. There is no difference among the estimation methods for predictive purposes. The results are illustrated with two real data sets from different application areas: a medical one, where the goal is the estimation of the odds ratio, and a financial one, where the goal is the prediction of new observations.
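A hedged sketch (simulated data, not the thesis code) of the SIMEX idea compared in the study: the naive logistic fit is repeated with extra measurement error added at several levels λ, and the slope is extrapolated back to λ = -1; the quadratic extrapolant and the known error variance are assumptions of the example.

```python
# SIMEX for a logistic regression with one covariate measured with classical error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, s2u = 5_000, 0.5
x = rng.normal(size=n)                              # true covariate (unobserved)
w = x + rng.normal(0, np.sqrt(s2u), n)              # observed covariate with error
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.3 + 1.0 * x))))

def naive_slope(cov):
    return sm.Logit(y, sm.add_constant(cov)).fit(disp=0).params[1]

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([naive_slope(w + rng.normal(0, np.sqrt(lam * s2u), n))
                   for _ in range(20)]) for lam in lambdas]
poly = np.polyfit(lambdas, slopes, 2)               # quadratic extrapolant in lambda
simex_slope = np.polyval(poly, -1.0)                # extrapolate to lambda = -1
print(slopes[0], simex_slope)                       # naive (attenuated) vs SIMEX estimate
```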
Grazian, Clara. "Contributions aux méthodes bayésiennes approchées pour modèles complexes." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED001.
Recently, the great complexity of modern applications, for instance in genetics, computer science, finance, climatic science, etc., has led to the proposal of new models which may realistically describe reality. In these cases, classical MCMC methods fail to approximate the posterior distribution, because they are too slow to explore the full parameter space. New algorithms have been proposed to handle these situations, where the likelihood function is unavailable. We investigate many features of complex models: how to eliminate the nuisance parameters from the analysis and make inference on key quantities of interest, both in a Bayesian and in a non-Bayesian setting, and how to build a reference prior.
McGarry, Gregory John. "Model-based mammographic image analysis." Thesis, Queensland University of Technology, 2002.
Find full textMoreno, Betancur Margarita. "Regression modeling with missing outcomes : competing risks and longitudinal data." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA11T076/document.
Missing data are a common occurrence in medical studies. In regression modeling, missing outcomes limit our capability to draw inferences about the covariate effects of medical interest, which are those describing the distribution of the entire set of planned outcomes. In addition to losing precision, the validity of any method used to draw inferences from the observed data will require that some assumption about the mechanism leading to missing outcomes holds. Rubin (1976, Biometrika, 63:581-592) called the missingness mechanism MAR (for "missing at random") if the probability of an outcome being missing does not depend on missing outcomes when conditioning on the observed data, and MNAR (for "missing not at random") otherwise. This distinction has important implications regarding the modeling requirements to draw valid inferences from the available data, but generally it is not possible to assess from these data whether the missingness mechanism is MAR or MNAR. Hence, sensitivity analyses should be routinely performed to assess the robustness of inferences to assumptions about the missingness mechanism. In the field of incomplete multivariate data, in which the outcomes are gathered in a vector for which some components may be missing, MAR methods are widely available and increasingly used, and several MNAR modeling strategies have also been proposed. On the other hand, although some sensitivity analysis methodology has been developed, this is still an active area of research. The first aim of this dissertation was to develop a sensitivity analysis approach for continuous longitudinal data with drop-outs, that is, continuous outcomes that are ordered in time and completely observed for each individual up to a certain time-point, at which the individual drops out so that all the subsequent outcomes are missing. The proposed approach consists in assessing the inferences obtained across a family of MNAR pattern-mixture models indexed by a so-called sensitivity parameter that quantifies the departure from MAR. The approach was prompted by a randomized clinical trial investigating the benefits of a treatment for sleep-maintenance insomnia, from which 22% of the individuals had dropped out before the study end. The second aim was to build on the existing theory for incomplete multivariate data to develop methods for competing risks data with missing causes of failure. The competing risks model is an extension of the standard survival analysis model in which failures from different causes are distinguished. Strategies for modeling competing risks functionals, such as the cause-specific hazards (CSH) and the cumulative incidence function (CIF), generally assume that the cause of failure is known for all patients, but this is not always the case. Some methods for regression with missing causes under the MAR assumption have already been proposed, especially for semi-parametric modeling of the CSH, but other useful models have received little attention, and MNAR modeling and sensitivity analysis approaches have never been considered in this setting. We propose a general framework for semi-parametric regression modeling of the CIF under MAR using inverse probability weighting and multiple imputation ideas. Also under MAR, we propose a direct likelihood approach for parametric regression modeling of the CSH and the CIF. Furthermore, we consider MNAR pattern-mixture models in the context of sensitivity analyses.
In the competing risks literature, a starting point for methodological developments for handling missing causes was a stage II breast cancer randomized clinical trial in which 23% of the deceased women had a missing cause of death. We use these data to illustrate the practical value of the proposed approaches.
Shih, Chu-Fu, and 石儲輔. "Pseudo Maximum Likelihood in Hidden Markov Model." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/88961318150457809796.
National Taiwan University
Institute of Applied Mathematical Sciences
Academic year 104 (2015–2016)
Hidden Markov models are a fundamental tool in applied statistics, econometrics, and machine learning for treating data taken from multiple subpopulations. When the sequence of observations is from a discrete-time, finite-state hidden Markov model, the current practice for estimating the parameters of such models relies on local search heuristics such as the EM algorithm. A new method, named the pairing method, is proposed to serve as an initial estimate of the transition matrix and parameters in hidden Markov models. Under regularity conditions, it can be shown that EM leads to the maximum likelihood estimator given a suitable initial estimate; however, there is no established method for finding suitable initial points in hidden Markov models. The pairing method can provide a good initial parameter estimate, which can expedite EM in terms of computing time. When the underlying state transition matrix is not taken into consideration, the marginal distribution is a mixture distribution, and only limited information on the state transition matrix is kept for inference. In order to recover the full information on the transition matrix contained in the data, we exploit characteristics of stochastic matrices by enlarging the Markov chain to recover the information governing the dynamics of the transition matrix. Consistent and asymptotically normal estimators of the hidden transition matrix are provided.
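For concreteness, the scaled forward recursion below evaluates the likelihood that an EM/maximum-likelihood procedure for such a hidden Markov model would maximize; the Gaussian emissions and the example parameter values are assumptions, and this is not the thesis' pairing method.

```python
# Scaled forward recursion for the log-likelihood of a finite-state HMM with Gaussian emissions.
import numpy as np
from scipy.stats import norm

def hmm_log_likelihood(y, pi, A, means, sds):
    """y: observations; pi: initial distribution; A: row-stochastic transition matrix."""
    alpha = pi * norm.pdf(y[0], means, sds)
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()                              # scale to avoid underflow
    for obs in y[1:]:
        alpha = (alpha @ A) * norm.pdf(obs, means, sds)
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

# example (hypothetical two-state chain):
# A = np.array([[0.9, 0.1], [0.2, 0.8]]); pi = np.array([2/3, 1/3])
# means, sds = np.array([0.0, 3.0]), np.array([1.0, 1.0])
# EM would alternate E- and M-steps to increase this likelihood.
```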
ZHUO, YING-ZHEN, and 卓穎蓁. "Pseudo maximum likelihood estimation for Cox Model with Doubly Truncated Data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/33415305069710995121.
Tunghai University
Department of Statistics
Academic year 104 (2015–2016)
The partial likelihood (PL) function has mainly been used for proportional hazards models with censored data. The PL approach can also be used for analyzing left-truncated or left-truncated and right-censored data. However, when data are subject to double truncation, the PL approach no longer works due to the complexity of the risk sets. In this article, we propose a pseudo maximum likelihood approach for estimating the regression coefficients and baseline hazard function of the Cox model with doubly truncated data. We propose expectation-maximization algorithms for obtaining the pseudo maximum likelihood estimators (PMLE). The consistency property of the PMLE is established. Simulations are performed to evaluate the finite-sample performance of the PMLE. The proposed method is illustrated using an AIDS data set.
劉奕. "The Pseudo-Maximum Likelihood Estimators of Semiparametric Transformation Models with Left-truncated and Right-censored Data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/54488163699767094798.
Tunghai University
Department of Statistics
Academic year 104 (2015–2016)
In this paper, we consider the maximum likelihood estimators (MLE) of the regression coefficients and cumulative baseline hazard for the semiparametric transformation model with left-truncated and right-censored (LTRC) data, when the distribution of the truncation times is assumed to belong to a given parametric family. Based on a pseudo-likelihood involving infinite-dimensional parameters, we propose expectation-maximization algorithms for obtaining the pseudo-MLE. Simulations are performed to evaluate the finite-sample performance of the MLE.
Sotáková, Martina. "Zobecněné odhadovací rovnice (GEE)." Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-434538.
Liao, Wei-Jhih, and 廖偉志. "On Predicting the Stage of Breast Cancer- Using a Modified Two-Stage Pseudo Maximum Likelihood Estimation (MTSPMLE)." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/46565255562280707810.
National Changhua University of Education
Graduate Institute of Statistics and Information Science
Academic year 102 (2013–2014)
In Traditional Chinese Medicine (TCM), there are four diagnostic methods: inspection, listening and smelling, inquiring, and palpation. Among the commonly used methods of the "inspection" diagnosis, tongue inspection plays an irreplaceable role. In the theory of TCM diagnosis, the tongue can be viewed as a projection of the internal organs through the transmission meridians, and it reflects the health condition of the organs. Tongue diagnosis mainly focuses on the shape, the color, the number of red spots, and the coating of the tongue. Conventional Western diagnostics, by contrast, mostly rely on doctors' inquiries, blood tests, urine tests, ultrasound scans, and/or radiological examinations as instruments for diagnosing diseases. Because some patients are unwilling to accept these invasive procedures, data sets with missing observations may arise. According to the literature, approaches such as regression imputation, mean imputation, the EM algorithm and pseudo maximum likelihood, among others, are viable for dealing with a data set with missing values. In this study four methods are implemented: two-stage pseudo maximum likelihood estimation based on regression imputations of GOT on the information from tongue inspection; two-stage pseudo maximum likelihood estimation based on regression imputations of GPT on the information from tongue inspection; the conventional EM algorithm; and regression imputations using both GOT and GPT as responses and the information from tongue inspection as explanatory variables. Our goal is to predict the probabilities of the stages of the patients' breast cancer. One hundred and sixty-two sets of tongue images were taken and analyzed. Based on the analysis, we find that the two-stage pseudo maximum likelihood estimation based on the regression imputations of GOT on the information from tongue inspection outperforms the other approaches.
Lin, Sheng-Yuan, and 林聖淵. "A Hybrid DOA Estimation of Frequency Hopping Signal Source Using Pseudo Doppler Direction Finder and Maximum Likelihood Estimator." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/75853378177160337880.
National Taiwan University of Science and Technology
Department of Electrical Engineering
Academic year 103 (2014–2015)
In this thesis, we propose a blind direction-of-arrival (DOA) estimation scheme for a frequency-hopping signal source that requires no a priori information about its hopping pattern. The DOA estimation is divided into three stages: frequency-hopping signal detection, preliminary DOA estimation, and accurate DOA estimation. First, we use a low-complexity short-time Fourier transform (STFT) time-frequency analysis to detect the hopping pattern of the frequency-hopping signal, and then apply the estimated hopping pattern to a quick pseudo-Doppler direction finder to perform the preliminary DOA estimation. Finally, we substitute the preliminary estimate into a maximum likelihood (ML) scheme as the initial value and obtain the accurate estimate. Computer simulations are used to verify the performance of the hybrid DOA estimation, where Bluetooth signals generated with Simulink are used as sources. Comparing maximum likelihood estimation with and without the pseudo-Doppler scheme, the results show that the hybrid DOA estimation not only accelerates the convergence of the maximum likelihood estimation, but also prevents it from converging to a local minimum. Consequently, the hybrid DOA estimation can effectively improve the accuracy of DOA estimation for a frequency-hopping signal.
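The pseudo-Doppler principle used for the preliminary DOA stage can be sketched on synthetic data as follows; the sampling rate, rotation rate and noise level are arbitrary choices, and the sketch omits the STFT detection and ML refinement stages.

```python
# Pseudo-Doppler direction finding: commutating antennas on a circle imposes a sinusoidal
# phase modulation whose phase at the rotation frequency encodes the direction of arrival.
import numpy as np

fs, f_rot = 1.0e6, 1.0e3                    # sample rate and commutation (rotation) rate
n = 4000                                    # chosen so f_rot falls exactly on an FFT bin
radius_wl = 0.25                            # commutation radius in carrier wavelengths
doa_true = np.deg2rad(40.0)

t = np.arange(n) / fs
# demodulated phase seen at the commutated antenna, plus measurement noise
phase = 2 * np.pi * radius_wl * np.cos(2 * np.pi * f_rot * t - doa_true)
phase += 0.05 * np.random.default_rng(6).normal(size=n)

spectrum = np.fft.fft(phase)
k = int(round(f_rot * n / fs))              # FFT bin of the rotation frequency
doa_est = -np.angle(spectrum[k])            # cos(w*t - theta) has spectral phase -theta at +w
print(np.rad2deg(doa_est))                  # close to 40 degrees
```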
Prehn, Sören. "Agriculture & New New Trade Theory." Doctoral thesis, 2012. http://hdl.handle.net/11858/00-1735-0000-0015-52E0-B.
Florindo, Emmanuel Fernandes. "Analysing Ukraine's bilateral trade flows in goods with the European Union: a gravity model approach." Master's thesis, 2019. http://hdl.handle.net/10773/26286.
We study the main determinants of international trade flows in goods between Ukraine and the EU-28 member countries over the period 1995-2017. We estimate the augmented gravity model of international trade with robust Ordinary Least Squares (OLS) and Poisson pseudo-maximum likelihood (PPML) estimators, and PPML with fixed effects also yields the trade elasticities for total trade, exports and imports. The regression results are used to advise trade policy makers on the implementation of the Association Free Trade Agreement with Ukraine. The main findings reveal that Ukraine's income, the income of the EU-28 trading partners and distance, but also the income differences between Ukraine and its trading partners (Linder hypothesis) and the real exchange rate, are important determinants of international trade. These results are robust to different specifications of the augmented gravity model of international trade.
Master's in Economics
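A hedged sketch (synthetic data, not the thesis dataset) of estimating an augmented gravity equation by Poisson pseudo-maximum likelihood (PPML), which keeps zero trade flows in the sample; the variables and coefficients are hypothetical and the fixed-effects variant is omitted.

```python
# Gravity equation by PPML: E[trade_ij] = exp(b0 + b1*log GDP_i + b2*log GDP_j + b3*log dist_ij).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2_000
log_gdp_o = rng.normal(10, 1, n)     # exporter GDP (log), hypothetical
log_gdp_d = rng.normal(10, 1, n)     # importer GDP (log), hypothetical
log_dist = rng.normal(7, 0.5, n)     # bilateral distance (log), hypothetical
mu = np.exp(-10 + 0.9 * log_gdp_o + 0.8 * log_gdp_d - 1.0 * log_dist)
trade = rng.poisson(mu)              # zero flows remain in the sample, unlike in log-linear OLS

X = sm.add_constant(np.column_stack([log_gdp_o, log_gdp_d, log_dist]))
ppml = sm.GLM(trade, X, family=sm.families.Poisson()).fit()
print(ppml.params)                   # elasticities with respect to GDPs and distance
```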