Dissertations / Theses on the topic 'Pseudo maximum likelihood'

Below are the top 32 dissertations and theses on the topic 'Pseudo maximum likelihood.'


1. Hu, Huilin. "Large sample theory for pseudo-maximum likelihood estimates in semiparametric models." Thesis, University of Washington (UW restricted), 1998. http://hdl.handle.net/1773/8936.

2. Iannace, Mauro. "COGARCH processes: theory and asymptotics for the pseudo-maximum likelihood estimator." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/55528.

Abstract:
COGARCH processes are Lévy-driven, continuous-time versions of the well-known GARCH models for modelling high-frequency financial returns. We first discuss properties of the underlying Lévy processes and of the symmetric and asymmetric COGARCH models. These results are prerequisites for drawing statistical inference from irregularly spaced observations. In particular, we focus on the pseudo-maximum likelihood method in order to extend some asymptotic results to the asymmetric model.
3. Fauske, Johannes. "An empirical study of the maximum pseudo-likelihood for discrete Markov random fields." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9949.

Abstract:
In this text we look at two parameter estimation methods for Markov random fields on a lattice: maximum pseudo-likelihood estimation and maximum general pseudo-likelihood estimation, abbreviated MPLE and MGPLE. The idea behind both is that by maximizing an approximation of the likelihood function, we avoid computing cumbersome normalising constants. In MPLE we maximize the product of the conditional distributions of each variable given all the other variables. In MGPLE the approximation is a compromise between the pseudo-likelihood and the likelihood function. We evaluate and compare the performance of MPLE and MGPLE on three different spatial models from which we have generated observations, and we are especially interested in what happens to the quality of the estimates as the number of observations increases. The models are the Ising model, the extended Ising model and the Sisim model. All the random variables in the models have two possible states, black or white. The Ising and extended Ising models have one and three parameters respectively; Sisim has 13 parameters. The quality of both methods' estimates improves as the number of observations grows, and MGPLE gives better results than MPLE, although certain parameter combinations of the extended Ising model give worse results.
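The normalising constant that MPLE avoids is the whole point: each site contributes only the conditional probability of its own state given its neighbours, which for a two-state Ising field is a logistic function of the neighbour sum. A minimal sketch of MPLE for the single-parameter Ising model on a periodic lattice (our own illustration, not code from the thesis; the observed field below is a random placeholder rather than a proper Ising sample):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import log_expit  # numerically stable log of the logistic sigmoid

def neighbour_sum(x):
    """Sum of the four nearest neighbours on a torus (periodic boundary)."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1))

def neg_pseudo_loglik(beta, x):
    # P(x_i = +1 | neighbours) = sigmoid(2*beta*s_i) for x_i in {-1, +1},
    # so log P(x_i | s_i) = log_expit(2*beta*x_i*s_i): no partition function needed.
    s = neighbour_sum(x)
    return -np.sum(log_expit(2.0 * beta * x * s))

rng = np.random.default_rng(0)
x = np.where(rng.random((64, 64)) < 0.5, -1, 1)  # placeholder observed field
beta_hat = minimize_scalar(neg_pseudo_loglik, args=(x,),
                           bounds=(0.0, 1.0), method="bounded").x
print(f"MPLE estimate of beta: {beta_hat:.3f}")
```

MGPLE replaces the single-site conditionals with conditionals of larger blocks, trading extra computation for a better approximation of the likelihood.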

4. Campos, Fábio Alexandre. "Estimação de elasticidades constantes: deveremos logaritmizar?" ["Estimating constant elasticities: should we take logs?"]. Master's thesis, Instituto Superior de Economia e Gestão, 2011. http://hdl.handle.net/10400.5/10297.

Abstract:
Master's in Economic and Business Decision-Making.
Economists have long ignored the implications of Jensen's inequality. In the estimation of non-linear economic models, the usual practice is to log-linearise the model, but for this procedure to be valid it is necessary to adopt a set of assumptions that turn out to be very restrictive in practice. This work, following closely the approach of Santos Silva and Tenreyro (2006), analyses the implications of estimating constant elasticities from the non-linear model and from its log-linear equivalent, from both a theoretical and an empirical point of view. From the theoretical point of view, it is shown that the practice of estimating linearised models can lead to biased estimates. The empirical application, on the other hand, does not lead to such a clear-cut conclusion. The complexity of the estimation methods for non-linear models makes them less attractive than OLS, but the theoretical reasons are strong enough to conclude that the model should not be estimated in logs. Ultimately the decision naturally rests with the user, who should in any case take the respective implications into account, perform all the available specification tests, and interpret and analyse the resulting estimates with caution.
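The theoretical point, taken from Santos Silva and Tenreyro (2006), is easy to reproduce in a simulation: if y = exp(xβ)·η with E[η|x] = 1 but Var(η|x) depending on x, then E[log η|x] is not constant and OLS on the logged model is biased, while the Poisson pseudo-maximum likelihood (PPML) estimator remains consistent. A sketch under an assumed heteroskedasticity pattern (the data-generating process is ours, not the thesis's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 50_000
x = rng.normal(size=n)
X = sm.add_constant(x)

# True model: E[y|x] = exp(x); multiplicative error with E[eta|x] = 1
# but a variance that depends on x, which makes E[log eta | x] non-constant.
sigma2 = 0.2 * np.exp(x)                       # assumed heteroskedasticity pattern
eta = rng.lognormal(mean=-sigma2 / 2, sigma=np.sqrt(sigma2))
y = np.exp(x) * eta

ols = sm.OLS(np.log(y), X).fit()               # log-linearised model
ppml = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(f"true slope 1.0 | log-OLS: {ols.params[1]:.3f} | PPML: {ppml.params[1]:.3f}")
```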
5. Jin, Shaobo. "Essays on Estimation Methods for Factor Models and Structural Equation Models." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-247292.

Abstract:
This thesis, which consists of four papers, is concerned with estimation methods in factor analysis and structural equation models; new estimation methods are proposed and investigated. In Paper I an approximation of the penalized maximum likelihood (ML) is introduced to fit an exploratory factor analysis model. Approximated penalized ML continuously and efficiently shrinks the factor loadings towards zero. It naturally factorizes a covariance matrix or a correlation matrix, and it is applicable to both orthogonal and oblique structures. Paper II, a simulation study, investigates the properties of approximated penalized ML with an orthogonal factor model. Different combinations of penalty terms and tuning-parameter selection methods are examined, and differences between factorizing a covariance matrix and factorizing a correlation matrix are explored. It is shown that approximated penalized ML frequently improves on the traditional estimation-rotation procedure. In Paper III we focus on pseudo-ML for multi-group data: data from different groups are pooled and normal theory is used to fit the model. It is shown that pseudo-ML produces consistent estimators of the factor loadings and that it is numerically easier than multi-group ML. Normal theory is, however, not applicable for estimating standard errors, so a sandwich-type estimator of the standard errors is derived. Paper IV examines the properties of the recently proposed polychoric instrumental variable (PIV) estimators for ordinal data through a simulation study, comparing PIV with conventional estimation methods (unweighted least squares and diagonally weighted least squares). PIV produces accurate estimates of factor loadings and factor covariances in the correctly specified confirmatory factor analysis model, and accurate estimates of loadings and coefficient matrices in the correctly specified structural equation model. If the model is misspecified, the robustness of PIV depends on model complexity, the underlying distribution, and the choice of instrumental variables.
6. Gholami, Mahdi. "Essays in Applied Economics: Disease Outbreaks and Gravity Model Approach to Bovines movement network in Italy." Doctoral thesis, Università di Siena, 2017. http://hdl.handle.net/11365/1005912.

Abstract:
The movement of cattle within any country, including Italy, is essential to the economics of the livestock industry. However, these movements can also carry and spread the risk of infectious disease to cattle in other geographical areas: this pattern of animal movements was behind the outbreak of foot-and-mouth disease throughout the UK in 2001, and, as Taylor et al. (2001) note, such diseases can reduce productivity and even pose threats to human health. Therefore, to reduce the risk and the economic losses from contagious diseases of this kind, authorities must be able to manage and control them (Anderson, 2002). Control measures can include monitoring animal trades, inspecting entries to and exits from premises, adopting eradication programmes (say, sending sick cattle to slaughterhouses), and quarantining cattle. To properly assess such control measures, comprehensive and detailed information on cattle movement patterns is needed. To address these issues, the European Economic Community (EEC) devised regulations that member states must adhere to (the EU traceability framework). These measures originate from public-health and food-safety concerns related to animal health and to the economic impact of outbreaks of infectious diseases. The EEC issued Council Directive 92/102/EEC in 1992 (with the latest modifications in 2013), which obliged member states to record the origin and destination of each animal; each animal must also carry an ear tag to be traceable. The European Parliament and Council in 2000 further simplified and implemented this process through a digital framework that allows countries to identify and register their bovines. In Italy, this is done through the Italian National Animal Identification and Registration Database, which provides a rich and valuable dataset, including all the required variables, for analysing the determinants of bovine movements and the effects of disease outbreaks on the pattern of bovine trade among holdings. In recognition of the importance of national and international trade in bovines, this thesis assesses the relevant determinants of bovine movements among Italian holdings and provinces. The study consists of three chapters. In the first chapter, we introduce a structural gravity model of trade and link it to the Italian bovine trade system. We then assess two important determinants of animal movements, feed prices and farmers' financial literacy, and analyse their interaction. We find that feed (corn, in our case) price shocks and farmers' financial literacy significantly affect the pattern of bovine movements. Furthermore, our findings suggest that the two factors are closely related and can offset each other's effects, in the sense that enhancing farmers' financial literacy can to some extent immunize them against unexpected price shocks and dampen the effects of unfavourable price shocks on their business.
In the second chapter, we again use the structural gravity model, this time to investigate the risk of bovine disease outbreaks among Italian provinces. We find that the disease incidence rate has a significant positive effect on the movement of bovines from origin nodes to destination points. Merging these findings with the feed (corn) price effects, we see that the two factors act in different directions: the higher the incidence rate, the less relevant feed prices become to the movement of bovines among provinces. In chapter 3, we take a more detailed look at the effects of disease outbreaks on the movement of bovines among farms and slaughterhouses. In general, we find that disease status has a negative effect on farm-to-farm movements and a positive effect on farm-to-slaughterhouse movements. In addition, ownership plays a significant role in determining the pattern of trade among holdings, in the sense that the results hold only when the two trading partners have different owners/keepers; if the effects were driven by movements between farms with the same owner, they could reflect a rational decision by owners to separate healthy bovines from sick ones. We also find that, in the case of positive disease tests, distance has a positive effect on the movement of bovines between farms, which suggests that some farms may behave opportunistically in ways that can spread disease across regions. Finally, we analyse some network characteristics of bovine movements to see how network structure interacts with disease status. Using the most common node-level measures, in-degree and out-degree, we find that in the presence of disease, farms tend to send more bovines to slaughterhouses. The interaction of in-degree and disease tests shows that farms that received more (probably sick) bovines from more suppliers in the previous period tend to send more bovines to other farms in the current period; in-degree analysis can thus help trace diseases back to the nodes most exposed to incoming bovines and hence most likely to contract disease. Out-degree, on the other hand, matters only for farm-to-farm trade: in the presence of disease, farms tend not to change their set of trading partners significantly and probably keep sending their (probably sick) bovines to other farms, which can even reduce the number of head transferred from farms to slaughterhouses. This confirms the importance of such nodes in the network: those with higher out-degree can be considered the most dangerous nodes and can act as sources of disease spread among premises.
7. Nora, Elisabete da Conceição Pires de Almeida. "Sistema de Bonus-Malus para frotas de veículos" ["A bonus-malus system for vehicle fleets"]. Master's thesis, Instituto Superior de Economia e Gestão, 2004. http://hdl.handle.net/10400.5/686.

Abstract:
Master's in Actuarial Science.
The purpose of this thesis is to construct a bonus-malus system for fleets of vehicles from the claims history, using the individual characteristics of both the vehicles and the companies that own the fleets. The bonus-malus coefficients are obtained from vehicle-specific and fleet-specific credibilities, and take into account the expected turnover of the vehicles within each fleet, where "turnover" means the percentage of vehicles in the fleet that may, by assumption, rotate, i.e. enter or leave the fleet. Fleets are indexed by f = 1, ..., F and vehicles by i = 1, ..., m_f, where m_f is the size (number of vehicles) of fleet f. Assuming the number of claims N_fi ~ P(λ_fi) follows a Poisson distribution, the parameter λ_fi = d_fi · exp(x_f'β + z_fi'γ) is a function of the rating factors observed at the fleet level (x_f) and at the vehicle level (z_fi), where d_fi is the observed duration of vehicle i of fleet f. We obtain the estimators β̂ and γ̂ using pseudo-maximum likelihood and the method proposed by Mexia/Corte Real, which is based on extremal estimators, for a set of Portuguese data covering the period from November 1997 to January 2003. Some conclusions are drawn regarding the data analysed.
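The specification λ_fi = d_fi · exp(x_f'β + z_fi'γ) is a Poisson regression with the log-duration entering as an offset, which is how a Poisson pseudo-ML fit typically handles exposure. A small sketch on simulated data (one fleet-level and one vehicle-level rating factor; names and true coefficients are illustrative, not from the thesis):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2_000
d = rng.uniform(0.1, 1.0, n)             # observed duration (exposure) per vehicle
x_f = rng.normal(size=n)                 # a fleet-level rating factor (assumed)
z_fi = rng.normal(size=n)                # a vehicle-level rating factor (assumed)
X = sm.add_constant(np.column_stack([x_f, z_fi]))

lam = d * np.exp(-2.0 + 0.5 * x_f + 0.3 * z_fi)
claims = rng.poisson(lam)

# Poisson pseudo-ML with a log-duration offset reproduces
# lambda_fi = d_fi * exp(x_f' beta + z_fi' gamma)
fit = sm.GLM(claims, X, family=sm.families.Poisson(),
             offset=np.log(d)).fit(cov_type="HC1")
print(fit.params)                        # approx (-2.0, 0.5, 0.3)
```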
8. Ribeiro, Patrick de Matos. "Pseudo maximum likelihood estimation of cointegrated multiple frequency I(1) VARMA processes using the state space framework." Doctoral thesis (supervisor: Martin Wagner; reviewer: Walter Krämer), Dortmund: Universitätsbibliothek Dortmund, 2020. http://d-nb.info/1229193693/34.

9. Carrasco, Jalmar Manuel Farfan. "Modelos de regressão beta com erro nas variáveis" ["Beta regression models with measurement error"]. Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-15082012-093632/.

Abstract:
In this thesis, we propose a beta regression model with measurement error. Among non-linear models with measurement error, such a model has not been studied extensively. We discuss estimation methods such as approximate maximum likelihood, approximate pseudo-maximum likelihood, and regression calibration. The maximum likelihood method estimates the parameters by directly maximizing the logarithm of the likelihood function. The pseudo-maximum likelihood method is used when inference in a given model involves only some, but not all, of the parameters; in this sense, the model has parameters of interest as well as nuisance parameters. When we replace the true covariate (the unobserved variable) with an estimate of the conditional expectation of the unobserved variable given the observed one, the method is known as regression calibration. We compare these estimation methods through a Monte Carlo simulation study, which shows that the maximum likelihood and pseudo-maximum likelihood methods perform better than regression calibration and the naïve approach. We use the programming language Ox (Doornik, 2011) as a computational tool. We derive the asymptotic distribution of the estimators in order to compute confidence intervals and test hypotheses, as proposed by Carroll et al. (2006, Section A.6.6), Guolo (2011) and Gong and Samaniego (1981). Moreover, we use the likelihood-ratio and gradient statistics to test hypotheses, and evaluate the performance of both tests in a simulation study. We also develop diagnostic techniques for the beta regression model with measurement error: we propose weighted standardized residuals as defined by Espinheira (2008) to verify the model assumptions and detect outliers; measures of global influence, such as the generalized Cook's distance and the likelihood displacement, are used to detect influential points; and we use the conformal local-influence approach under three perturbation schemes (case weighting, response-variable perturbation, and perturbation of the covariate with and without measurement error). We apply our results to two real data sets to illustrate the theory, and close with some conclusions and possible future work.
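Regression calibration is the simplest of the methods above to illustrate: fit the outcome model with an estimate of E[x|w] in place of the error-prone surrogate w. The toy example below (our construction in Python, not the thesis's Ox code; the true parameters, measurement-error variance, and logit link are assumptions) maximizes the beta log-likelihood directly, first naïvely on w and then on the calibrated covariate:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(7)
n = 5_000
x = rng.normal(size=n)                   # true covariate (unobserved in practice)
w = x + rng.normal(scale=0.5, size=n)    # observed, error-prone surrogate

b0, b1, phi = -0.5, 1.0, 20.0            # assumed true parameters
mu = expit(b0 + b1 * x)
y = rng.beta(mu * phi, (1 - mu) * phi)

def neg_loglik(theta, y, x):
    """Negative log-likelihood of a logit-link beta regression."""
    b0, b1, log_phi = theta
    mu, phi = expit(b0 + b1 * x), np.exp(log_phi)
    a, b = mu * phi, (1 - mu) * phi
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log1p(-y))

# Regression calibration: replace the unobserved x by E[x|w].  The reliability
# ratio uses the true x here only for brevity; in practice it comes from
# replicate measurements or validation data.
lam = np.var(x) / np.var(w)
x_hat = w.mean() + lam * (w - w.mean())

naive = minimize(neg_loglik, [0.0, 0.0, 1.0], args=(y, w))
calib = minimize(neg_loglik, [0.0, 0.0, 1.0], args=(y, x_hat))
print(f"naive slope: {naive.x[1]:.3f} | calibrated slope: {calib.x[1]:.3f}")
```

The naïve slope is attenuated towards zero, while calibration approximately undoes the attenuation.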
10. Obara, Tiphaine. "Modélisation de l'hétérogénéité tumorale par processus de branchement : cas du glioblastome" ["Modelling tumour heterogeneity by branching processes: the case of glioblastoma"]. Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0186/document.

Abstract:
The latest advances in cancer research are paving the way to better treatments. However, some tumours, such as glioblastomas, remain among the most aggressive and the most difficult to treat. The cause of this resistance could be a sub-population of cells with characteristics common to stem cells, known as cancer stem cells. Many mathematical and numerical models of tumour growth already exist, but few take intra-tumour heterogeneity into account, and this is now a real challenge. This thesis focuses on the dynamics of the different cell subpopulations in glioblastoma. It involves the development of a mathematical model of tumour growth based on a multitype, age-dependent Bellman-Harris branching process, which makes it possible to incorporate cellular heterogeneity. Numerical simulations reproduce the evolution of the different cell types and make it possible to test the action of several therapeutic strategies on tumour development. A method for estimating the parameters of the numerical model, based on pseudo-maximum likelihood, was adapted; this approach is an alternative to maximum likelihood when the sample distribution is unknown. Finally, we present the biological experiments that were set up in order to validate the numerical model.
11. Oberhofer, Harald, and Michael Pfaffermayr. "Estimating the Trade and Welfare Effects of Brexit: A Panel Data Structural Gravity Model." WU Vienna University of Economics and Business, 2018. http://epub.wu.ac.at/6020/1/wp259.pdf.

Abstract:
This paper proposes a new panel data structural gravity approach for estimating the trade and welfare effects of Brexit. The suggested Constrained Poisson Pseudo-Maximum Likelihood Estimator exhibits some useful properties for trade policy analysis and yields estimates and confidence intervals that are consistent with structural trade theory. Assuming different counterfactual post-Brexit scenarios, our main findings suggest that the UK's (EU's) exports of goods to the EU (UK) are likely to decline within a range between 7.2% and 45.7% (5.9% and 38.2%) six years after Brexit has taken place. For the UK, the negative trade effects are only partially offset by an increase in domestic goods trade and in trade with third countries, inducing a decline in the UK's real income of between 1.4% and 5.7% under the hard Brexit scenario. The estimated welfare effects for the EU are negligible in magnitude and not statistically different from zero.
Series: Department of Economics Working Paper Series
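The paper's constrained PPML estimator imposes the adding-up constraints of structural gravity, which goes beyond off-the-shelf routines; its unconstrained starting point, however, is a Poisson GLM with exporter and importer fixed effects. A minimal sketch on simulated bilateral flows (country labels, the true distance elasticity of -0.8, and column names are all invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
countries = [f"C{i:02d}" for i in range(20)]
pairs = [(o, d) for o in countries for d in countries if o != d]
df = pd.DataFrame(pairs, columns=["origin", "dest"])
df["log_dist"] = rng.uniform(5.0, 9.0, len(df))

# True distance elasticity -0.8; lognormal noise and small means keep some flows at zero
mu = np.exp(6.0 - 0.8 * df["log_dist"])
df["trade"] = rng.poisson(mu * rng.lognormal(0.0, 0.5, len(df)))

# PPML with exporter/importer fixed effects proxying the multilateral resistances
fit = smf.glm("trade ~ log_dist + C(origin) + C(dest)", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(f"estimated distance elasticity: {fit.params['log_dist']:.3f}")  # ~ -0.8
```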
12. Marchildon, Miguel. "An Application of the Gravity Model to International Trade in Narcotics." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37258.

Abstract:
The transnational traffic of narcotics has had undeniable impacts on international development, for instance stagnant economic growth in Myanmar (Chin, 2009), unsustainable agricultural practices in Yemen (Robins, 2016), and human-security threats in Colombia (Thoumi, 2013). Furthermore, globalization is a catalyst for the transnational narcotics traffic (Robins, 2016; Aas, 2007; Kelly, Maghan & Serio, 2005). Several qualitative studies exist on the transnational narcotics traffic, yet few quantitative studies examine the issue. There is thus an opportunity for novel quantitative studies of the general question: "What are the main economic factors that influence the transnational traffic of narcotics between countries?" This study looked at the specific question: "Are distance and economic size correlated with the volume of narcotics traffic between countries?" The gravity model was chosen because it centres on bilateral trade (Tinbergen, 1962), accounts for trade barriers (Kalirajan, 2008) and is empirically robust (Anderson, 2011). A basic functional gravity model relating a proxy of the narcotics traffic to distance and economic size was defined, and four augmented functional gravity models were advanced to address omitted-variable bias. The research was limited conceptually to cross-sectional and pooled time-series data, and practically to a convenience sample of secondary data drawn from the United Nations Office on Drugs and Crime's (UNODC) (2016a) Individual Drug Seizures (IDS), the World Bank's (2016) World Development Indicators, and CEPII's GeoDist (2016) datasets. A novel "dosage" approach to unit standardization was used to overcome the challenge posed by the many measures and forms of narcotics. The study used the Poisson pseudo-maximum likelihood (PPML) estimator because its estimates of the gravity model are consistent (Gourieroux et al., 1984), allow for heteroscedasticity (Silva & Tenreyro, 2006) and avoid back-transformation bias (Cox et al., 2008). The evidence analyzed in this study seems to indicate that the gravity model may not be applicable, in its current form, to the transnational narcotics traffic among countries that report drug seizures to the UNODC, although the sampling method and the choice of proxy are likely to influence these findings. Moreover, the low explanatory power of the gravity model for the narcotics traffic, reflected in the values of the pseudo-R-squared coefficient of determination, indicates that other factors are at play; for instance, authors such as Asad and Harris (2003) and Thoumi (2003) argue that institutions could be a key factor. Future empirical research could build on this thesis's findings to introduce new proxies and explore alternative theoretical frameworks.
13. Levada, Alexandre Luis Magalhães. "Combinação de modelos de campos aleatórios markovianos para classificação contextual de imagens multiespectrais" ["Combining Markov random field models for contextual classification of multispectral images"]. Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-11052010-165642/.

Abstract:
This work presents a novel MAP-MRF approach for contextual classification of multispectral images by combining higher-order Markov random field (MRF) models. The statistical modelling follows the Bayesian paradigm, with a multispectral Gaussian Markov random field model for the observations and a Potts MRF model for the prior knowledge. In this scenario, the Potts parameter β plays the role of a regularization parameter, controlling the trade-off between the likelihood and the prior, so a suitable tuning of this parameter is required for good contextual classification. The introduction of higher-order MRF models requires new parameter estimation methods, and one of the contributions of this work is the definition of novel pseudo-likelihood equations for estimating these MRF parameters in second- and third-order neighbourhood systems. Despite its wide usage in practical MRF applications, little is known about the accuracy of the maximum pseudo-likelihood approach. Approximations for the asymptotic variance of the proposed MPL estimators are derived, completely characterizing their behaviour in the limiting case and allowing statistical inference and quantitative analysis of the MRF model parameters. With the models specified and the parameters estimated, the next step is multispectral image classification. The solution to this Bayesian inference problem is given by the MAP criterion, where the optimal solution maximizes the a posteriori distribution, defining an optimization problem. As there is no analytical solution in the case of Markovian priors, combinatorial optimization algorithms are required to approximate the optimal solution. In this work we use three suboptimal methods: Iterated Conditional Modes, Maximizer of the Posterior Marginals, and the Game Strategy Approach, a variant based on non-cooperative game theory. It has been shown, however, that these methods converge to local rather than global maxima, since they are extremely dependent on the initial condition. This fact motivated the development of a novel approach for combining contextual classifiers that uses multiple simultaneous initializations, each provided by a different pointwise statistical classifier. The proposed methodology defines a robust MAP-MRF framework for the solution of general inverse problems, since it allows the use and integration of several initial conditions in applications such as image classification, denoising and restoration. To evaluate the classification results, two statistical measures are used to verify the agreement between the classifier output and the ground truth (pre-labelled samples): Cohen's Kappa and Kendall's Tau. The results show that the use of higher-order neighbourhood systems significantly improves not only the classification performance but also the MRF parameter estimation, reducing both the estimation error and the asymptotic variance; moreover, combining contextual classifiers through multiple simultaneous initializations also improves the classification performance compared with the traditional single-initialization approach.
14. Obara, Tiphaine. "Modélisation de l'hétérogénéité tumorale par processus de branchement : cas du glioblastome." Electronic thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0186.

15. Inês, Mónica Sofia Inácio Duarte. "Econometric analysis of private medicines expenditure in Portugal." Master's thesis, Instituto Superior de Economia e Gestão, 2007. http://hdl.handle.net/10400.5/653.

Abstract:
Master's in Applied Econometrics and Forecasting.
The Portuguese National Health Service states that access to health care should depend mainly on need. Conditional on need, access to pharmaceuticals should not depend on socio-economic factors such as income, social class or education, or on geographical factors such as access to pharmacies. This study uses data from the last two waves of the National Health Survey (1995/1996 and 1998/1999) and focuses on equity issues, testing for the existence of insurance-related, income-related and pharmacy-density-related inequalities. A two-part model was adopted. To model the probability of incurring private medicines expenditure, a modified logit model was specified that accounts for the double nature of the zeros of the dependent variable and allows for asymmetry. In the second part, a Poisson pseudo-maximum likelihood estimator was adopted. No misspecification was detected in the two-part model. The main results show inequity in Portuguese private medicines expenditure with respect to supplementary health insurance (private and job-related), income, and pharmacy density.
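A two-part model of this kind separates the participation decision from the spending level: a binary model for P(expenditure > 0), then a Poisson pseudo-ML regression on the positive amounts. A compact sketch on simulated data (a standard logit stands in for the thesis's modified asymmetric logit; the covariate and coefficients are invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 10_000
income = rng.normal(size=n)
X = sm.add_constant(income)

# Part 1: does the individual spend at all?  Part 2: how much, given spending.
p_spend = 1 / (1 + np.exp(-(-0.2 + 0.6 * income)))
spends = rng.random(n) < p_spend
amount = np.exp(3.0 + 0.4 * income) * rng.lognormal(-0.125, 0.5, n)  # E[noise] = 1
y = np.where(spends, amount, 0.0)

part1 = sm.Logit((y > 0).astype(int), X).fit(disp=0)
part2 = sm.GLM(y[y > 0], X[y > 0],
               family=sm.families.Poisson()).fit(cov_type="HC1")
print("participation:", part1.params)      # approx (-0.2, 0.6)
print("conditional level:", part2.params)  # approx (3.0, 0.4)
```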
16. Krisztin, Tamás, and Manfred M. Fischer. "The gravity model for international trade: Specification and estimation issues in the prevalence of zero flows." WU Vienna University of Economics and Business, 2014. http://epub.wu.ac.at/4453/3/TheGravityModelForInternationalTrade2.pdf.

Abstract:
The gravity model for international trade is one of the most successful empirical models in the trade literature. There is a long tradition of log-linearising the multiplicative model and estimating the parameters of interest by least squares, but this practice is inappropriate for several reasons. First of all, bilateral trade flows are frequently zero, and disregarding countries that do not trade with each other produces biased results. Second, log-linearisation in the presence of heteroscedasticity leads, in general, to inconsistent estimates. In recent years the Poisson gravity model, along with pseudo-maximum likelihood estimation methods, has become popular as a way of dealing with the econometric issues that arise when handling origin-destination flows. But the standard Poisson specification is vulnerable to overdispersion and excess zero flows. To overcome these problems, this paper presents zero-inflated extensions of the Poisson and negative binomial specifications as viable alternatives to both the log-linear and the standard Poisson specifications of the gravity model. The performance of the alternative specifications is assessed on a real-world example in which more than half of the country-level trade flows are zero. (authors' abstract)
Series: Working Papers in Regional Science
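A zero-inflated specification mixes a point mass at zero (a logit "never trade" regime) with an ordinary count process, so excess zeros no longer distort the count parameters. A minimal sketch with statsmodels (the single covariate and all coefficients are invented):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(5)
n = 5_000
x = rng.normal(size=n)
X = sm.add_constant(x)

# Excess zeros from a logit "never trade" regime on top of a Poisson count process
p_never = 1 / (1 + np.exp(-(-0.5 + 0.8 * x)))
y = np.where(rng.random(n) < p_never, 0, rng.poisson(np.exp(0.5 + 0.6 * x)))

zip_fit = ZeroInflatedPoisson(y, X, exog_infl=X,
                              inflation="logit").fit(maxiter=200, disp=0)
print(zip_fit.params)  # inflation (logit) coefficients first, then count coefficients
```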
17. Lovreta, Lidija. "Structural Credit Risk Models: Estimation and Applications." Doctoral thesis, Universitat Ramon Llull, 2010. http://hdl.handle.net/10803/9180.

Abstract:
Credit risk is associated with the potential failure of borrowers to fulfil their payment obligations, so the main interest of financial institutions is to measure and manage this risk accurately on a quantitative basis. To address this task, this doctoral thesis, entitled "Structural Credit Risk Models: Estimation and Applications", focuses on the practical usefulness of structural credit risk models, which are characterized by an explicit link to economic fundamentals and consequently allow a broad range of applications. Specifically, the thesis explores the information on credit risk embodied in the stock market and in the market for credit derivatives (the CDS market) on the basis of structural credit risk models.

The first chapter addresses the relative informational content of the stock and CDS markets in terms of credit risk. The analysis focuses on two crucial questions: which of these markets provides more timely information regarding credit risk, and which factors influence the informational content of the respective credit risk indicators (stock-market-implied credit spreads versus CDS spreads). The data set encompasses 94 companies (40 European, 32 US and 22 Japanese) over the period 2002-2004. The main conclusions uncover the time-varying behaviour of credit risk discovery, a stronger cross-market relationship and stock market leadership at higher levels of credit risk, and a positive relationship between the frequency of severe credit deterioration shocks and the probability of CDS market leadership.

The second chapter concentrates on the problem of estimating the latent parameters of structural models. It proposes a new iterative algorithm which, on the basis of the log-likelihood function for the time series of equity prices, provides pseudo-maximum likelihood estimates of the default barrier and of the value, volatility and expected return on the firm's assets. The procedure allows credit risk estimation based only on readily available stock market information and is tested empirically in terms of CDS spread estimation. It is demonstrated that, contrary to the standard ML approach, the proposed method ensures that the default barrier always falls within reasonable bounds. Moreover, theoretical credit spreads based on the pseudo-ML estimates yield the lowest credit default swap pricing errors when compared with the other options usually considered for determining the default barrier: the standard ML estimate, the endogenous value, KMV's default point, and the principal value of debt.

The third and final chapter provides further evidence on the performance of the proposed pseudo-maximum likelihood procedure and addresses the presence of a non-default component in CDS spreads. Specifically, it analyses the effect of demand-supply imbalance, an important aspect of liquidity in a market where the number of (protection) buyers frequently outstrips the number of sellers. The data set is extended to 163 non-financial companies (92 European and 71 North American) over the period 2002-2008. In a nutshell, after controlling for fundamentals reflected in theoretical, stock-market-implied credit spreads, demand-supply imbalance factors turn out to be important in explaining short-run CDS movements, especially during structural breaks. The results illustrate that CDS spreads reflect not only the price of credit protection but also a premium for the anticipated cost of unwinding the positions of protection sellers.
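The chapter's estimator works on the full equity-price likelihood and also recovers the default barrier; the classic two-step iteration it builds on is, however, easy to sketch: treat equity as a call option on the firm's assets, invert the pricing equation day by day, and re-estimate the asset volatility from the implied asset returns until it stops changing. A minimal sketch under plain Merton-model assumptions (the equity path, debt face value, and risk-free rate below are toy numbers):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(V, K, r, sigma, T):
    """Merton model: equity as a European call on firm assets V with strike K (debt)."""
    d1 = (np.log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return V * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def estimate_assets(E, K, r, T=1.0, tol=1e-6, max_iter=100):
    """Alternate between inverting equity prices to asset values and
    re-estimating the asset volatility from the implied asset returns."""
    sigma = np.std(np.diff(np.log(E))) * np.sqrt(252)   # initial guess: equity vol
    for _ in range(max_iter):
        # Invert E_t = bs_call(V_t, ...) day by day; the root is bracketed because
        # a call is worth less than the assets and more than assets minus debt.
        V = np.array([brentq(lambda v: bs_call(v, K, r, sigma, T) - e,
                             e, e + 2.0 * K) for e in E])
        sigma_new = np.std(np.diff(np.log(V))) * np.sqrt(252)
        if abs(sigma_new - sigma) < tol:
            break
        sigma = sigma_new
    return V, sigma

rng = np.random.default_rng(9)
E = 50.0 * np.exp(np.cumsum(rng.normal(0.0, 0.02, 250)))  # toy daily equity values
V, sigma_V = estimate_assets(E, K=100.0, r=0.02)
print(f"estimated asset volatility: {sigma_V:.3f}")
```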
APA, Harvard, Vancouver, ISO, and other styles
18

Bosquet, Clément. "Commerce international et économie de la science : distances, agglomération, effets de pairs et discrimination." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM1097/document.

Full text
Abstract:
The core of this thesis lies in the field of the economics of science, to which the first two parts are devoted. The first part questions the impact of methodological choices on the measurement of research productivity and studies the channels of knowledge diffusion. The second part studies the impact of both individual and departmental characteristics on individual publication records and analyses the gender gap in occupations on the academic labour market. The main results are the following: methodological choices in the measurement of research productivity hardly affect the estimated hierarchy of research institutions. Citations and journal quality weights measure the same dimension of publication productivity. Location matters in academic research: some departments generate more externalities than others. Externalities are higher where academics are homogeneous in terms of publication performance and have diverse research fields and, to a lower extent, if the department is large, with more women, older academics, stars and co-author connections to foreign departments. If women are less likely to be full Professors (as opposed to Assistant Professors) than men, this is neither because they are discriminated against in the promotion process, nor because the promotion cost (mobility) is higher for them, nor because they have different preferences regarding salary versus departmental prestige. A possible, but untested, explanation is that women self-select by participating less in, or exerting lower effort during, the promotion process.
APA, Harvard, Vancouver, ISO, and other styles
19

Coucke, Alice. "Statistical modeling of protein sequences beyond structural prediction : high dimensional inference with correlated data." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEE034/document.

Full text
Abstract:
Over the last decades, genomic databases have grown exponentially in size thanks to the constant progress of modern DNA sequencing. A large variety of statistical tools have been developed, at the interface between bioinformatics, machine learning, and statistical physics, to extract information from these ever-increasing datasets. In the specific context of protein sequence data, several approaches have been introduced recently by statistical physicists, such as direct-coupling analysis, a global statistical inference method based on the maximum-entropy principle, which has proven extremely effective in predicting the three-dimensional structure of proteins from purely statistical considerations. In this dissertation, we review the relevant inference methods and, encouraged by their success, discuss their extension to other challenging fields, such as sequence folding prediction and homology detection. Contrary to residue-residue contact prediction, which relies on intrinsically topological information about the network of interactions, these fields require global energetic considerations and therefore a more quantitative and detailed model. Through an extensive study on both artificial and biological data, we provide a better interpretation of the central inferred parameters, up to now poorly understood, especially in the limited-sampling regime. Finally, we present a new and more precise procedure for the inference of generative models, which leads to further improvements on real, finitely sampled data.
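To make the maximum-entropy machinery concrete, here is a toy sketch of the mean-field variant of direct-coupling analysis for binary sequences, assuming only numpy; real protein applications use a 21-letter alphabet, sequence reweighting, and stronger regularization.

```python
# Toy mean-field DCA for binary (0/1) sequences: couplings are estimated as
# minus the inverse of the connected-correlation matrix of site occupancies.
import numpy as np

def mean_field_couplings(msa, pc=0.5):
    """msa: (n_seq, n_sites) 0/1 array. Returns an (n_sites, n_sites) coupling matrix."""
    n, L = msa.shape
    f = (msa.sum(axis=0) + pc) / (n + 2 * pc)        # pseudocounted site frequencies
    fij = (msa.T @ msa + pc) / (n + 2 * pc)          # pseudocounted pair frequencies
    C = fij - np.outer(f, f)                         # connected correlations
    np.fill_diagonal(C, f * (1 - f) + 1e-6)          # single-site variances (regularized)
    J = -np.linalg.inv(C)                            # maximum-entropy / mean-field inversion
    np.fill_diagonal(J, 0.0)                         # self-couplings are not meaningful
    return J
```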
APA, Harvard, Vancouver, ISO, and other styles
20

Koch, Erwan. "Outils et modèles pour l'étude de quelques risques spatiaux et en réseaux : application aux extrêmes climatiques et à la contagion en finance." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10138/document.

Full text
Abstract:
This thesis aims at developing tools and models that are relevant for the study of some spatial risks and risks in networks. The thesis is divided into five chapters. The first one is a general introduction containing the state of the art related to each study as well as the main results. Chapter 2 develops a new multi-site precipitation generator. It is crucial to have models able to produce statistically realistic precipitation series. Whereas models previously introduced in the literature deal with daily precipitation, we develop an hourly model. The latter involves only one equation and thus introduces dependence between occurrence and intensity, two processes that the aforementioned literature assumes to be independent. Our model contains a common factor taking large-scale atmospheric conditions into account and a multivariate autoregressive contagion term accounting for the local propagation of rainfall. Despite its relative simplicity, this model shows an impressive ability to reproduce real intensities, lengths of dry periods, and the spatial dependence structure. In Chapter 3, we propose an estimation method for max-stable processes based on simulated likelihood techniques. Max-stable processes are ideally suited for the statistical modeling of spatial extremes, but their inference is difficult. Indeed, the multivariate density function is not available, and standard likelihood-based estimation methods therefore cannot be applied. Under appropriate assumptions, our estimator is efficient as both the temporal dimension and the number of simulation draws tend towards infinity. This simulation-based approach can be used for many classes of max-stable processes and can provide better results than composite-likelihood methods, especially when only a few temporal observations are available and the spatial dependence is high.
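The simulated-likelihood idea can be illustrated in miniature: replace the unavailable density with a kernel estimate built from model simulations and maximize it over the parameter. The sketch below uses a toy one-dimensional Gumbel model (an assumption for illustration; max-stable processes require far more elaborate simulators) and common random numbers so that the simulated objective is smooth in the parameter.

```python
# Miniature simulated maximum likelihood: the intractable density is replaced
# by a kernel density estimate over simulated draws.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
U = rng.uniform(size=5000)                       # common random numbers, drawn once

def simulate(theta):
    return theta - np.log(-np.log(U))            # Gumbel(loc=theta, scale=1) draws

def neg_sim_loglik(theta, data):
    kde = gaussian_kde(simulate(theta))          # smoothed simulated density
    return -np.sum(np.log(kde(data) + 1e-300))

data = rng.gumbel(loc=2.0, size=200)             # "observed" sample
fit = minimize_scalar(lambda t: neg_sim_loglik(t, data),
                      bounds=(-5.0, 5.0), method="bounded")
print(fit.x)                                     # close to the true location 2.0
```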
APA, Harvard, Vancouver, ISO, and other styles
21

Rodrigues, Agatha Sacramento. "Regressão logística com erro de medida: comparação de métodos de estimação." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-23082013-172348/.

Full text
Abstract:
We study the logistic model when explanatory variables are measured with error. Four estimation methods are presented, namely maximum pseudo-likelihood obtained through a Monte Carlo expectation-maximization type algorithm, regression calibration, SIMEX, and the naive method, which ignores the measurement error. These methods are compared through simulation. From the estimation point of view, we compare the different methods by evaluating their biases and root mean square errors. The predictive quality of the methods is evaluated based on sensitivity, specificity, positive and negative predictive values, accuracy, and the Kolmogorov-Smirnov statistic. The simulation studies show that the best-performing method is maximum pseudo-likelihood when the objective is to estimate the parameters. There is no difference among the estimation methods for predictive purposes. The results are illustrated in two real data sets from different application areas: the medical area, where the goal is the estimation of an odds ratio, and the financial area, where the goal is the prediction of new observations.
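As a concrete illustration of one of the compared methods, here is a minimal SIMEX sketch for a single error-prone covariate with known measurement-error variance, assuming numpy and statsmodels; the lambda grid, the number of replications, and the quadratic extrapolant are conventional but arbitrary choices.

```python
# Minimal SIMEX for logistic regression with one error-prone covariate w:
# add extra noise of variance lambda * sigma_u2, track the naive slope along
# the lambda grid, and extrapolate a quadratic fit back to lambda = -1
# (the no-measurement-error case).
import numpy as np
import statsmodels.api as sm

def naive_slope(y, x):
    return sm.Logit(y, sm.add_constant(x)).fit(disp=0).params[1]

def simex_slope(y, w, sigma_u2, lambdas=(0.5, 1.0, 1.5, 2.0), B=100, seed=0):
    rng = np.random.default_rng(seed)
    grid, slopes = [0.0], [naive_slope(y, w)]
    for lam in lambdas:
        reps = [naive_slope(y, w + rng.normal(0.0, np.sqrt(lam * sigma_u2), len(w)))
                for _ in range(B)]
        grid.append(lam)
        slopes.append(np.mean(reps))
    return np.polyval(np.polyfit(grid, slopes, 2), -1.0)  # extrapolate to lambda = -1
```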
APA, Harvard, Vancouver, ISO, and other styles
22

Grazian, Clara. "Contributions aux méthodes bayésiennes approchées pour modèles complexes." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED001.

Full text
Abstract:
Recently, the great complexity of modern applications, for instance in genetics, computer science, finance, climatic science, etc., has led to the proposal of new models which may realistically describe reality. In these cases, classical MCMC methods fail to approximate the posterior distribution, because they are too slow to investigate the full parameter space. New algorithms have been proposed to handle these situations, where the likelihood function is unavailable. We investigate many features of complex models: how to eliminate the nuisance parameters from the analysis and make inference on key quantities of interest, both in a Bayesian and a non-Bayesian setting, and how to build a reference prior.
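The simplest likelihood-free algorithm in this family is ABC rejection sampling; the self-contained toy below (numpy only, with an illustrative normal-mean example and an arbitrary tolerance) shows the idea.

```python
# Basic ABC rejection: sample from the prior, simulate data, and keep the
# parameter draws whose summary statistics land close to the observed ones.
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(observed, simulate, prior_sample, summary, eps, n_prop=20_000):
    s_obs = summary(observed)
    thetas = prior_sample(n_prop)
    kept = [t for t in thetas
            if np.linalg.norm(summary(simulate(t)) - s_obs) < eps]
    return np.array(kept)                       # approximate posterior sample

# Toy usage: infer a normal mean with the sample mean as summary statistic.
obs = rng.normal(1.5, 1.0, size=50)
post = abc_rejection(obs,
                     simulate=lambda t: rng.normal(t, 1.0, size=50),
                     prior_sample=lambda n: rng.normal(0, 5, size=n),
                     summary=lambda d: np.atleast_1d(d.mean()),
                     eps=0.1)
print(post.mean(), post.size)
```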
APA, Harvard, Vancouver, ISO, and other styles
23

McGarry, Gregory John. "Model-based mammographic image analysis." Thesis, Queensland University of Technology, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
24

Moreno, Betancur Margarita. "Regression modeling with missing outcomes : competing risks and longitudinal data." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA11T076/document.

Full text
Abstract:
Missing data are a common occurrence in medical studies. In regression modeling, missing outcomes limit our capability to draw inferences about the covariate effects of medical interest, which are those describing the distribution of the entire set of planned outcomes. In addition to losing precision, drawing valid inferences from the observed data requires that some assumption about the mechanism leading to missing outcomes holds. Rubin (1976, Biometrika, 63:581-592) called the missingness mechanism MAR (for "missing at random") if the probability of an outcome being missing does not depend on missing outcomes when conditioning on the observed data, and MNAR (for "missing not at random") otherwise. This distinction has important implications regarding the modeling requirements for drawing valid inferences from the available data, but generally it is not possible to assess from these data whether the missingness mechanism is MAR or MNAR. Hence, sensitivity analyses should be routinely performed to assess the robustness of inferences to assumptions about the missingness mechanism. In the field of incomplete multivariate data, in which the outcomes are gathered in a vector for which some components may be missing, MAR methods are widely available and increasingly used, and several MNAR modeling strategies have also been proposed. On the other hand, although some sensitivity analysis methodology has been developed, this is still an active area of research. The first aim of this dissertation was to develop a sensitivity analysis approach for continuous longitudinal data with drop-outs, that is, continuous outcomes that are ordered in time and completely observed for each individual up to a certain time-point, at which the individual drops out so that all subsequent outcomes are missing. The proposed approach consists in assessing the inferences obtained across a family of MNAR pattern-mixture models indexed by a so-called sensitivity parameter that quantifies the departure from MAR. The approach was prompted by a randomized clinical trial investigating the benefits of a treatment for sleep-maintenance insomnia, from which 22% of the individuals had dropped out before the study end. The second aim was to build on the existing theory for incomplete multivariate data to develop methods for competing risks data with missing causes of failure. The competing risks model is an extension of the standard survival analysis model in which failures from different causes are distinguished. Strategies for modeling competing risks functionals, such as the cause-specific hazards (CSH) and the cumulative incidence function (CIF), generally assume that the cause of failure is known for all patients, but this is not always the case. Some methods for regression with missing causes under the MAR assumption have already been proposed, especially for semi-parametric modeling of the CSH. But other useful models have received little attention, and MNAR modeling and sensitivity analysis approaches have never been considered in this setting. We propose a general framework for semi-parametric regression modeling of the CIF under MAR using inverse probability weighting and multiple imputation ideas. Also under MAR, we propose a direct likelihood approach for parametric regression modeling of the CSH and the CIF. Furthermore, we consider MNAR pattern-mixture models in the context of sensitivity analyses.
In the competing risks literature, a starting point for methodological developments for handling missing causes was a stage II breast cancer randomized clinical trial in which 23% of the deceased women had a missing cause of death. We use these data to illustrate the practical value of the proposed approaches.
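To make the multiple-imputation route concrete, the sketch below imputes missing causes from a logistic model fitted on failures with a known cause, fits a cause-1-specific Cox model per completed data set, and pools with Rubin's rules. It assumes pandas, scikit-learn, and lifelines, with illustrative column names; a proper analysis would also propagate the imputation model's parameter uncertainty.

```python
# Multiple-imputation sketch for competing risks with missing causes of failure.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

def mi_cause_specific(df, m=20, seed=0):
    """df columns (assumed): time, failed (0/1), cause (1/2 or NaN among failures), x."""
    rng = np.random.default_rng(seed)
    known = df[(df.failed == 1) & df.cause.notna()]
    model = LogisticRegression().fit(known[["x", "time"]], known.cause == 1)
    estimates, variances = [], []
    for _ in range(m):
        d = df.copy()
        miss = (d.failed == 1) & d.cause.isna()
        p1 = model.predict_proba(d.loc[miss, ["x", "time"]])[:, 1]
        d.loc[miss, "cause"] = np.where(rng.uniform(size=miss.sum()) < p1, 1, 2)
        d["event1"] = ((d.failed == 1) & (d.cause == 1)).astype(int)
        cph = CoxPHFitter().fit(d[["time", "event1", "x"]],
                                duration_col="time", event_col="event1")
        estimates.append(cph.params_["x"])
        variances.append(cph.variance_matrix_.loc["x", "x"])
    qbar = np.mean(estimates)                                        # pooled estimate
    total_var = np.mean(variances) + (1 + 1/m) * np.var(estimates, ddof=1)  # Rubin
    return qbar, total_var
```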
APA, Harvard, Vancouver, ISO, and other styles
25

Shih, Chu-Fu, and 石儲輔. "Pseudo Maximum Likelihood in Hidden Markov Model." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/88961318150457809796.

Full text
Abstract:
Master's thesis. National Taiwan University, Institute of Applied Mathematical Sciences, ROC academic year 104 (2015/16).
Hidden Markov models are a fundamental tool in applied statistics, econometrics, and machine learning for treating data drawn from multiple subpopulations. When the sequence of observations comes from a discrete-time, finite-state hidden Markov model, current practice for estimating the parameters of such models relies on local search heuristics such as the EM algorithm. A new method, named the pairing method, is proposed to serve as an initial estimate of the transition matrix and parameters of hidden Markov models. Under regularity conditions, it can be shown that EM leads to the maximum likelihood estimator given a suitable initial estimate. However, there has been no method for finding suitable initial points in hidden Markov models. The pairing method provides a good initial parameter estimate, which can expedite EM in terms of computing time. When the underlying state transition matrix is not taken into consideration, the marginal distribution is a mixture distribution, and only limited information on the transition matrix is kept for inference. In order to recover the full information on the transition matrix contained in the data, we exploit characteristics of stochastic matrices by enlarging the Markov chain to recover the information governing the dynamics of the transition matrix. Consistent and asymptotically normal estimators of the hidden transition matrix are provided.
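The role of a good starting point for EM can be illustrated on a small Gaussian HMM. The sketch below assumes the hmmlearn package and a two-state model, with the initial values standing in for the thesis's pairing estimates (whose construction is not reproduced here).

```python
# Fitting a 2-state Gaussian HMM by EM (Baum-Welch) from an explicit initial
# guess; a better initializer typically means fewer iterations and less risk
# of a poor local maximum.
import numpy as np
from hmmlearn import hmm

def fit_hmm(obs, start_means, start_transmat):
    model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                            n_iter=200, init_params="")   # keep our initial values
    model.startprob_ = np.array([0.5, 0.5])
    model.transmat_ = np.asarray(start_transmat)
    model.means_ = np.asarray(start_means).reshape(2, 1)
    model.covars_ = np.ones((2, 1))
    model.fit(obs.reshape(-1, 1))   # EM refines towards a local max of the likelihood
    return model

# e.g. fit_hmm(series, start_means=[0.0, 3.0], start_transmat=[[0.9, 0.1], [0.2, 0.8]])
```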
APA, Harvard, Vancouver, ISO, and other styles
26

ZHUO, YING-ZHEN, and 卓穎蓁. "Pseudo maximum likelihood estimation for Cox Model with Doubly Truncated Data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/33415305069710995121.

Full text
Abstract:
Master's thesis. Tunghai University, Department of Statistics, ROC academic year 104 (2015/16).
The partial likelihood (PL) function has mainly been used for proportional hazards models with censored data. The PL approach can also be used for analyzing left-truncated or left-truncated and right-censored data. However, when data are subject to double truncation, the PL approach no longer works due to the complexity of the risk sets. In this article, we propose a pseudo maximum likelihood approach for estimating the regression coefficients and baseline hazard function of the Cox model with doubly truncated data. We propose expectation-maximization algorithms for obtaining the pseudo maximum likelihood estimators (PMLE). The consistency of the PMLE is established. Simulations are performed to evaluate the finite-sample performance of the PMLE. The proposed method is illustrated using an AIDS data set.
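The contrast drawn here can be made concrete in code: under left truncation the risk set at each event time is still easy to write down, which is what keeps the PL approach tractable there. A numpy sketch of the left-truncated log partial likelihood follows; double truncation breaks exactly this risk-set construction.

```python
# Cox log partial likelihood with left-truncated risk sets: subject j is at
# risk at event time t only if entry_j < t <= time_j.
import numpy as np

def log_partial_likelihood(beta, x, time, entry, event):
    """x: (n,) covariate; time: observed times; entry: left-truncation times;
    event: 0/1 failure indicators."""
    eta = beta * x
    ll = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = (entry < time[i]) & (time >= time[i])    # truncation-adjusted risk set
        ll += eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return ll
```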
APA, Harvard, Vancouver, ISO, and other styles
27

劉奕. "The Pseudo-Maximum Likelihood Estimators of Semiparametric Transformation Models with Left-truncated and Right-censored Data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/54488163699767094798.

Full text
Abstract:
Master's thesis. Tunghai University, Department of Statistics, ROC academic year 104 (2015/16).
In this paper, we consider the maximum likelihood estimators (MLE) of the regression coefficients and the cumulative baseline hazard for semiparametric transformation models with left-truncated and right-censored (LTRC) data, when the distribution of the truncation times is assumed to belong to a given parametric family. Based on a pseudo-likelihood involving infinite-dimensional parameters, we propose expectation-maximization algorithms for obtaining the pseudo-MLE. Simulations are performed to evaluate the finite-sample performance of the MLE.
APA, Harvard, Vancouver, ISO, and other styles
28

Sotáková, Martina. "Zobecněné odhadovací rovnice (GEE)." Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-434538.

Full text
Abstract:
In this thesis we are interested in generalized estimating equations (GEE). First, we introduce the generalized linear model, on which generalized estimating equations are based. Next, we present the methods of pseudo maximum likelihood and quasi-pseudo maximum likelihood, from which we move on to the method of generalized estimating equations. Finally, we perform simulation studies which demonstrate the theoretical results presented in the thesis.
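A minimal sketch of GEE in practice, assuming statsmodels and an illustrative synthetic long-format data set:

```python
# GEE for a Poisson marginal model with an exchangeable working correlation
# across repeated measures within subjects.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"subject": np.repeat(np.arange(50), 4),   # 50 clusters of size 4
                   "x": rng.normal(size=200)})
df["y"] = rng.poisson(np.exp(0.3 * df["x"]))                 # count outcome

model = smf.gee("y ~ x", groups="subject", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())   # robust (sandwich) standard errors by default
```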
APA, Harvard, Vancouver, ISO, and other styles
29

Liao, Wei-Jhih, and 廖偉志. "On Predicting the Stage of Breast Cancer- Using a Modified Two-Stage Pseudo Maximum Likelihood Estimation (MTSPMLE)." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/46565255562280707810.

Full text
Abstract:
Master's thesis. National Changhua University of Education, Graduate Institute of Statistics and Information Science, ROC academic year 102 (2013/14).
In Traditional Chinese Medicine (TCM), there are four diagnostic methods: inspection, listening (smelling), inquiring, and palpation. Among the commonly used methods of the inspection diagnosis, tongue inspection plays an irreplaceable role. In the theory of TCM diagnosis, the tongue can be viewed as the projection of the internal organs through the transmission meridians, and it reflects the health conditions of the organs. Tongue diagnosis mainly focuses on the shape, the color, the number of red spots, and the coating of the tongue. Conventionally, Western diagnostics mostly comprise doctors' inquiries, blood tests, urine tests, ultrasound scans, and/or radioactive rays as instruments for diagnosing diseases. Due to patients' unwillingness to accept these invasive medical procedures, data sets with missing observations may arise. According to the literature, approaches such as the regression method, mean imputation, the EM algorithm, and pseudo maximum likelihood, among others, are viable for dealing with a data set with missing values. In this study, four methods are implemented: the two-stage pseudo maximum likelihood estimation based on regression imputations of GOT on the information from tongue inspection, the two-stage pseudo maximum likelihood estimation based on regression imputations of GPT on the information from tongue inspection, the conventional EM algorithm, and regression imputations using both GOT and GPT as responses and the information from tongue inspections as explanatory variables. Our goal is to predict the probabilities of the stages of the patients' breast cancer. One hundred and sixty-two sets of tongue images were taken and analyzed. Based on the analysis, we find that the two-stage pseudo maximum likelihood estimation based on regression imputations of GOT on the information from tongue inspection outperforms the other approaches.
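The two-stage idea can be sketched compactly. The snippet below, assuming statsmodels, pandas, and illustrative column names, regresses the lab value on tongue-inspection features among complete cases, imputes it where missing, and fits a logistic model for cancer stage on the observed-or-imputed value; the thesis's two-stage pseudo-ML estimator refines this idea, and the naive stage-2 standard errors shown here ignore imputation uncertainty.

```python
# Two-stage sketch: stage 1 imputes a missing lab value (e.g. GOT) from tongue
# features; stage 2 fits a logistic model for the cancer-stage outcome.
import statsmodels.api as sm

def two_stage(df, tongue_cols, lab_col="GOT", outcome_col="late_stage"):
    obs = df[df[lab_col].notna()]
    stage1 = sm.OLS(obs[lab_col], sm.add_constant(obs[tongue_cols])).fit()
    filled = df[lab_col].copy()
    miss = df[lab_col].isna()
    filled[miss] = stage1.predict(sm.add_constant(df.loc[miss, tongue_cols]))
    stage2 = sm.Logit(df[outcome_col], sm.add_constant(filled.to_frame())).fit(disp=0)
    return stage1, stage2
```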
APA, Harvard, Vancouver, ISO, and other styles
30

Lin, Sheng-Yuan, and 林聖淵. "A Hybrid DOA Estimation of Frequency Hopping Signal Source Using Pseudo Doppler Direction Finder and Maximum Likelihood Estimator." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/75853378177160337880.

Full text
Abstract:
Master's thesis. National Taiwan University of Science and Technology, Department of Electrical Engineering, ROC academic year 103 (2014/15).
In this thesis, we propose a blind direction-of-arrival (DOA) estimation scheme for a frequency hopping signal source that requires no a priori information about its hopping pattern. The DOA estimation is divided into three stages: frequency hopping signal detection, preliminary DOA estimation, and accurate DOA estimation. First, we use a low-complexity short-time Fourier transform (STFT) time-frequency analysis to detect the hopping pattern of the signal, and then apply the estimated hopping pattern to a quick pseudo Doppler direction finder to perform the preliminary DOA estimation. Finally, we use the preliminary estimate as the initial value of a maximum likelihood (ML) scheme to obtain the accurate estimate. Computer simulations are used to verify the performance of the hybrid DOA estimation, with Bluetooth signals generated in Simulink used as sources. Comparing ML estimation with and without the pseudo Doppler scheme shows that the hybrid DOA estimation not only accelerates the convergence of the ML estimation, but also prevents it from converging to a local minimum. Consequently, the hybrid DOA estimation effectively improves the accuracy of DOA estimation for a frequency hopping signal.
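The pseudo Doppler stage can be sketched compactly: antennas on a ring are sampled in turn, the carrier phase then varies sinusoidally with antenna angle, and the phase of the first circular harmonic of that phase sequence gives a coarse bearing. The snippet below is a simplified numpy illustration under an assumed noiseless-carrier, narrowband model, not the thesis's Simulink setup; the ML stage would start its search from this coarse value.

```python
# Coarse pseudo Doppler bearing estimate from one commutation around the ring.
import numpy as np

def pseudo_doppler_doa(theta_true, M=16, radius_over_lambda=0.25, snr_db=20, seed=0):
    rng = np.random.default_rng(seed)
    phi = 2 * np.pi * np.arange(M) / M                 # commutated antenna angles
    a = 2 * np.pi * radius_over_lambda                 # max phase excursion (<= pi here)
    x = np.exp(1j * a * np.cos(theta_true - phi))      # snapshot at each antenna
    noise = 10 ** (-snr_db / 20) / np.sqrt(2)
    x += noise * (rng.normal(size=M) + 1j * rng.normal(size=M))
    p = np.angle(x)                                    # demodulated phase sequence
    return np.angle(np.sum(p * np.exp(1j * phi)))      # phase of 1st harmonic = bearing

print(np.degrees(pseudo_doppler_doa(np.radians(40.0))))  # roughly 40
```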
APA, Harvard, Vancouver, ISO, and other styles
31

Prehn, Sören. "Agriculture & New New Trade Theory." Doctoral thesis, 2012. http://hdl.handle.net/11858/00-1735-0000-0015-52E0-B.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Florindo, Emmanuel Fernandes. "Analysing Ukraine’s bilateral trade flows in goods with the European Union: a gravity model approach." Master's thesis, 2019. http://hdl.handle.net/10773/26286.

Full text
Abstract:
We study the main determinants of international trade flows in goods between Ukraine and the EU-28 member states in the period 1995-2017. We estimate the augmented gravity model of international trade with robust Ordinary Least Squares (OLS), Poisson pseudo maximum likelihood (PPML), and PPML with fixed effects to estimate trade elasticities for the total trade, exports, and imports functions. The regression results are used to advise trade policy makers on the implementation of the EU Association Free-Trade Agreement with Ukraine. The main findings reveal that the income of Ukraine, the income of the EU-28 trading partners, distance, income differences between Ukraine and its trading partners (the Linder hypothesis), and the real exchange rate are important determinants of international trade. These results are robust to different trade specification functions of the augmented gravity model of international trade.
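A sketch of the PPML estimator the abstract refers to, assuming statsmodels and illustrative synthetic gravity data (trade flows enter in levels, regressors in logs, so coefficients read as elasticities):

```python
# PPML gravity regression: Poisson GLM of trade in levels on log incomes and
# log distance, with heteroskedasticity-robust standard errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"gdp_ua": rng.lognormal(3, 0.2, 200),      # illustrative data
                   "gdp_partner": rng.lognormal(4, 0.5, 200),
                   "dist": rng.lognormal(7, 0.4, 200)})
df["trade"] = rng.poisson(df.gdp_ua * df.gdp_partner / df.dist)

result = smf.glm("trade ~ np.log(gdp_ua) + np.log(gdp_partner) + np.log(dist)",
                 data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
print(result.params)   # elasticities: roughly +1, +1, -1 for this toy design
```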
Master's in Economics
APA, Harvard, Vancouver, ISO, and other styles