Dissertations / Theses on the topic 'Log-likelihood'

Consult the top 34 dissertations / theses on the topic 'Log-likelihood'. Each entry below gives the citation and, where available, the abstract.


1

Cule, Madeleine. "Maximum likelihood estimation of a multivariate log-concave density." Thesis, University of Cambridge, 2010. https://www.repository.cam.ac.uk/handle/1810/237061.

Abstract:
Density estimation is a fundamental statistical problem. Many methods are either sensitive to model misspecification (parametric models) or difficult to calibrate, especially for multivariate data (nonparametric smoothing methods). We propose an alternative approach using maximum likelihood under a qualitative assumption on the shape of the density, specifically log-concavity. The class of log-concave densities includes many common parametric families and has desirable properties. For univariate data, these estimators are relatively well understood, and are gaining in popularity in theory and practice. We discuss extensions for multivariate data, which require different techniques.

After establishing existence and uniqueness of the log-concave maximum likelihood estimator for multivariate data, we see that a reformulation allows us to compute it using standard convex optimization techniques. Unlike kernel density estimation, or other nonparametric smoothing methods, this is a fully automatic procedure, and no additional tuning parameters are required.

Since the assumption of log-concavity is non-trivial, we introduce a method for assessing the suitability of this shape constraint and apply it to several simulated datasets and one real dataset. Density estimation is often one stage in a more complicated statistical procedure. With this in mind, we show how the estimator may be used for plug-in estimation of statistical functionals. A second important extension is the use of log-concave components in mixture models. We illustrate how we may use an EM-style algorithm to fit mixture models where the number of components is known. Applications to visualization and classification are presented. In the latter case, improvement over a Gaussian mixture model is demonstrated.

Performance for density estimation is evaluated in two ways. Firstly, we consider Hellinger convergence (the usual metric of theoretical convergence results for nonparametric maximum likelihood estimators). We prove consistency with respect to this metric and heuristically discuss rates of convergence and model misspecification, supported by empirical investigation. Secondly, we use the mean integrated squared error to demonstrate favourable performance compared with kernel density estimates using a variety of bandwidth selectors, including sophisticated adaptive methods.

Throughout, we emphasise the development of stable numerical procedures able to handle the additional complexity of multivariate data.
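The reformulation the abstract refers to can be made concrete in one dimension. Below is a minimal sketch, not the thesis's algorithm, of computing a univariate log-concave maximum likelihood estimate with the off-the-shelf convex solver cvxpy; the grid, quadrature rule, and sample are illustrative assumptions, and the objective is the standard unconstrained form (1/n) * sum phi(X_i) - integral exp(phi), whose maximizer automatically integrates to one.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=200))      # illustrative sample

phi = cp.Variable(x.size)              # phi = log-density at the data points
dx = np.diff(x)
slopes = cp.multiply(1.0 / dx, cp.diff(phi))
concavity = [cp.diff(slopes) <= 0]     # decreasing slopes <=> phi concave

# Trapezoidal weights approximate the integral of exp(phi).
w = np.zeros(x.size)
w[:-1] += dx / 2
w[1:] += dx / 2
objective = cp.Maximize(cp.sum(phi) / x.size
                        - cp.sum(cp.multiply(w, cp.exp(phi))))

cp.Problem(objective, concavity).solve()
density = np.exp(phi.value)            # fitted log-concave density at the data
```

Note that no bandwidth or other tuning parameter appears anywhere above, which is the "fully automatic" property the abstract highlights.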
2

Pan, Juming. "Adaptive LASSO For Mixed Model Selection via Profile Log-Likelihood." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1466633921.

3

Modarres-Mousavi, Shabnam. "Monitoring Markov Dependent Binary Observations with a Log-Likelihood Ratio Based CUSUM Control Chart." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/26235.

Abstract:
Our objective is to monitor the changes in a proportion with correlated binary observations. All of the published work on this subject used the first-order Markov chain model for the data. Increasing the order of dependence above one by extending a standard Markov chain model entails an exponential increase of both the number of parameters and the dimension of the transition probability matrix. In this dissertation, we develop a particular Markov chain structure, the Multilevel Model (MLM), to model the correlation between binary data. The basic idea is to assign a lower probability to observing a 1 when all previous correlated observations are 0's, and a higher probability to observing a 1 as the last observed 1 gets closer to the current observation. We refer to each of the distinct situations of observing a 1 as a 'level'. For a given order of dependence, at most that many different values of the conditional probability of observing a 1 can be assigned, so the number of levels is always less than or equal to the order of dependence. Compared to a direct extension of the first-order Markov model to higher orders, our model is considerably parsimonious: the number of parameters for the MLM is only one plus the number of levels, with a correspondingly smaller transition probability matrix. We construct a CUSUM control chart for monitoring a proportion with correlated binary observations. First, we use the probability structure of a first-order Markov chain to derive a log-likelihood ratio based CUSUM control statistic. Then, we model this CUSUM statistic itself as a Markov chain, which in turn allows for designing a control chart with specified statistical properties: the Markov Binary CUSUM (MBCUSUM) chart. We generalize the MBCUSUM to account for any order of dependence between binary observations through applying the MLM to the data and to our CUSUM control statistic. We verify that the MBCUSUM has a better performance than a curtailed Shewhart chart. Also, we show that except for extremely large changes in the proportion of interest, the MBCUSUM control chart detects the changes faster than the Bernoulli CUSUM control chart, which is designed for independent observations.
Ph. D.
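For orientation, the log-likelihood ratio CUSUM recursion at the heart of such charts is short. A hedged sketch for independent Bernoulli data follows; the MBCUSUM replaces these increments with ones derived from the Markov/MLM probability structure, and the values of p0, p1 and the limit h below are made up.

```python
import numpy as np

def bernoulli_llr_cusum(x, p0=0.05, p1=0.15, h=4.0):
    """CUSUM path for a shift from proportion p0 to p1; returns the path
    and the first index at which the chart signals (None if it never does)."""
    up = np.log(p1 / p0)                 # increment when x_t = 1
    down = np.log((1 - p1) / (1 - p0))   # increment when x_t = 0
    c, path = 0.0, []
    for t, xt in enumerate(x):
        c = max(0.0, c + (up if xt else down))
        path.append(c)
        if c >= h:                       # h is chosen to give a target ARL
            return np.array(path), t
    return np.array(path), None

rng = np.random.default_rng(1)
x = np.concatenate([rng.random(200) < 0.05, rng.random(100) < 0.15])
path, alarm = bernoulli_llr_cusum(x.astype(int))
```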
4

Foroughi pour, Ali. "Linear Approximations for Second Order High Dimensional Model Representation of the Log Likelihood Ratio." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555419601408423.

5

Fei, Jia. "On a turbo decoder design for low power dissipation." Thesis, Virginia Tech, 2000. http://hdl.handle.net/10919/34090.

Abstract:
A new coding scheme called "turbo coding" has generated tremendous interest in channel coding of digital communication systems due to its high error correcting capability. Two key innovations in turbo coding are parallel concatenated encoding and iterative decoding. A soft-in soft-out component decoder can be implemented using the maximum a posteriori (MAP) or the maximum likelihood (ML) decoding algorithm. While the MAP algorithm offers better performance than the ML algorithm, the computation is complex and not suitable for hardware implementation. The log-MAP algorithm, which performs the necessary computations in the logarithm domain, greatly reduces hardware complexity. With the proliferation of battery-powered devices, power dissipation, along with speed and area, is a major concern in VLSI design. In this thesis, we investigated a low-power design of a turbo decoder based on the log-MAP algorithm. Our turbo decoder has two component log-MAP decoders, which perform the decoding process alternately. Two major ideas for low-power design are the employment of a variable number of iterations during the decoding process and the shutdown of inactive component decoders. The number of iterations during decoding is determined dynamically according to the channel condition to save power. When a component decoder is inactive, the clocks and spurious inputs to the decoder are blocked to reduce power dissipation. We followed the standard cell design approach to design the proposed turbo decoder. The decoder was described in VHDL, and then synthesized to measure the performance of the circuit in area, speed and power. Our decoder achieves good performance in terms of bit error rate. The two proposed methods significantly reduce power dissipation and energy consumption.
Master of Science
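The reason the log-MAP algorithm is hardware-friendly is that working in the logarithm domain reduces the MAP recursions to additions plus the "max-star" operation. A minimal sketch of that operation (not taken from the thesis's VHDL):

```python
import math

def max_star(a: float, b: float) -> float:
    """log(exp(a) + exp(b)), computed stably; the log1p correction term is
    what a hardware log-MAP decoder typically reads from a small lookup table."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a: float, b: float) -> float:
    """The max-log-MAP simplification drops the correction entirely,
    trading a small decoding loss for lower complexity and power."""
    return max(a, b)
```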
6

Hatzinger, Reinhold, and Walter Katzenbeisser. "Log-linear Rasch-type models for repeated categorical data with a psychobiological application." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2008. http://epub.wu.ac.at/126/1/document.pdf.

Abstract:
The purpose of this paper is to generalize regression models for repeated categorical data based on maximizing a conditional likelihood. Some existing methods, such as those proposed by Duncan (1985), Fischer (1989), and Agresti (1993, 1997), are special cases of this latent variable approach, used to account for dependencies in clustered observations. The generalization concerns the incorporation of rather general data structures such as subject-specific time-dependent covariates, a variable number of observations per subject, and time periods of arbitrary length, in order to evaluate treatment effects on a categorical response variable via a linear parameterization. The response may be polytomous, ordinal or dichotomous. The main tool is the log-linear representation of appropriately parameterized Rasch-type models, which can be fitted using standard software, e.g., R. The proposed method is applied to data from a psychiatric study on the evaluation of psychobiological variables in the therapy of depression. The effects of plasma levels of the antidepressant drug Clomipramine and neuroendocrinological variables on the presence or absence of anxiety symptoms in 45 female patients are analyzed. The individual measurements of the time-dependent variables were recorded on 2 to 11 occasions. The findings show that certain combinations of the variables investigated are favorable for the treatment outcome. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
7

Verbeek, Benjamin. "Maximum Likelihood Estimation of Hyperon Parameters in Python : Facilitating Novel Studies of Fundamental Symmetries with Modern Software Tools." Thesis, Uppsala universitet, Institutionen för materialvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446041.

Abstract:
In this project, an algorithm has been implemented in Python to estimate the parameters describing the production and decay of a spin-1/2 baryon-antibaryon pair. This decay can give clues about a fundamental asymmetry between matter and antimatter. A model-independent formalism, developed by the Uppsala hadron physics group and previously implemented in C++, has been shown to be a promising tool in the search for physics beyond the Standard Model (SM) of particle physics. The program developed in this work provides a more user-friendly alternative, and is intended to motivate further use of the formalism through a more maintainable, customizable and readable implementation. The hope is that this will expedite future research in the area of charge parity (CP) violation and eventually lead to answers to questions such as why the universe consists of matter. A Monte Carlo integrator is used for normalization and a Python library for function minimization. The program returns an estimate of the physics parameters, including error estimation. Tests of statistical properties of the estimator, such as consistency and bias, have been performed. To speed up the implementation, the just-in-time compiler Numba has been employed, which resulted in a speed increase of a factor of about 400 compared to plain Python code.
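The abstract's recipe (a compiled likelihood plus an off-the-shelf minimizer) can be illustrated generically. The exponential model below is a stand-in assumption, not the hyperon formalism or its Monte Carlo-normalized likelihood:

```python
import numpy as np
from numba import njit
from scipy.optimize import minimize

@njit(cache=True)
def neg_log_likelihood(theta, data):
    """-log L for an Exp(lam) toy model; Numba compiles this hot path."""
    lam = theta[0]
    if lam <= 0.0:
        return np.inf
    return lam * data.sum() - data.size * np.log(lam)

rng = np.random.default_rng(7)
data = rng.exponential(scale=2.0, size=10_000)   # true rate = 0.5

res = minimize(neg_log_likelihood, x0=np.array([1.0]), args=(data,),
               method="Nelder-Mead")
lam_hat = res.x[0]   # error bars would come from the curvature (Hessian)
```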
8

Mendoza, Natalie Verónika Rondinel. "A distribuição log-logística exponenciada geométrica: dupla ativação." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-26102012-150929/.

Abstract:
In this work, we propose a new four-parameter distribution, the exponentiated log-logistic geometric distribution, based on a double activation mechanism for modeling lifetime data. For this new distribution, we study the probability density function, the cumulative distribution function, the survival function and the hazard rate function, which accommodates the main hazard shapes: increasing, decreasing, unimodal, bimodal and bathtub. We also obtain expansions of the density function and expressions for the probability-weighted moments, the moment generating function, the mean deviations and the Bonferroni and Lorenz curves. For censored data, we use the maximum likelihood method to estimate the parameters. We also propose a regression model based on the logarithm of the exponentiated log-logistic geometric distribution with double activation, which extends the exponentiated logistic and logistic regression models and can provide a better fit to real data than these particular cases. Finally, two applications are presented to illustrate the new distribution.
9

Cruz, José Nilton da. "A nova família de distribuições odd log-logística: teoria e aplicações." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-03052016-183138/.

Abstract:
In this study, a new family of distributions is proposed which allows modeling survival data when the hazard function has unimodal and U-shaped (bathtub) forms. Modifications of the Weibull, Fréchet, generalized half-normal, log-logistic and lognormal distributions are considered. For censored and non-censored data, we consider the maximum likelihood estimators of the proposed model in order to check the flexibility of the new family. A location-scale regression model is also used to assess the influence of covariates on survival times. Additionally, a residual analysis is conducted based on modified deviance residuals. Simulation studies, using different parameter values, censoring percentages and sample sizes, are performed with the objective of verifying the empirical distribution of the martingale-type and modified deviance residuals. To detect influential observations, measures of local influence are used; these are diagnostic measures based on small perturbations of the data or of the proposed model. Situations may occur in which the assumption of independence between failure and censoring times is not valid, so another objective of this work is to consider an informative censoring mechanism based on the marginal likelihood, using the log-odd log-logistic Weibull distribution in the modeling. Finally, the methodologies described are applied to real data sets.
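For context, the odd log-logistic construction underlying this family turns any baseline cdf G into a new cdf F(x) = G(x)^a / (G(x)^a + (1 - G(x))^a), which reduces to G when a = 1. A minimal sketch with a Weibull baseline (an illustrative choice, not a result from the thesis):

```python
import numpy as np
from scipy.stats import weibull_min

def odd_log_logistic_cdf(x, a, baseline=weibull_min(c=1.5)):
    """cdf of the odd log-logistic-G family for a baseline cdf G."""
    g = baseline.cdf(x)
    return g**a / (g**a + (1.0 - g)**a)

x = np.linspace(0.01, 5.0, 200)
F = odd_log_logistic_cdf(x, a=0.4)   # a != 1 reshapes the hazard, which is
                                     # how the family reaches unimodal and
                                     # bathtub forms
```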
10

Chaudhari, Pragat. "Analytical Methods for the Performance Evaluation of Binary Linear Block Codes." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/904.

Abstract:
The modeling of the soft-output decoding of a binary linear block code using a Binary Phase Shift Keying (BPSK) modulation system (with reduced noise power) is the main focus of this work. With this model, it is possible to provide bit error performance approximations to help in the evaluation of the performance of binary linear block codes. As well, the model can be used in the design of communications systems which require knowledge of the characteristics of the channel, such as combined source-channel coding. Assuming an Additive White Gaussian Noise channel model, soft-output Log Likelihood Ratio (LLR) values are modeled to be Gaussian distributed. The bit error performance for a binary linear code over an AWGN channel can then be approximated using the Q-function that is used for BPSK systems. Simulation results are presented which show that the actual bit error performance of the code is very well approximated by the LLR approximation, especially for low signal-to-noise ratios (SNR). A new measure of the coding gain achievable through the use of a code is introduced by comparing the LLR variance to that of an equivalently scaled BPSK system. Furthermore, arguments are presented which show that the approximation requires fewer samples than conventional simulation methods to obtain the same confidence in the bit error probability value. This translates into fewer computations and therefore, less time is needed to obtain performance results. Other work was completed that uses a discrete Fourier Transform technique to calculate the weight distribution of a linear code. The weight distribution of a code is defined by the number of codewords which have a certain number of ones in the codewords. For codeword lengths of small to moderate size, this method is faster and provides an easily implementable and methodical approach over other methods. This technique has the added advantage over other techniques of being able to methodically calculate the number of codewords of a particular Hamming weight instead of calculating the entire weight distribution of the code.
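The central modeling step, treating soft-output LLRs as Gaussian and reading the bit error rate off the BPSK Q-function, fits in a few lines. A sketch under the usual consistency condition (LLR variance equal to twice its mean), with made-up numbers:

```python
import numpy as np
from scipy.stats import norm

def ber_from_llr_mean(mu: float) -> float:
    """P(bit error) = P(LLR < 0) = Q(mu / sigma) with sigma**2 = 2 * mu."""
    return norm.sf(mu / np.sqrt(2.0 * mu))

# Sanity check against uncoded BPSK: with noise variance sigma2, the LLR
# 2y/sigma2 has mean 2/sigma2, and the formula recovers the textbook Q(1/sigma).
sigma2 = 0.5
print(ber_from_llr_mean(2.0 / sigma2))   # equals norm.sf(1/np.sqrt(sigma2))
```

The thesis's coding-gain measure compares such LLR statistics against those of an equivalently scaled BPSK system.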
11

Lu, Min. "A Study of the Calibration Regression Model with Censored Lifetime Medical Cost." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/14.

Abstract:
Medical cost has received increasing interest in biostatistics and public health. Statistical analysis and inference for lifetime medical cost are challenging because survival times are censored for some study subjects, whose subsequent costs are unknown. Huang (2002) proposed the calibration regression model, a semiparametric regression tool for studying medical cost in association with covariates. In this thesis, an inference procedure is investigated using the empirical likelihood ratio method. Unadjusted and adjusted empirical likelihood confidence regions are constructed for the regression parameters. We compare the proposed empirical likelihood methods with a normal approximation based method. Simulation results show that the proposed empirical likelihood ratio method outperforms the normal approximation based method in terms of coverage probability. In particular, the adjusted empirical likelihood is the best one, overcoming the undercoverage problem.
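To fix ideas, the empirical likelihood ratio computation is sketched below for the simplest case of a scalar mean (Owen's construction); the thesis applies the same principle to the calibration model's regression parameters, which is substantially more involved.

```python
import numpy as np
from scipy.optimize import brentq

def neg2_log_el_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean mu."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf            # mu outside the convex hull of the data
    # The Lagrange multiplier lam solves sum z_i / (1 + lam * z_i) = 0,
    # subject to 1 + lam * z_i > 0 for every i.
    eps = 1e-10
    lo = (-1.0 + eps) / z.max()
    hi = (-1.0 + eps) / z.min()
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(3)
x = rng.exponential(size=50)
# A 95% confidence region collects all mu with the statistic <= chi2(1).ppf(.95).
```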
12

Sazak, Hakan Savas. "Estimation And Hypothesis Testing In Stochastic Regression." PhD thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/3/724294/index.pdf.

Abstract:
Regression analysis is very popular among researchers in various fields, but almost all researchers use the classical methods, which assume that X is nonstochastic and the error is normally distributed. However, in real-life problems, X is generally stochastic and the error can be nonnormal. The maximum likelihood (ML) estimation technique, which is known to have optimal features, is very problematic in situations where the distribution of X (marginal part) or of the error (conditional part) is nonnormal. The modified maximum likelihood (MML) technique, which asymptotically gives estimators equivalent to the ML estimators, allows us to conduct estimation and hypothesis testing procedures under nonnormal marginal and conditional distributions. In this study we show that MML estimators are highly efficient and robust. Moreover, the test statistics based on the MML estimators are much more powerful and robust than the test statistics based on the least squares (LS) estimators that are mostly used in the literature. Theoretically, MML estimators are asymptotically minimum variance bound (MVB) estimators, but simulation results show that they are highly efficient even for small sample sizes. In this thesis, the Weibull and Generalized Logistic distributions are used for illustration and the results given are based on these distributions. As a future study, the MML technique can be applied to other types of distributions, and the procedures based on bivariate data can be extended to multivariate data.
13

Braga, Altemir da Silva. "Extensions of the normal distribution using the odd log-logistic family: theory and applications." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-02102017-092313/.

Abstract:
In this study we propose three new distributions and a study with longitudinal data. The first is the odd log-logistic normal distribution: theory and applications in the analysis of experiments; the second is the odd log-logistic Student t distribution: theory and applications; the third is the odd log-logistic skew normal distribution: a new skew-bimodal distribution with applications in the analysis of experiments; and the fourth is a regression model with random effects for the odd log-logistic skew normal distribution, with an application to longitudinal data. Several properties are derived, such as symmetry, the quantile function, some expansions, ordinary incomplete moments, mean deviations and the moment generating function. The flexibility of the new distributions is compared with the skew-normal, beta-normal, Kumaraswamy-normal and gamma-normal models. The model parameters are estimated by the method of maximum likelihood, and simulation studies are carried out to verify the asymptotic properties of the estimates. Quantile residuals and sensitivity analyses are proposed to check for extreme observations and the quality of the fits. In the applications, regression models are used for data from completely randomized designs (CRD) or randomized complete block designs (DBC). The models can therefore be used in practical situations for completely randomized or randomized block experiments, particularly when the data show evidence of asymmetry, kurtosis and bimodality.
14

CAMPOS, Joelson da Cruz. "Modelos de regressão log-Birnbaum-Saunders generalizados para dados com censura intervalar." Universidade Federal de Campina Grande, 2011. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1332.

Abstract:
In this work, we propose a generalized log-Birnbaum-Saunders regression model for analyzing interval-censored data. The score functions and the observed Fisher information matrix are obtained, and the process for estimating the model parameters is discussed. As a measure of influence, we consider the likelihood displacement under several perturbation schemes. The normal curvatures of local influence are derived, and we conduct a residual analysis based on the adjusted Cox-Snell, martingale and modified deviance residuals. A Monte Carlo simulation study is carried out to investigate the behavior of the empirical distribution of the proposed residuals. Finally, an application with real data is presented.
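The key likelihood ingredient for interval-censored data is that each subject contributes log(F(R_i) - F(L_i)) for its censoring interval (L_i, R_i]. A hedged sketch with a plain Birnbaum-Saunders model and made-up intervals (the thesis's generalized model adds structure not reproduced here):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def bs_cdf(t, alpha, beta):
    """Birnbaum-Saunders cdf with shape alpha and scale beta."""
    return norm.cdf((np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha)

def neg_loglik(params, left, right):
    alpha, beta = np.exp(params)          # log-parameterization keeps both > 0
    p = bs_cdf(right, alpha, beta) - bs_cdf(left, alpha, beta)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

left = np.array([0.5, 1.0, 2.0, 0.8])     # illustrative intervals (L_i, R_i]
right = np.array([1.5, 2.5, 4.0, 2.0])
fit = minimize(neg_loglik, x0=np.log([0.5, 1.5]), args=(left, right),
               method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(fit.x)
```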
15

SILVA, Priscila Gonçalves da. "Inferência e diagnóstico em modelos não lineares Log-Gama generalizados." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18637.

Abstract:
Young and Bakir (1987) proposed the class of generalized log-gamma linear regression models (GLGLM) to analyze survival data. In our work, we extend this class of models by considering a nonlinear structure for the regression parameters. The new class of models is called generalized log-gamma nonlinear regression models (GLGNLM). We also propose a matrix formula for the second-order bias of the maximum likelihood estimate of the regression parameter vector in the GLGNLM class, using the results of Cox and Snell (1968) and the bootstrap technique (Efron, 1979) to obtain bias-corrected maximum likelihood estimates. Residuals and diagnostic techniques are proposed for the GLGNLM, such as generalized leverage and local and global influence. A general matrix expression is obtained for the Bartlett correction factor to the likelihood ratio statistic in this class of models, and simulation studies were developed to evaluate and compare numerically the performance of the likelihood ratio tests and their corrected versions regarding size and power in finite samples. Furthermore, general matrix expressions are obtained for the Bartlett-type correction factors for the score and gradient statistics, with simulation studies evaluating the performance of the score and gradient tests and their corrected versions with respect to size and power in finite samples.
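The Bartlett correction itself is a simple rescaling: if the likelihood ratio statistic W has q degrees of freedom and E[W] = q(1 + c) to second order, the corrected statistic is W/(1 + c). A hedged sketch in which the factor c is estimated from simulated null statistics rather than from the thesis's analytical matrix expression:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_correct(w, c):
    """Rescale LR statistics w by the Bartlett factor c (E[W] = q*(1+c))."""
    return np.asarray(w) / (1.0 + c)

q = 2
rng = np.random.default_rng(0)
w_null = chi2(df=q).rvs(size=5000, random_state=rng) * 1.08  # toy inflated LRs

c_hat = w_null.mean() / q - 1.0          # plug-in factor from null simulations
p_values = chi2(df=q).sf(bartlett_correct(w_null, c_hat))
# After correction the null statistics match the chi2(q) mean, improving size.
```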
16

LeSage, James P., and Manfred M. Fischer. "MCMC estimation of panel gravity models in the presence of network dependence." WU Vienna University of Economics and Business, 2018. http://epub.wu.ac.at/6550/1/2018%2D10%2D2_WU%2DPub__panel_gravity_model.pdf.

Abstract:
Past focus in the panel gravity literature has been on multidimensional fixed effects specifications in an effort to accommodate heterogeneity. After introducing fixed effects for each origin-destination dyad and time-period specific effects, we find evidence of cross-sectional dependence in flows. We propose a simultaneous dependence gravity model that allows for network dependence in flows, along with computationally efficient MCMC estimation methods that produce a Monte Carlo integration estimate of the log-marginal likelihood useful for model comparison. Application of the model to a panel of trade flows points to network spillover effects, suggesting the presence of network dependence and biased estimates from conventional trade flow specifications.
Series: Working Papers in Regional Science
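The role of the log-marginal likelihood in such comparisons can be illustrated with the simplest possible Monte Carlo integration estimate, averaging the likelihood over prior draws for a toy Gaussian model; the paper's estimator for the gravity model is of course far more elaborate.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(4)
y = rng.normal(loc=1.0, size=30)         # toy data from N(theta, 1)

S = 100_000
theta = rng.normal(loc=0.0, scale=2.0, size=S)          # draws from the prior
loglik = norm.logpdf(y[:, None], loc=theta).sum(axis=0) # log p(y | theta_s)

log_marginal = logsumexp(loglik) - np.log(S)  # log of the Monte Carlo average
# Models are then compared through differences in log_marginal (Bayes factors).
```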
17

Ureten, Suzan. "Single and Multiple Emitter Localization in Cognitive Radio Networks." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35692.

Abstract:
Cognitive radio (CR) is often described as a context-intelligent radio, capable of changing its transmit parameters dynamically based on interaction with the environment in which it operates. The work in this thesis explores the problem of using received signal strength (RSS) measurements taken by a network of CR nodes to generate an interference map of a given geographical area and to estimate the locations of multiple primary transmitters that operate simultaneously in the area. A probabilistic model of the problem is developed, and algorithms to address the location estimation challenges are proposed. Three approaches are proposed to solve the localization problem. The first approach estimates the locations from the generated interference map when no information about the propagation model or any of its parameters is available. The second approach approximates the maximum likelihood (ML) estimate of the transmitter locations with the grid search method when the model is known and its parameters are available. The third approach also requires knowledge of the model parameters, but is based on generating samples from the joint posterior of the unknown location parameters with Markov chain Monte Carlo (MCMC) methods, as an alternative to the highly computationally complex grid search approach. For the RF cartography generation problem, we study global and local interpolation techniques, specifically Delaunay triangulation based techniques, as the use of an existing triangulation provides a computationally attractive solution. We present a comparative performance evaluation of these interpolation techniques in terms of RF field strength estimation and emitter localization. Even though the estimates obtained from the generated interference maps are less accurate than the ML estimator, the rough estimates are used to initialize a more accurate algorithm, such as the MCMC technique, to reduce the complexity of the algorithm. The complexity issues of ML estimators based on full grid search are also addressed by various types of iterative grid search methods. One challenge in applying the ML estimation algorithm to the multiple emitter localization problem is that it requires an approximation to the pdf of a sum of log-normal random variables for the likelihood calculations at each grid location. This motivates our investigation of the sum-of-log-normal approximations studied in the literature, in order to select the approximation appropriate to our model assumptions. As a final extension of this work, we propose our own approximation based on fitting a distribution to a set of simulated data and compare our approach with the well-known Fenton-Wilkinson approximation, a simple and computationally efficient approach that fits a log-normal distribution to a sum of log-normals by matching the first and second central moments of the random variables. We demonstrate that the location estimation accuracy of the grid search technique obtained with our proposed approximation is higher than that obtained with Fenton-Wilkinson's in many different scenarios.
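The Fenton-Wilkinson approximation the abstract compares against is compact enough to sketch: fit a single log-normal to a sum of independent log-normals by matching the first two central moments of the sum. The parameter values below are illustrative.

```python
import numpy as np

def fenton_wilkinson(mu, sigma):
    """Given arrays of underlying-normal parameters for independent log-normal
    summands, return (mu_s, sigma_s) of the moment-matched log-normal sum."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m = np.exp(mu + sigma**2 / 2).sum()                            # E[sum]
    v = (np.exp(2 * mu + sigma**2) * (np.exp(sigma**2) - 1)).sum() # Var[sum]
    sigma_s2 = np.log1p(v / m**2)
    return np.log(m) - sigma_s2 / 2, np.sqrt(sigma_s2)

mu_s, sigma_s = fenton_wilkinson([0.0, 0.5, 1.0], [0.6, 0.6, 0.6])
```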
18

Sundström, David. "On specification and inference in the econometrics of public procurement." Doctoral thesis, Umeå universitet, Nationalekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-121681.

Abstract:
In Paper [I] we use data on Swedish public procurement auctions for internal regular cleaning service contracts to provide novel empirical evidence regarding green public procurement (GPP) and its effect on the potential suppliers' decision to submit a bid and their probability of being qualified for supplier selection. We find only a weak effect on supplier behavior, which suggests that GPP does not live up to its political expectations. However, several environmental criteria appear to be associated with increased complexity, as indicated by the reduced probability of a bid being qualified in the post-qualification process. As such, GPP appears to have limited or no potential to function as an environmental policy instrument.

In Paper [II] the observation is made that empirical evaluations of the effect of policies transmitted through public procurements on bid sizes are made using linear regressions or by more involved non-linear structural models. The aspiration is typically to determine a marginal effect. Here, I compare marginal effects generated under both types of specifications. I study how a political initiative to make firms less environmentally damaging, implemented through public procurement, influences Swedish firms' behavior. The collected evidence brings about a statistically as well as economically significant effect on firms' bids and costs.

Paper [III] embarks by noting that auction theory suggests that as the number of bidders (competition) increases, the sizes of the participants' bids decrease. An issue in the empirical literature on auctions is which measurement(s) of competition to use. Utilizing a dataset on public procurements containing measurements of both the actual and the potential number of bidders, I find that a workhorse model of public procurements is best fitted to data using only actual bidders as the measurement of competition. Acknowledging that all measurements of competition may be erroneous, I propose an instrumental variable estimator that (given my data) brings about a competition effect bounded by those generated by specifications using the actual and potential number of bidders, respectively. Also, some asymptotic results are provided for non-linear least squares estimators obtained from a dependent variable transformation model.

Paper [IV] introduces a novel method to measure bidders' costs (valuations) in descending (ascending) auctions. Based on two bounded rationality constraints, bidders' costs (valuations) are given an imperfect measurements interpretation robust to behavioral deviations from traditional rationality assumptions. Theory provides no guidance as to the shape of the cost (valuation) distributions, while empirical evidence suggests them to be positively skewed. Consequently, a flexible distribution is employed in an imperfect measurements framework. An illustration of the proposed method on Swedish public procurement data is provided, along with a comparison to a traditional Bayesian Nash Equilibrium approach.
19

Rizzato, Fernanda Bührer. "Modelos de regressão log-gama generalizado com fração de cura." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-19032007-152443/.

Abstract:
In this work, the generalized log-gamma model is reparameterized to allow for the possibility that long-term survivors are present in the data. The models attempt to estimate separately the effects of covariates on the acceleration or deceleration of the timing of a given event and on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is considered via maximum likelihood. Some influence methods, such as local influence and the total local influence of an individual, are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma model with cure fraction. A residual analysis is performed to check the goodness of fit of the model.
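The cure-fraction construction is worth writing out: the population survival mixes a cured proportion pi, modelled through a logistic link as in the abstract, with the survival function of the susceptible individuals. A minimal sketch (the Weibull choice for the uncured and all values below are illustrative):

```python
import numpy as np

def population_survival(t, x_beta, shape=1.3, scale=2.0):
    """S_pop(t) = pi + (1 - pi) * S_u(t), with logistic cure fraction pi."""
    pi = 1.0 / (1.0 + np.exp(-x_beta))          # cured proportion
    s_uncured = np.exp(-(t / scale) ** shape)   # Weibull survival, stand-in
    return pi + (1.0 - pi) * s_uncured

# As t grows, S_pop(t) flattens at pi instead of dropping to zero, which is
# the signature of long-term survivors in the data.
print(population_survival(np.array([0.5, 2.0, 10.0, 50.0]), x_beta=-1.0))
```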
20

Andrés Ferrer, Jesús. "Statistical approaches for natural language modelling and monotone statistical machine translation." Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/7109.

Abstract:
This thesis gathers some contributions to statistical pattern recognition and, more specifically, to several natural language processing tasks. Several well-known statistical techniques are revisited: parameter estimation, loss function design and statistical modelling. These techniques are applied to several natural language processing tasks such as document classification, natural language modelling and statistical machine translation. Regarding parameter estimation, we address the smoothing problem by proposing a new constrained-domain maximum likelihood estimation (CDMLE) technique. The CDMLE technique avoids the need for the smoothing step that causes the maximum likelihood estimator to lose its properties. This technique is applied to document classification with the naive Bayes classifier. The CDMLE technique is then extended to leaving-one-out maximum likelihood estimation and applied to language model smoothing; the results obtained on several natural language modelling tasks show an improvement in terms of perplexity. Concerning the loss function, we carefully study the design of loss functions other than the 0-1 loss. The study focuses on those loss functions that, while retaining a decoding complexity similar to that of the 0-1 loss, provide greater flexibility. We analyse and present several loss functions on several machine translation tasks and with several translation models. We also analyse some translation rules that stand out for practical reasons, such as the direct translation rule, and deepen the understanding of log-linear models, which are in fact particular cases of loss functions. Finally, several monotone translation models based on statistical modelling techniques are proposed.
21

Couto, Epaminondas de Vasconcellos. "Modelo de regressão log-gama generalizado exponenciado com dados censurados." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-16032010-112500/.

Abstract:
In the present study, we propose a regression model using the exponentiated generalized gamma (EGG) distribution for censored data; this new distribution is an extension of the generalized gamma distribution. The EGG distribution (Cordeiro et al., 2009), which has four parameters, can model survival data when the hazard function is increasing, decreasing, U-shaped or unimodal. This work presents a natural extension of the EGG distribution to censored data. The distribution is of interest because it represents a parametric family that includes, as particular cases, other distributions widely used in lifetime data analysis, such as the generalized gamma (Stacy, 1962), Weibull, exponentiated Weibull (Mudholkar et al., 1995, 1996), exponentiated exponential (Gupta and Kundu, 1999, 2001) and generalized Rayleigh (Kundu and Rakab, 2005) distributions, and it proves useful for discriminating among alternative probabilistic models. For censored data, the maximum likelihood method is considered for estimating the parameters of the proposed model. Another contribution of this work is a log-exponentiated generalized gamma regression model with random effects. Finally, three applications are presented to illustrate the proposed distribution.
22

Saaidia, Noureddine. "Sur les familles des lois de fonction de hasard unimodale : applications en fiabilité et analyse de survie." Thesis, Bordeaux 1, 2013. http://www.theses.fr/2013BOR14794/document.

Abstract:
In reliability and survival analysis, distributions that have a unimodal or ∩-shaped hazard rate function are not numerous; they include the inverse Gaussian, log-normal, log-logistic, Birnbaum-Saunders, exponentiated Weibull and power generalized Weibull distributions. In this thesis, we develop modified chi-squared tests for these distributions and give a comparative study between the inverse Gaussian distribution and the others, supported by simulations. We also construct an AFT model based on the inverse Gaussian distribution, as well as redundant systems based on distributions having a unimodal hazard rate function.
23

Jotta, César Augusto Degiato. "Análise de variância multivariada nas estimativas dos parâmetros do modelo log-logístico para susceptibilidade do capim-pé-de-galinha ao glyphosate." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-29112016-163511/.

Abstract:
The national agricultural scene has become increasingly competitive over the years; maintaining productivity growth at a low operating cost and with low environmental impact have been the three most important goals in the area. Productivity, in turn, is a function of several variables, and weed control is one of the variables to be considered. This work analyzes a dataset from an experiment conducted in the Plant Production Department of ESALQ-USP, Piracicaba, SP. Four goosegrass (capim-pé-de-galinha) biotypes from three Brazilian states were evaluated at three morphological stages, with 4 replicates per biotype. The response variable was dry mass (g), and the regressor was the glyphosate dose, at concentrations ranging from 1/16 D to 16 D plus an untreated control, where D is 480 grams of glyphosate acid equivalent per hectare (g a.e. ha-1) for the 2-3 tiller stage, 720 (g a.e. ha-1) for the 6-8 tiller stage and 960 for the 10-12 tiller stage. The primary objective was to evaluate whether, over the years, goosegrass populations have become resistant to glyphosate, aiming at the detection of resistant biotypes. The experiment was installed under a completely randomized design, carried out at three different stages. The data were analyzed with the non-linear log-logistic model proposed by Knezevic and Ritz (2007) as the univariate method, and the maximum likelihood method was used to test the equality of the parameter e. The model converged for almost all replicates, and no systematic behavior was observed that would explain the non-convergence of a particular replicate. In a second step, the estimates of the three model parameters were taken as dependent variables in a multivariate analysis of variance. Since the three parameters were jointly significant by the Pillai, Wilks, Roy and Hotelling-Lawley tests, Tukey's test was performed for the parameter e and compared with the first method. With the same significance level, this procedure was less able to identify differences between the parameter means of the grass varieties than the method proposed by Regazzi (2015).
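The log-logistic dose-response model in question relates response y to dose x through y = d / (1 + exp(b * (log x - log e))), where d is the upper limit, b the slope and e the dose giving a 50% response (ED50), the parameter whose equality across biotypes is being tested. A hedged sketch on simulated data (the doses and parameter values are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(x, b, d, e):
    return d / (1.0 + np.exp(b * (np.log(x) - np.log(e))))

rng = np.random.default_rng(5)
dose = np.repeat([30.0, 60, 120, 240, 480, 960, 1920], 4)   # g a.e. ha-1
mass = log_logistic(dose, 2.0, 25.0, 300.0) * rng.normal(1, 0.1, dose.size)

(b, d, e), cov = curve_fit(log_logistic, dose, mass,
                           p0=[1.0, mass.max(), 200.0])
# e is the fitted ED50; comparing it across biotypes is the resistance test.
```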
24

Vargas Paredero, David Eduardo. "Transmit and Receive Signal Processing for MIMO Terrestrial Broadcast Systems." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/66081.

Abstract:
Multiple-Input Multiple-Output (MIMO) technology in Digital Terrestrial Television (DTT) networks has the potential to increase the spectral efficiency and improve network coverage to cope with the competition for limited spectrum use (e.g., assignment of the digital dividend and spectrum demands of mobile broadband), the appearance of new high data rate services (e.g., ultra-high definition TV - UHDTV), and the ubiquity of the content (e.g., fixed, portable, and mobile). It is widely recognised that MIMO can provide multiple benefits such as additional receive power due to array gain, higher resilience against signal outages due to spatial diversity, and higher data rates due to the spatial multiplexing gain of the MIMO channel. These benefits can be achieved without additional transmit power or additional bandwidth, but normally come at the expense of a higher system complexity at the transmitter and receiver ends. The final system performance gains due to the use of MIMO directly depend on physical characteristics of the propagation environment such as spatial correlation, antenna orientation, and/or power imbalances experienced at the transmit aerials. Additionally, due to complexity constraints and finite-precision arithmetic at the receivers, it is crucial for the overall system performance to carefully design specific signal processing algorithms.

This dissertation focuses on transmit and receive signal processing for DTT systems using MIMO-BICM (Bit-Interleaved Coded Modulation) without a feedback channel from the receiver terminals to the transmitter. At the transmitter side, this thesis presents investigations on MIMO precoding in DTT systems to overcome system degradations due to different channel conditions. At the receiver side, the focus is on the design and evaluation of practical MIMO-BICM receivers based on quantized information and its impact on both the in-chip memory size and the system performance.

These investigations were carried out within the standardization processes of DVB-NGH (Digital Video Broadcasting - Next Generation Handheld), the handheld evolution of DVB-T2 (Terrestrial - Second Generation), and ATSC 3.0 (Advanced Television Systems Committee - Third Generation), which incorporate MIMO-BICM as a key technology to overcome the Shannon limit of single antenna communications. Nonetheless, this dissertation employs a generic approach in the design, analysis and evaluations, hence the results and ideas can be applied to other wireless broadcast communication systems using MIMO-BICM.
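One of the receiver-side themes, decoding from quantized LLRs under an in-chip memory budget, reduces at its simplest to a uniform saturating quantizer; the bit width and clipping level below are illustrative, not values from the standardization work.

```python
import numpy as np

def quantize_llr(llr, bits=4, clip=8.0):
    """Map real-valued LLRs onto 2**bits uniform levels over [-clip, clip]."""
    levels = 2 ** bits
    step = 2.0 * clip / (levels - 1)
    q = np.round(np.clip(llr, -clip, clip) / step)
    return q * step                    # memory cost: `bits` bits per stored LLR

print(quantize_llr(np.array([-11.3, -0.7, 0.2, 5.9])))
```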
Vargas Paredero, DE. (2016). Transmit and Receive Signal Processing for MIMO Terrestrial Broadcast Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/66081
THESIS
Award-winning
APA, Harvard, Vancouver, ISO, and other styles
25

Feng-Cheng Wu and 吳灃宸. "Unbiased Estimation of Numerical Derivative on Log-likelihood." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/459k47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Lee, Shu-Fen, and 李淑芬. "Behavior of Log-likelihood Ratio Statistics in Non-smooth Models." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/70132076600164983716.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Statistics
87
It is well known that twice a log-likelihood ratio statistic asymptotically follows a chi-square distribution. The result is usually understood and proved via Taylor expansions of likelihood functions and by assuming asymptotic normality of maximum likelihood estimators. We contend that more fundamental insight can be obtained for likelihood ratio statistics: the result holds as long as the likelihood contour sets are fan-shaped. The classical Wilks theorem corresponds to situations where the likelihood contour sets are ellipsoidal. This provides an insightful geometric understanding and a useful extension of likelihood ratio theory. As a result, even if the MLEs are not asymptotically normal, the likelihood ratio statistics can still be asymptotically gamma-distributed. Even in finite-sample situations, we can use gamma-type distributions to approximate the true distribution. Our technical arguments are simple and can easily be understood.
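For reference, the classical result discussed above is Wilks' theorem: under standard regularity conditions, with k parameters restricted by the null hypothesis,

```latex
% Classical Wilks theorem (elliptical likelihood contours),
% where \hat{\theta} and \hat{\theta}_0 are the unrestricted and
% null-restricted maximum likelihood estimators:
2\{\ell(\hat{\theta}) - \ell(\hat{\theta}_0)\} \xrightarrow{\;d\;} \chi^2_k
% Since chi^2_k equals the Gamma(k/2, scale 2) distribution, the gamma
% limit obtained for fan-shaped contours strictly generalizes this result.
```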
APA, Harvard, Vancouver, ISO, and other styles
27

Jhang, Jia-Hao, and 張家豪. "P-Value Approximation For The Log-Likelihood Ratio Statistic To Vector Autoregression." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/95296147643680130531.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Mathematics
98
We are interested in the probability that the maximal value of a stochastic process exceeds a value a; change-point detection is one example. A p-value approximation is obtained, for large a, for testing the null hypothesis that all observations on a multi-dimensional index set are independent standard normal against the alternative that they take a specific form on a particular subregion of the index set, which in this paper is assigned to a vector autoregressive model. The VAR model is a natural extension of the univariate autoregressive model when multiple time series are concerned. Many methods have been developed to approximate the tail probabilities of the distribution of the maximum under the null hypothesis. We use the method introduced by Yakir and Pollak to find a representation for the p-value approximation as a becomes large.
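For context, the vector autoregressive model of order p mentioned above takes the standard form (notation assumed here, not taken from the thesis):

```latex
% VAR(p) model for a d-dimensional series X_t, with d x d coefficient
% matrices A_1, ..., A_p and Gaussian innovations:
X_t = A_1 X_{t-1} + A_2 X_{t-2} + \cdots + A_p X_{t-p} + \varepsilon_t,
\qquad \varepsilon_t \sim \mathcal{N}_d(0, \Sigma)
```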
APA, Harvard, Vancouver, ISO, and other styles
28

Tarng, Chwu-Shiun. "Third order likelihood based inference for the log-normal and the Weibull models /." 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:NR19821.

Full text
Abstract:
Thesis (Ph.D.)--York University, 2006. Graduate Programme in Economics.
Typescript. Includes bibliographical references (leaves 137-141). Also available on the Internet via the URL above.
APA, Harvard, Vancouver, ISO, and other styles
29

Lin, Sheng-Wei, and 林聖偉. "Log-likelihood Ratio Based Power Allocation Scheme for Distributed Detection in Wireless Sensor Networks." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/54201216611481096257.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Institute of Communications Engineering
96
In this thesis, we propose power allocation schemes for distributed detection in wireless sensor networks, in which the amplifier gain is based on the log-likelihood ratio (LLR) of the instantaneous observation signal received by each sensor node. If the absolute value of the LLR is large, the instantaneous observation is more reliable and the transmission power should be large. On the other hand, if the absolute value of the LLR is small, the observation is less reliable and the transmission power should be small. Moreover, if the absolute value of the LLR is extremely small, the sensor node should not transmit to the fusion center at all, so as to save power; the idea of "censoring" is therefore included in the proposed power allocation schemes. We also propose a power allocation scheme for sensor nodes with constrained transmission power, since a sensor node cannot transmit with unlimited power. In addition, we consider a power allocation scheme with fusion weighting according to the different channel noise variances, which is expected to reduce the total transmission power. Finally, the transmission power of the proposed schemes is compared with that of the equal power allocation scheme, and we find that less total transmission power is required to achieve the same detection error probability.
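As a rough illustration of the censoring idea described above, here is a minimal sketch assuming a Gaussian shift-in-mean sensing model; the function names, threshold and gain rule are illustrative assumptions, not the thesis's exact scheme:

```python
import numpy as np

def llr_gaussian(x, mu0=0.0, mu1=1.0, sigma=1.0):
    """Log-likelihood ratio of observation x for H1 (mean mu1) vs H0 (mean mu0),
    assuming Gaussian sensing noise with standard deviation sigma."""
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)

def transmit_gain(x, censor_threshold=0.2, gain_slope=1.0, max_gain=4.0):
    """Illustrative censoring-based amplifier gain: stay silent when |LLR| is
    tiny (unreliable observation), otherwise scale power with |LLR|,
    capped to respect a per-node power constraint."""
    llr = llr_gaussian(x)
    if abs(llr) < censor_threshold:
        return 0.0  # censored: do not transmit, save power
    return min(gain_slope * abs(llr), max_gain)

# Example: reliable observations get more power, ambiguous ones are censored
for x in [0.5, 0.9, 2.0, -1.0]:
    print(f"x={x:+.1f}  LLR={llr_gaussian(x):+.2f}  gain={transmit_gain(x):.2f}")
```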
APA, Harvard, Vancouver, ISO, and other styles
30

Wu, Tung-lin, and 吳東霖. "A High Speed Antenna-Configurable Soft-output MIMO Detector Based on Log-likelihood Ratio Algorithm." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/86889802830083028894.

Full text
Abstract:
Master's thesis
National Central University
Department of Electrical Engineering
103
A high-speed antenna-configurable soft-output MIMO detector based on the log-likelihood ratio algorithm is proposed in this thesis. It supports three modulation methods (QPSK, 16QAM, 64QAM) and seven antenna modes, from 2×2 to 8×8. The soft-output MIMO detector can be divided into two blocks: a candidate list generator and a soft value generator. The candidate list generator provides multiple signal paths with high confidence, and the soft value generator produces the soft value output of each bit by referencing these signal paths. The candidate list generator used in this thesis is based on the traditional K-best algorithm and generates K high-confidence signal paths for the back-end soft value generator through several expansion and sorting steps. To recover the transmitted signal correctly, the soft value generator implements the log-likelihood ratio algorithm and generates the soft value output of each bit from the high-confidence signal paths provided by the front-end candidate list generator. The proposed hardware implementation uses a parallel architecture that not only provides nearly identical throughput across the different antenna modes but also achieves 100% hardware utilization.
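For context, the soft value (bit LLR) that such detectors compute from a K-best candidate list is usually the max-log approximation (standard form; the thesis's exact scaling may differ):

```latex
% Max-log LLR of bit b_k given received vector y, channel matrix H,
% noise variance sigma^2, and candidate sublists S_k^0, S_k^1
% (list entries whose k-th bit is 0 or 1, respectively):
L(b_k) \approx \frac{1}{\sigma^2}
  \left( \min_{s \in S_k^0} \lVert y - H s \rVert^2
       - \min_{s \in S_k^1} \lVert y - H s \rVert^2 \right)
```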
APA, Harvard, Vancouver, ISO, and other styles
31

Zavadilová, Barbora. "Logaritmicko-konkávní rozděleni pravděpodobnosti a jejich aplikace" [Log-concave probability distributions and their applications]. Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-335098.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Soleimani, Morteza, I. Felician Campean, and Daniel Neagu. "Integration of Hidden Markov Modelling and Bayesian Network for Fault Detection and Prediction of Complex Engineered Systems." 2021. http://hdl.handle.net/10454/18518.

Full text
Abstract:
This paper presents a methodology for fault detection, fault prediction and fault isolation based on the integration of hidden Markov modelling (HMM) and Bayesian networks (BN). This addresses the nonlinear and non-Gaussian data characteristics to support fault detection and prediction, within an explainable hybrid framework that captures causality in the complex engineered system. The proposed methodology is based on the analysis of the pattern of similarity in the log-likelihood (LL) sequences against the training data for the mixture-of-Gaussians HMM (MoG-HMM). The BN model identifies the root cause of detected/predicted faults, using the information propagated from the HMM model as empirical evidence. The feasibility and effectiveness of the presented approach are discussed in conjunction with the application to a real-world case study of an automotive exhaust gas aftertreatment system. The paper details the implementation of the methodology for this case study, with data available from real-world usage of the system. The results show that the proposed methodology identifies the fault faster and attributes the fault to the correct root cause. While the proposed methodology is illustrated with an automotive case study, it is applicable much more widely to the fault detection and prediction problem of any similar complex engineered system.
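A minimal sketch of the LL-monitoring idea (using Python's hmmlearn package as a stand-in for the paper's MoG-HMM; the synthetic data, window size and 3-sigma threshold are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from hmmlearn.hmm import GMMHMM  # mixture-of-Gaussians HMM

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(500, 3))  # stand-in healthy sensor data

# Train a MoG-HMM on healthy operation data
model = GMMHMM(n_components=3, n_mix=2, covariance_type="diag", random_state=0)
model.fit(healthy)

# Calibrate a fault threshold from per-window log-likelihoods on training data
window = 50
train_ll = np.array([model.score(healthy[i:i + window])
                     for i in range(0, len(healthy) - window, window)])
threshold = train_ll.mean() - 3 * train_ll.std()  # illustrative 3-sigma rule

def is_faulty(segment):
    """Flag a window whose LL under the healthy model drops below threshold;
    a BN would then take this evidence to isolate the root cause."""
    return model.score(segment) < threshold

drifted = healthy[:window] + 4.0  # simulate a shifted (faulty) signal
print(is_faulty(healthy[:window]), is_faulty(drifted))
```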
The full text will be available at the end of the publisher's embargo: 28th May 2023
APA, Harvard, Vancouver, ISO, and other styles
33

Sabeti, Avideh. "Bayesian Inference for Bivariate Conditional Copula Models with Continuous or Mixed Outcomes." Thesis, 2013. http://hdl.handle.net/1807/35948.

Full text
Abstract:
The main goal of this thesis is to develop a Bayesian model for studying the influence of covariates on the dependence between random variables. Conditional copula models are flexible tools for modelling complex dependence structures. We construct Bayesian inference for the conditional copula model adapted to regression settings in which the bivariate outcome is continuous or mixed (binary and continuous) and the copula parameter varies with covariate values. The functional relationship between the copula parameter and the covariate is modelled using cubic splines. We also extend our work to additive models, which allow us to handle more than one covariate while keeping the computational burden within reasonable limits. We perform the proposed joint Bayesian inference via adaptive Markov chain Monte Carlo sampling. The deviance information criterion and the cross-validated marginal log-likelihood criterion are employed for three model selection problems: 1) choosing the copula family that best fits the data, 2) selecting the calibration function, i.e., checking whether a parametric form for the copula parameter is suitable, and 3) determining the number of independent variables in the additive model. The performance of the estimation and model selection techniques is investigated via simulations and demonstrated on two data sets: 1) Matched Multiple Birth and 2) Burn Injury. In the former, interest lies in the influence of gestational age and maternal age on twin birth weights, whereas in the latter we investigate how a patient's age affects the severity of burn injury and the probability of death.
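For context, the conditional copula construction used in such models can be written in the standard Sklar-type form (notation assumed here; theta(x) is the covariate-dependent calibration function, modelled in the thesis by cubic splines):

```latex
% Conditional bivariate density: a copula density c with covariate-
% dependent parameter theta(x) ties together the conditional margins:
f(y_1, y_2 \mid x) =
  c\bigl( F_1(y_1 \mid x),\, F_2(y_2 \mid x) \bigm| \theta(x) \bigr)
  \, f_1(y_1 \mid x) \, f_2(y_2 \mid x)
```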
APA, Harvard, Vancouver, ISO, and other styles
34

Dufek, Ondřej. "Kritická analýza jazykových ideologií v českém veřejném diskurzu" [A critical analysis of language ideologies in Czech public discourse]. Doctoral thesis, 2018. http://www.nusl.cz/ntk/nusl-383154.

Full text
Abstract:
The thesis deals with language ideologies in Czech public discourse. After introducing its topic, motivation and structure in the opening chapter, the second chapter is devoted to a thorough analysis of the research field of language ideologies. It presents various ways of defining them, two different approaches to them, and a few key features that characterize language ideologies. The relation of language ideologies to other related notions is outlined, and possibilities and ways of investigation are surveyed. Some remarks focus on existing lists or glossaries of language ideologies. The core of this chapter is an original, complex definition of language ideologies grounded in a critical reflection of approaches to date. The third chapter summarizes relevant existing findings and, on that basis, formulates the main aim of the thesis: to contribute to knowledge of the foundations and ways of conceptualizing language in Czech public discourse. The fourth chapter elaborates the methodological frame of the thesis. Critical discourse analysis is chosen as the basis: its fundamentals are summarized, the main critical comments are considered, and partial solutions are proposed through the use of corpus linguistics tools. Another part of this chapter concerns keyness as one of the dominant principles used...
APA, Harvard, Vancouver, ISO, and other styles
