To see the other types of publications on this topic, follow the link: Fisher information matrix (FIM).

Dissertations / Theses on the topic 'Fisher information matrix (FIM)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 20 dissertations / theses for your research on the topic 'Fisher information matrix (FIM).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Roy, Prateep Kumar. "Analysis & design of control for distributed embedded systems under communication constraints." PhD thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00534012.

Full text
Abstract:
Distributed Embedded Control Systems (DECS) use communication networks in their feedback loops. Since DECS have limited battery power, communication bandwidth and computational capacity, the rates of the transmitted data or information are bounded and can affect their stability. This leads us to widen the scope of our study and to include an investigation of the relationship between control theory on one side and information theory on the other. The data-rate constraint induces quantization of the signals, while the real-time computation and communication aspects induce asynchronous events that are no longer regular or periodic. These two phenomena give DECS a dual nature, continuous and discrete, and make them a specific object of study. In this thesis, we analyze the stability and performance of DECS from the point of view of information and control theory. For linear systems, we show the importance of the trade-off between the quantity of information communicated and the control objectives, such as stability, controllability/observability and performance. A joint design approach for control and communication (in terms of information rate in the Shannon sense) of DECS is studied. The main results of this work are the following: we proved that the entropy reduction (which corresponds to the reduction of uncertainty) depends on the controllability Gramian. This reduction is also related to Shannon's mutual information. We showed that the controllability Gramian constitutes an information-theoretic entropy metric with respect to the noises induced by quantization. Reducing the influence of these noises is equivalent to reducing the norm of the controllability Gramian. We established a new relation between the Fisher Information Matrix (FIM) and the Controllability Gramian (CG), based on estimation theory and information theory. We propose an algorithm that optimally distributes the communication capacities of the network among a number "n" of competing actuators and/or subsystems, based on the reduction of the norm of the Controllability Gramian.
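For orientation, the two central objects the abstract connects are standard; a sketch of the usual definitions (notation assumed here, not taken from the thesis):

```latex
% Controllability Gramian of \dot{x} = Ax + Bu over a horizon T
W_c(T) = \int_0^T e^{At} B B^{\top} e^{A^{\top} t}\, dt
% Fisher information matrix of a likelihood p(y;\theta), and the Cramér-Rao bound
\mathcal{I}(\theta) = \mathbb{E}\!\left[ \nabla_{\theta} \log p(y;\theta)\, \nabla_{\theta} \log p(y;\theta)^{\top} \right],
\qquad \operatorname{Cov}(\hat{\theta}) \succeq \mathcal{I}(\theta)^{-1}
```

The thesis's claim that reducing the norm of the controllability Gramian reduces the influence of quantization noise is what ties these two quantities together.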
APA, Harvard, Vancouver, ISO, and other styles
2

Roy, Prateep Kumar. "Analyse et conception de la commande des systèmes embarqués distribués sous des contraintes de communication." PhD thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00532883.

Full text
Abstract:
Distributed Embedded Control Systems (DECS) use communication networks in their feedback loops. Since DECS have limited battery power, communication bandwidth and computational capacity, the rates of the transmitted data or information are bounded and can affect their stability. This leads us to widen the scope of our study and to include an investigation of the relationship between control theory on one side and information theory on the other. The data-rate constraint induces quantization of the signals, while the real-time computation and communication aspects induce asynchronous events that are no longer regular or periodic. These two phenomena give DECS a dual nature, continuous and discrete, and make them a specific object of study. In this thesis, we analyze the stability and performance of DECS from the point of view of information and control theory. For linear systems, we show the importance of the trade-off between the quantity of information communicated and the control objectives, such as stability, controllability/observability and performance. A joint design approach for control and communication (in terms of information rate in the Shannon sense) of DECS is studied. The main results of this work are the following: we proved that the entropy reduction (which corresponds to the reduction of uncertainty) depends on the controllability Gramian. This reduction is also related to Shannon's mutual information. We showed that the controllability Gramian constitutes an information-theoretic entropy metric with respect to the noises induced by quantization. Reducing the influence of these noises is equivalent to reducing the norm of the controllability Gramian. We established a new relation between the Fisher Information Matrix (FIM) and the Controllability Gramian (CG), based on estimation theory and information theory. We propose an algorithm that optimally distributes the communication capacities of the network among a number "n" of competing actuators and/or subsystems, based on the reduction of the norm of the Controllability Gramian.
APA, Harvard, Vancouver, ISO, and other styles
3

Pazman, Andrej. "Correlated optimum design with parametrized covariance function. Justification of the Fisher information matrix and of the method of virtual noise." Institut für Statistik und Mathematik, WU Vienna University of Economics and Business, 2004. http://epub.wu.ac.at/562/1/document.pdf.

Full text
Abstract:
We consider observations of a random field (or a random process), which is modeled by a nonlinear regression with a parametrized mean (or trend) and a parametrized covariance function. In the first part we show that, under the assumption that the errors are normal with small variances, even when the number of observations is small, the ML estimators of both parameters are approximately unbiased and uncorrelated, with variances given by the inverse of the Fisher information matrix. In the second part we extend the result of Pazman & Müller (2001) to the case of a parametrized covariance function; namely, we prove that the optimum designs with and without the presence of the virtual noise are identical. This in principle justifies the use of the method of virtual noise as a computational device in this case as well. (authors' abstract)
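For reference, the Fisher information matrix underlying these results has a closed form for normal observations with a parametrized mean and covariance (a standard identity, notation assumed): for y ~ N(μ(θ), Σ(θ)),

```latex
\mathcal{I}_{ij}(\theta)
  = \frac{\partial \mu^{\top}}{\partial \theta_i} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_j}
  + \frac{1}{2}\operatorname{tr}\!\left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_i}
                                         \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_j} \right)
```

with the mean term dominating in the small-variance regime the report considers.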
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
4

Strömberg, Eric. "Faster Optimal Design Calculations for Practical Applications." Thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-150802.

Full text
Abstract:
PopED is a software package developed by the Pharmacometrics Research Group at the Department of Pharmaceutical Biosciences, Uppsala University, written mainly in MATLAB. It uses pharmacometric population models to describe the pharmacokinetics and pharmacodynamics of a drug and then estimates an optimal design of a trial for that drug. With optimization calculations on average taking a very long time, it was desirable to increase the calculation speed of the software by parallelizing the serial calculation script. The goal of this project was to investigate different methods of parallelization and implement the method which seemed best for the circumstances. The parallelization was implemented in C/C++ using Open MPI and tested on the UPPMAX Kalkyl High-Performance Computation Cluster. Some alterations were made in the original MATLAB script to adapt PopED to the new parallel code. The methods which were parallelized included the Random Search and the Line Search algorithms. The testing showed a significant performance increase, with effectiveness per active core ranging from 55% to 89% depending on model and number of evaluated designs.
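The thesis parallelized PopED's search in C/C++ with Open MPI; purely as an illustration of the pattern (a sketch, not the thesis code), here is a minimal master-worker version in Python with mpi4py, where `evaluate_design` is a hypothetical stand-in for the FIM-based objective:

```python
# Minimal sketch of embarrassingly parallel design evaluation with MPI.
# Assumes mpi4py is installed and at least one candidate per rank;
# evaluate_design() is a hypothetical placeholder for a FIM-based objective.
import numpy as np
from mpi4py import MPI

def evaluate_design(design):
    # Placeholder objective; a real one would compute, e.g., log det FIM.
    return float(np.sum(design ** 2))

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    rng = np.random.default_rng(0)
    # Random Search: each candidate is a vector of four sampling times (h).
    candidates = [rng.uniform(0.0, 24.0, size=4) for _ in range(100)]
    chunks = [candidates[i::size] for i in range(size)]
else:
    chunks = None

local = comm.scatter(chunks, root=0)            # hand each rank its share
local_best = max(local, key=evaluate_design)    # evaluate in parallel
gathered = comm.gather(local_best, root=0)      # collect per-rank winners

if rank == 0:
    best = max(gathered, key=evaluate_design)
    print("best design:", best, "score:", evaluate_design(best))
```

Run with, e.g., `mpiexec -n 4 python sketch.py`; Random Search is embarrassingly parallel, which is consistent with the per-core efficiencies reported above.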
APA, Harvard, Vancouver, ISO, and other styles
5

Panas, Dagmara. "Model-based analysis of stability in networks of neurons." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28883.

Full text
Abstract:
Neurons, the building blocks of the brain, are an astonishingly capable type of cell. Collectively they can store, manipulate and retrieve biologically important information, allowing animals to learn and adapt to environmental changes. This universal adaptability is widely believed to be due to plasticity: the readiness of neurons to manipulate and adjust their intrinsic properties and strengths of connections to other cells. It is through such modifications that associations between neurons can be made, giving rise to memory representations; for example, linking a neuron responding to the smell of pancakes with neurons encoding sweet taste and general gustatory pleasure. However, this malleability inherent to neuronal cells poses a dilemma from the point of view of stability: how is the brain able to maintain stable operation while in the state of constant flux? First of all, won’t there occur purely technical problems akin to short-circuiting or runaway activity? And second of all, if the neurons are so easily plastic and changeable, how can they provide a reliable description of the environment? Of course, evidence abounds to testify to the robustness of brains, both from everyday experience and scientific experiments. How does this robustness come about? Firstly, many control feedback mechanisms are in place to ensure that neurons do not enter wild regimes of behaviour. These mechanisms are collectively known as homeostatic plasticity, since they ensure functional homeostasis through plastic changes. One well-known example is synaptic scaling, a type of plasticity ensuring that a single neuron does not get overexcited by its inputs: whenever learning occurs and connections between cells get strengthened, subsequently all the neurons’ inputs get downscaled to maintain a stable level of net incoming signals. And secondly, as hinted by other researchers and directly explored in this work, networks of neurons exhibit a property present in many complex systems called sloppiness. That is, they produce very similar behaviour under a wide range of parameters. This principle appears to operate on many scales and is highly useful (perhaps even unavoidable), as it permits for variation between individuals and for robustness to mutations and developmental perturbations: since there are many combinations of parameters resulting in similar operational behaviour, a disturbance of a single, or even several, parameters does not need to lead to dysfunction. It is also that same property that permits networks of neurons to flexibly reorganize and learn without becoming unstable. As an illustrative example, consider encountering maple syrup for the first time and associating it with pancakes; thanks to sloppiness, this new link can be added without causing the network to fire excessively. As has been found in previous experimental studies, consistent multi-neuron activity patterns arise across organisms, despite the interindividual differences in firing profiles of single cells and precise values of connection strengths. Such activity patterns, as has been furthermore shown, can be maintained despite pharmacological perturbation, as neurons compensate for the perturbed parameters by adjusting others; however, not all pharmacological perturbations can be thus amended. 
In the present work, it is for the first time directly demonstrated that groups of neurons are as a rule sloppy; their collective parameter space is mapped to reveal which are the sensitive and insensitive parameter combinations; and it is shown that the majority of spontaneous fluctuations over time primarily affect the insensitive parameters. In order to demonstrate the above, hippocampal neurons of the rat were grown in culture over multi-electrode arrays and recorded from for several days. Subsequently, statistical models were fit to the activity patterns of groups of neurons to obtain a mathematically tractable description of their collective behaviour at each time point. These models provide robust fits to the data and allow for a principled sensitivity analysis with the use of information-theoretic tools. This analysis has revealed that groups of neurons tend to be governed by a few leader units. Furthermore, it appears that it was the stability of these key neurons and their connections that ensured the stability of collective firing patterns across time. The remaining units, in turn, were free to undergo plastic changes without risking destabilizing the collective behaviour. Together with what has been observed by other researchers, the findings of the present work suggest that the impressively adaptable yet robust functioning of the brain is made possible by the interplay of feedback control of a few crucial properties of neurons and the general sloppy design of networks. It has, in fact, been hypothesised that any complex system subject to evolution is bound to rely on such design: in order to cope with natural selection under changing environmental circumstances, it would be difficult for a system to rely on tightly controlled parameters. It might be, therefore, that all life is just, by nature, sloppy.
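The sloppiness and sensitivity analysis described here is commonly made precise through the eigenspectrum of the Fisher information matrix of the fitted model; a sketch of the standard formulation (notation assumed):

```latex
\mathcal{I}(\theta)
  = \mathbb{E}\!\left[ \nabla_{\theta} \log p(x;\theta)\, \nabla_{\theta} \log p(x;\theta)^{\top} \right]
  = \sum_k \lambda_k v_k v_k^{\top}, \qquad \lambda_1 \ge \lambda_2 \ge \dots
```

Eigenvalues spanning many orders of magnitude are the signature of a sloppy model: directions v_k with large λ_k are the sensitive ("stiff") parameter combinations, while fluctuations along the small-λ_k directions barely change the collective behaviour.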
APA, Harvard, Vancouver, ISO, and other styles
6

Perez-Ramirez, Javier. "An Opportunistic Relaying Scheme for Optimal Communications and Source Localization." International Foundation for Telemetering, 2012. http://hdl.handle.net/10150/581448.

Full text
Abstract:
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California
The selection of relay nodes (RNs) for optimal communication and source location estimation is studied. The RNs are randomly placed at fixed and known locations over a geographical area. A mobile source senses and collects data at various locations over the area and transmits the data to a destination node with the help of the RNs. The destination node not only needs to collect the sensed data but also the location of the source where the data is collected. Hence, both high quality data collection and the correct location of the source are needed. Using the measured distances between the relays and the source, the destination estimates the location of the source. The selected RNs must be optimal for joint communication and source location estimation. We show in this paper how this joint optimization can be achieved. For practical decentralized selection, an opportunistic RN selection algorithm is used. Bit error rate performance as well as mean squared error in location estimation are presented and compared to the optimal relay selection results.
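For the range-based localization used here, the Fisher information matrix that an optimal relay subset maximizes has a well-known form for Gaussian range errors (a standard result, with notation assumed): with source position s, relay positions r_i and measured distances d_i = ||s - r_i|| + n_i, n_i ~ N(0, σ²),

```latex
\mathcal{I}(s) = \frac{1}{\sigma^{2}} \sum_{i=1}^{N} u_i u_i^{\top},
\qquad u_i = \frac{s - r_i}{\lVert s - r_i \rVert}
```

and the localization MSE is bounded below by tr I(s)⁻¹ (the Cramér-Rao bound), so good selections spread the bearing vectors u_i rather than cluster them.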
APA, Harvard, Vancouver, ISO, and other styles
7

Perez-Ramirez, Javier. "Relay Selection for Multiple Source Communications and Localization." International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579585.

Full text
Abstract:
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV
Relay selection for optimal communication as well as multiple source localization is studied. We consider the use of dual-role nodes that can work both as relays and also as anchors. The dual-role nodes and multiple sources are placed at fixed locations in a two-dimensional space. Each dual-role node estimates its distance to all the sources within its radius of action. Dual-role selection is then obtained considering all the measured distances and the total SNR of all sources-to-destination channels for optimal communication and multiple source localization. Bit error rate performance as well as mean squared error of the proposed optimal dual-role node selection scheme are presented.
APA, Harvard, Vancouver, ISO, and other styles
8

Maltauro, Tamara Cantú. "Algoritmo genético aplicado à determinação da melhor configuração e do menor tamanho amostral na análise da variabilidade espacial de atributos químicos do solo." Universidade Estadual do Oeste do Paraná, 2018. http://tede.unioeste.br/handle/tede/3920.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
It is essential to determine a sampling design of a size that minimizes operating costs and maximizes the quality of results in any trial that involves the study of the spatial variability of soil chemical attributes. Thus, this study aimed at resizing a sample configuration with the smallest possible number of points for a commercial area composed of 102 points, using the information on the spatial variability of soil chemical attributes in the optimization process. Initially, Monte Carlo simulations were carried out, assuming stationary Gaussian isotropic variables, an exponential model for the semi-variance function, and three initial sampling configurations: systematic, simple random, and lattice plus close pairs. A genetic algorithm (GA) was applied both to the simulated data and to the soil chemical attributes in order to resize the optimized sample, considering two objective functions. These are based on the efficiency of spatial prediction and of geostatistical model estimation, respectively: maximization of the global accuracy measure and minimization of functions based on the Fisher information matrix. The simulated data showed that, for both objective functions, when the nugget effect and the range varied, the samplings usually had the lowest values of the objective function when the nugget effect was 0 and the practical range was 0.9. The increase in the practical range produced a slight reduction in the number of optimized sampling points in most cases. Regarding the soil chemical attributes, the GA was efficient in reducing the sample size with both objective functions. For the maximization of global accuracy, the sample size varied from 30 to 35 points, corresponding to 29.41% to 34.31% of the initial mesh, with a minimum spatial-prediction similarity to the original configuration equal to or greater than 85%. This is reflected in the optimization process, which produced similar maps for the original and optimized sample configurations. The optimized sample size varied from 30 to 40 points when minimizing the function based on the Fisher information matrix, corresponding to 29.41% and 39.22% of the original mesh, respectively; in this case, however, there was no similarity between the maps constructed with the initial and the optimized sample configurations. For both objective functions, the soil chemical attributes showed moderate spatial dependence for the original sample configuration, and most attributes showed moderate or strong spatial dependence for the optimized configuration. Thus, the optimization process was efficient when applied both to the simulated data and to the soil chemical attributes.
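As an illustration of the approach (a sketch, not the author's code), a genetic algorithm for this kind of subset selection can be written as follows; the sizes and `fim_criterion` are illustrative assumptions, the latter standing in for the objective built on the Fisher information matrix of the geostatistical model:

```python
# Sketch of a GA shrinking a 102-point sampling grid to a 35-point subset.
import numpy as np

rng = np.random.default_rng(1)
N_POINTS, SUBSET, POP, GENS = 102, 35, 40, 200

def fim_criterion(subset):
    # Hypothetical placeholder: in the study this would be a function of the
    # Fisher information matrix of the model restricted to these points.
    return float(np.var(subset))

def random_individual():
    return rng.choice(N_POINTS, size=SUBSET, replace=False)

def crossover(a, b):
    # Child draws its points from the union of the parents' points.
    pool = np.union1d(a, b)
    return rng.choice(pool, size=SUBSET, replace=False)

def mutate(ind, rate=0.05):
    # Occasionally swap a selected point for one currently unused.
    ind = ind.copy()
    for i in range(SUBSET):
        if rng.random() < rate:
            unused = np.setdiff1d(np.arange(N_POINTS), ind)
            ind[i] = rng.choice(unused)
    return ind

population = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fim_criterion)        # lower criterion = fitter
    elite = population[: POP // 2]            # keep the best half
    offspring = []
    while len(offspring) < POP - len(elite):
        i, j = rng.choice(len(elite), size=2, replace=False)
        offspring.append(mutate(crossover(elite[i], elite[j])))
    population = elite + offspring

best = min(population, key=fim_criterion)
print("selected points:", np.sort(best))
```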
APA, Harvard, Vancouver, ISO, and other styles
9

Achanta, Hema Kumari. "Optimal sensing matrices." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1421.

Full text
Abstract:
Location information is of extreme importance in every walk of life, ranging from commercial applications such as location-based advertising and location-aware next-generation communication networks such as the 5G networks, to security-based applications like threat localization and E-911 calling. In indoor and dense urban environments plagued by multipath effects, there is usually a Non Line of Sight (NLOS) scenario preventing GPS-based localization. Wireless localization using sensor networks provides a cost-effective and accurate solution to the wireless source localization problem. Certain sensor geometries show significantly poor performance even in low-noise scenarios when triangulation-based localization methods are used. This creates the need for the design of an optimum sensor placement scheme for better performance in the source localization process. The optimum sensor placement is the one that optimizes the underlying Fisher Information Matrix (FIM). This thesis presents a class of canonical optimum sensor placements that produce the optimum FIM for N-dimensional source localization (N greater than or equal to 2) in the case where the source location has a radially symmetric probability density function within an N-dimensional sphere and the sensors are all on or outside the surface of a concentric outer N-dimensional sphere. While the canonical solution designed for the 2D problem represents optimum spherical codes, the study of the 3- or higher-dimensional design provides great insight into the design of measurement matrices with equal-norm columns that have the smallest possible condition number. Such matrices are of importance in compressed-sensing-based applications. This thesis also presents an optimum sensing matrix design for energy-efficient source localization in 2D. Specifically, the results relate to the worst-case scenario, when the minimum number of sensors are active in the sensor network. We also propose a distributed control law that guides the motion of the sensors on the circumference of the outer circle so that they achieve the optimum sensor placement with minimum communication overhead. The design of equal-norm-column sensing matrices has a variety of other applications apart from optimum sensor placement for N-dimensional source localization. One such application is Fourier analysis in Magnetic Resonance Imaging (MRI). Depending on the method used to acquire the MR image, one can choose an appropriate transform domain that transforms the MR image into a sparse image that is compressible. Such transform domains include the Wavelet Transform and the Fourier Transform. The inherent sparsity of MR images in an appropriately chosen transform domain motivates one of the objectives of this thesis, which is to provide a method for designing a compressive sensing measurement matrix by choosing a subset of rows from the Discrete Fourier Transform (DFT) matrix. This thesis uses the spark of the matrix as the design criterion. The spark of a matrix is defined as the smallest number of linearly dependent columns of the matrix. The objective is to select a subset of rows from the DFT matrix in order to achieve maximum spark. The design procedure leads to an interesting study of coprimality conditions between the chosen row indices and the size of the DFT matrix.
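The design criterion named at the end has a compact definition, and it comes with a standard uniqueness guarantee from compressed sensing (general facts, not specific to this thesis):

```latex
\operatorname{spark}(A) = \min \{ \lVert x \rVert_0 : A x = 0,\ x \neq 0 \}
```

and any k-sparse vector x is the unique sparsest solution of Ax = b whenever k < spark(A)/2, which is why row subsets of the DFT matrix maximizing the spark make attractive measurement matrices.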
APA, Harvard, Vancouver, ISO, and other styles
10

Bastian, Michael R. "Neural Networks and the Natural Gradient." DigitalCommons@USU, 2010. https://digitalcommons.usu.edu/etd/539.

Full text
Abstract:
Neural network training algorithms have always suffered from the problem of local minima. The advent of natural gradient algorithms promised to overcome this shortcoming by finding better local minima. However, they require additional training parameters and computational overhead. By using a new formulation for the natural gradient, an algorithm is described that uses less memory and processing time than previous algorithms with comparable performance.
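The natural gradient referred to here preconditions the ordinary gradient with the inverse Fisher information matrix (Amari's standard formulation, with notation assumed):

```latex
\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla_{\theta} L(\theta_t),
\qquad F(\theta) = \mathbb{E}\!\left[ \nabla_{\theta} \log p(x;\theta)\, \nabla_{\theta} \log p(x;\theta)^{\top} \right]
```

This makes the update invariant to smooth reparameterizations of the weights; the cost of forming and inverting F is the computational overhead the abstract's new formulation aims to reduce.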
APA, Harvard, Vancouver, ISO, and other styles
11

Strömberg, Eric. "Applied Adaptive Optimal Design and Novel Optimization Algorithms for Practical Use." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-308452.

Full text
Abstract:
The costs of developing new pharmaceuticals have increased dramatically during the past decades. Contributing to these increased expenses are the increasingly extensive and more complex clinical trials required to generate sufficient evidence regarding the safety and efficacy of the drugs. It is therefore of great importance to improve the effectiveness of the clinical phases by increasing the information gained throughout the process, so that the correct decision may be made as early as possible. Optimal Design (OD) methodology using the Fisher Information Matrix (FIM) based on Nonlinear Mixed Effect Models (NLMEM) has been proven to serve as a useful tool for making more informed decisions throughout the clinical investigation. The calculation of the FIM for NLMEM does, however, lack an analytic solution and is commonly approximated by linearization of the NLMEM. Furthermore, two structural assumptions for the FIM are available: a full FIM, and a block-diagonal FIM which assumes that the fixed effects are independent of the random effects in the NLMEM. Once the FIM has been derived, it can be transformed into a scalar optimality criterion for comparing designs. The optimality criterion may be considered local, if the criterion is based on single point values of the parameters, or global (robust), where the criterion is formed for a prior distribution of the parameters. Regardless of design criterion, FIM approximation or structural assumption, the design will be based on the prior information regarding the model and parameters, and is thus sensitive to misspecification in the design stage. Model-based adaptive optimal design (MBAOD) has, however, been shown to be less sensitive to misspecification in the design stage. The aim of this thesis is to further the understanding and practicality of standard and MBAOD. This is achieved by: (i) investigating how two common FIM approximations and the structural assumptions may affect the optimized design, (ii) reducing the runtimes of complex design optimizations by implementing a low-level parallelization of the FIM calculation, (iii) further developing and demonstrating a framework for performing MBAOD, and (iv) investigating the potential advantages of using a global optimality criterion in the already robust MBAOD.
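As a sketch of the objects discussed (standard forms, not the thesis's exact notation): a local D-optimal design maximizes the determinant of the FIM, a robust (ED-type) criterion averages it over a prior, and the block-diagonal assumption splits fixed-effect and random-effect parameters:

```latex
\xi^{*} = \arg\max_{\xi} \det \mathcal{I}(\Theta; \xi),
\qquad \xi^{*}_{\mathrm{ED}} = \arg\max_{\xi}\ \mathbb{E}_{\Theta \sim \pi}\!\left[ \det \mathcal{I}(\Theta; \xi) \right],
\qquad \mathcal{I}_{\mathrm{block}} = \begin{pmatrix} \mathcal{I}_{\theta\theta} & 0 \\ 0 & \mathcal{I}_{\omega\omega} \end{pmatrix}
```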
APA, Harvard, Vancouver, ISO, and other styles
12

Florez, Guillermo Domingo Martinez. "Extensões do modelo α-potência." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-07072011-154259/.

Full text
Abstract:
In data analysis where the data present a certain degree of asymmetry, the assumption of normality can be unrealistic, and the application of this model can hide important characteristics of the true model. Situations of this type have given strength to the use of asymmetric models, with special emphasis on the skew-symmetric distribution developed by Azzalini (1985). In this work we present an alternative for data analysis in the presence of significant asymmetry or kurtosis, when compared with the normal distribution, as well as other situations that involve such a model. We present and study the properties of the alpha-power and log-alpha-power distributions, where we also study the estimation problem, the observed and expected information matrices, and the degree of bias in estimation using simulation procedures. A flexible model version is proposed for the alpha-power distribution, followed by an extension to a bimodal version. Next follows an extension of the Birnbaum-Saunders distribution using the alpha-power distribution, where some properties are studied and estimation approaches as well as bias-corrected estimators are developed. We also develop censored and uncensored regression for the alpha-power model and for the log-linear Birnbaum-Saunders regression models, for which model validation techniques are studied. Finally, a multivariate extension of the alpha-power model is proposed and some estimation procedures are investigated for the model. All the situations investigated are illustrated with applications to data sets previously analysed with other distributions.
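For reference, the alpha-power family studied in the thesis is usually written as follows (standard form; φ and Φ denote the standard normal pdf and cdf):

```latex
f(z;\alpha) = \alpha\, \phi(z)\, [\Phi(z)]^{\alpha - 1}, \qquad \alpha > 0
```

which reduces to the normal density at α = 1 and accommodates both asymmetry and non-normal kurtosis.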
APA, Harvard, Vancouver, ISO, and other styles
13

Li, Zhonggai. "Objective Bayesian Analysis of Kullback-Liebler Divergence of two Multivariate Normal Distributions with Common Covariance Matrix and Star-shape Gaussian Graphical Model." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28121.

Full text
Abstract:
This dissertation consists of four independent but related parts, each in a chapter. The first part is introductory; it serves as background and offers preparation for the later parts. The second part discusses two multivariate normal populations with a common covariance matrix. The goal of this part is to derive objective/non-informative priors for the parameterizations and use these priors to build constructive random posteriors of the Kullback-Liebler (KL) divergence of the two multivariate normal populations, which is proportional to the distance between the two means, weighted by the common precision matrix. We use the Cholesky decomposition for re-parameterization of the precision matrix. The KL divergence is a true distance measurement for divergence between the two multivariate normal populations with common covariance matrix. Frequentist properties of the Bayesian procedure using these objective priors are studied through analytical and numerical tools. The third part considers the star-shape Gaussian graphical model, which is a special case of undirected Gaussian graphical models. It is a multivariate normal distribution where the variables are grouped into one "global" set of variables and several "local" sets of variables. When conditioned on the global variable set, the local variable sets are independent of each other. We adopt the Cholesky decomposition for re-parameterization of the precision matrix and derive Jeffreys' prior, reference priors, and invariant priors for the new parameterizations. The frequentist properties of the Bayesian procedure using these objective priors are also studied. The last part concentrates on objective Bayesian analysis for the partial correlation coefficient and its application to multivariate Gaussian models.
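For two multivariate normal populations with a common covariance matrix, the KL divergence indeed reduces to the weighted distance between the means mentioned above (a standard identity):

```latex
\mathrm{KL}\!\left( N(\mu_1, \Sigma)\, \Vert\, N(\mu_2, \Sigma) \right)
  = \tfrac{1}{2} (\mu_1 - \mu_2)^{\top} \Sigma^{-1} (\mu_1 - \mu_2)
```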
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
14

Votavová, Helena. "Statistická analýza výběrů ze zobecněného exponenciálního rozdělení." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231401.

Full text
Abstract:
This master's thesis deals with the generalized exponential distribution as an alternative to the Weibull and log-normal distributions. The basic characteristics of this distribution and methods of parameter estimation are described. A separate chapter is devoted to goodness-of-fit tests. The second part of the thesis deals with censored samples. Illustrative examples are given for the exponential distribution. Further, the case of type I left censoring, which had not yet been published, is studied. For this special case, simulations are carried out with a detailed description of its properties and behaviour. The EM algorithm is then derived for this distribution and its efficiency is compared with the maximum likelihood method. The developed theory is applied to the analysis of environmental data.
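For reference, the generalized exponential distribution studied here (in the Gupta-Kundu form, which is assumed) has distribution function

```latex
F(x; \alpha, \lambda) = \left( 1 - e^{-\lambda x} \right)^{\alpha},
\qquad x > 0,\ \alpha, \lambda > 0
```

which reduces to the exponential at α = 1 and competes with the Weibull and log-normal families as a lifetime model.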
APA, Harvard, Vancouver, ISO, and other styles
15

Veloso, Ana C. A. "Optimização de estratégias de alimentação para identificação de parâmetros de um modelo de E. coli. utilização do modelo em monitorização e controlo." Doctoral thesis, Universidade do Minho, 2007. http://hdl.handle.net/10198/1049.

Full text
Abstract:
The main objectives of this thesis are: the optimal experiment design for yield coefficient estimation in an unstructured growth model for Escherichia coli fed-batch fermentation; the experimental validation of the simulated feed trajectories; the development of advanced monitoring strategies for the on-line estimation of state variables and kinetic parameters; and, finally, the development of an adaptive control law, based on optimal substrate feed strategies, in order to increase growth and/or production. Methodologies for optimal experiment design are presented, in order to optimise the richness of the data coming out of experiments, quantified by indexes based on the Fisher Information Matrix. Although the model used to describe the E. coli fed-batch fermentation is not yet optimised from the kinetic point of view, and some difficulties were encountered in the practical implementation of the simulated results obtained with the optimal experiment design, the quality of the parameter estimates, especially for the oxidative regimen, is promising. The estimation uncertainty was evaluated by means of indexes related to the multiple linear regression model, indexes related to the Fisher matrix, and the construction of the corresponding deviation ellipses. The deviations associated with each coefficient show that the best values have not yet been found. The role of the general dynamical model was also investigated in what concerns the design of state observers, also called software sensors. The performance of three observer classes was compared: the extended Kalman observer, the asymptotic observer and the interval observer. The studied observers showed good performance and robustness, being complementary to each other. Asymptotic observers showed, in general, a better performance than the extended Kalman observer. Interval observers presented advantages concerning practical implementation, showing promising behaviour, although experimental validation is needed. A model reference adaptive control law is presented, which can be interpreted as a PI-like feedforward/feedback controller for specific growth rate control. The robustness of the algorithm was studied using "pseudo-real" data obtained by numerical simulation: applying white noise to the on-line measured variables, modifying the set-point value, changing the glucose concentration of the feed, and varying the nominal model parameter values. The study allows us to conclude that the controller response is generally satisfactory, being able to keep the specific growth rate close to the desired set-point and below the value that leads to acetate formation, which is of major importance in real settings, especially in fermentations whose objective is production, namely of recombinant proteins. Different tuning schemes for the controller parameters were analysed, the best overall performance being achieved by the automatic tuning method with an adaptation rule given as a function of the controller's relative error. This automatic tuning mechanism was able to improve the controller performance by adjusting its parameters continuously.
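The Fisher-information indexes used to rank experiments in this style of optimal experiment design are typically the classical alphabetic criteria (a general summary, not necessarily the thesis's exact choices):

```latex
D = \det \mathcal{I}, \qquad
A = \operatorname{tr}\, \mathcal{I}^{-1}, \qquad
E = \lambda_{\min}(\mathcal{I}), \qquad
\text{mod-}E = \lambda_{\max}(\mathcal{I}) / \lambda_{\min}(\mathcal{I})
```

One maximizes D or E and minimizes A or the modified-E condition number, thereby shrinking the confidence ellipsoids of the yield-coefficient estimates.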
APA, Harvard, Vancouver, ISO, and other styles
16

Figueiredo, Cléber da Costa. "Calibração linear assimétrica." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-08032013-141153/.

Full text
Abstract:
This thesis focuses on theoretical and applied aspects of parameter estimation in the linear calibration model with skew-normal (Azzalini, 1985) and skew-t-normal (Gómez, Venegas and Bolfarine, 2007) error distributions. Applying an asymmetrically distributed error model, it is not necessary to transform the variables in order to obtain symmetric errors. Both the frequentist and the Bayesian solutions are presented. In each approach, the estimation of the parameters and of the estimators' variances was studied, using an EM-type algorithm and a Gibbs sampler, respectively. The main point, in the frequentist approach, is the presentation of a new parameterization to avoid the singularity of the Fisher information matrix under the skew-normal calibration model in a neighborhood of lambda = 0. Another interesting aspect is that the reparameterization developed to make the information matrix nonsingular, when the skewness parameter is near zero, leaves the parameter of interest unchanged. The main point, in the Bayesian framework, is the development of two measures of goodness of fit that take the asymmetry of the data set into account: the ADIC (Asymmetric Deviance Information Criterion) and the EDIC (Evident Deviance Information Criterion). They are natural extensions of the ordinary DIC proposed by Spiegelhalter et al. (2002), which should only be used with symmetric models.
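For context, the skew-normal error model in question and the degeneracy motivating the reparameterization are (standard facts about the Azzalini family):

```latex
f(z;\lambda) = 2\, \phi(z)\, \Phi(\lambda z)
```

Its Fisher information matrix is singular at λ = 0: there the score for λ is proportional to the score for the location parameter, and centred-type parameterizations remove the singularity without changing the parameter of interest.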
APA, Harvard, Vancouver, ISO, and other styles
17

Veloso, Ana C. A. "Optimização de estratégias de alimentação para identificação de parâmetros de um modelo de E. coli. utilização do modelo em monitorização e controlo." Doctoral thesis, Universidade do Minho, 2007. http://hdl.handle.net/1822/6289.

Full text
Abstract:
Doctorate in Chemical and Biological Engineering
The main objectives of this thesis are: the optimal experiment design for yield coefficient estimation in an unstructured growth model for Escherichia coli fed-batch fermentation; the experimental validation of the simulated feed trajectories; the development of advanced monitoring strategies for the on-line estimation of state variables and kinetic parameters; and, finally, the development of an adaptive control law, based on optimal substrate feed strategies, in order to increase growth and/or production. Methodologies for optimal experiment design are presented, in order to optimise the richness of the data coming out of experiments, quantified by indexes based on the Fisher Information Matrix. Although the model used to describe the E. coli fed-batch fermentation is not yet optimised from the kinetic point of view, and some difficulties were encountered in the practical implementation of the simulated results obtained with the optimal experiment design, the quality of the parameter estimates, especially for the oxidative regimen, is promising. The estimation uncertainty was evaluated by means of indexes related to the multiple linear regression model, indexes related to the Fisher matrix, and the construction of the corresponding deviation ellipses. The deviations associated with each coefficient show that the best values have not yet been found. The role of the general dynamical model was also investigated in what concerns the design of state observers, also called software sensors. The performance of three observer classes was compared: the extended Kalman observer, the asymptotic observer and the interval observer. The studied observers showed good performance and robustness, being complementary to each other. Asymptotic observers showed, in general, a better performance than the extended Kalman observer. Interval observers presented advantages concerning practical implementation, showing promising behaviour, although experimental validation is needed. A model reference adaptive control law is presented, which can be interpreted as a PI-like feedforward/feedback controller for specific growth rate control. The robustness of the algorithm was studied using "pseudo-real" data obtained by numerical simulation: applying white noise to the on-line measured variables, modifying the set-point value, changing the glucose concentration of the feed, and varying the nominal model parameter values. The study allows us to conclude that the controller response is generally satisfactory, being able to keep the specific growth rate close to the desired set-point and below the value that leads to acetate formation, which is of major importance in real settings, especially in fermentations whose objective is production, namely of recombinant proteins. Different tuning schemes for the controller parameters were analysed, the best overall performance being achieved by the automatic tuning method with an adaptation rule given as a function of the controller's relative error. This automatic tuning mechanism was able to improve the controller performance by adjusting its parameters continuously.
APA, Harvard, Vancouver, ISO, and other styles
18

Purutcuoglu, Vilda. "Unit Root Problems In Time Series Analysis." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12604701/index.pdf.

Full text
Abstract:
In time series models, autoregressive processes are one of the most popular stochastic processes, which are stationary under certain conditions. In this study we consider nonstationary autoregressive models of order one, which have iid random errors. One of the important nonstationary time series models is the unit root process in AR(1), which simply implies that a shock to the system has a permanent effect through time. Therefore, testing for a unit root is a very important problem. However, under nonstationarity, no estimator of the autoregressive coefficient has a known exact distribution, and the usual t-statistic is not accurate even if the sample size is very large. Hence, the Wiener process is invoked to obtain the asymptotic distribution of the LSE under normality. The first four moments of the LSE under normality have been worked out for large n. In 1998, Tiku and Wong proposed new test statistics whose type I error and power values are calculated by using three-moment chi-square or four-moment F approximations. The test statistics are based on the modified maximum likelihood estimators and the least squares estimators, respectively. They evaluated the type I errors and the power of these tests for a family of symmetric distributions (scaled Student's t). In this thesis, we have extended this work to skewed distributions, namely, gamma and generalized logistic.
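The model and testing problem are the standard ones (general background, notation assumed):

```latex
y_t = \rho\, y_{t-1} + \epsilon_t, \qquad H_0 : \rho = 1 \ \text{(unit root)},
\qquad n(\hat{\rho} - 1) \;\Rightarrow\; \frac{\tfrac{1}{2}\!\left( W(1)^2 - 1 \right)}{\int_0^1 W(t)^2\, dt}
```

Here W is a standard Wiener process; this nonstandard limit for the least squares estimator under the null is why the Wiener process enters the abstract.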
APA, Harvard, Vancouver, ISO, and other styles
19

Ley, Christophe. "Univariate and multivariate symmetry: statistical inference and distributional aspects." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210029.

Full text
Abstract:
This thesis deals with several statistical and probabilistic aspects of symmetry and asymmetry, both in a univariate and multivariate context, and is divided into three distinct parts.

The first part, composed of Chapters 1, 2 and 3 of the thesis, solves two conjectures associated with multivariate skew-symmetric distributions. Since the introduction in 1985 by Adelchi Azzalini of the most famous representative of that class of distributions, namely the skew-normal distribution, it has been well known that, in the vicinity of symmetry, the Fisher information matrix is singular and the profile log-likelihood function for skewness admits a stationary point whatever the sample under consideration. Since then, researchers have tried to determine the subclasses of skew-symmetric distributions that suffer from each of these problems, which has led to the two aforementioned conjectures. This thesis completely solves both problems.
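The class at issue is usually written in the following general form (standard definition: f a symmetric density, G a symmetric cdf):

```latex
f_{\lambda}(z) = 2\, f(z)\, G(\lambda z)
```

The skew-normal is obtained for f = φ, G = Φ; the singular-FIM and stationary-profile-likelihood phenomena at λ = 0 are the two degeneracies whose subclasses the conjectures describe.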

The second part of the thesis, namely Chapters 4 and 5, aims at applying and constructing extremely general skewing mechanisms. As such, in Chapter 4, we make use of the univariate mechanism of Ferreira and Steel (2006) to build optimal (in the Le Cam sense) tests for univariate symmetry which are very flexible. Indeed, since their mechanism allows one to turn a given symmetric distribution into any asymmetric distribution, the alternatives to the null hypothesis of symmetry can take any possible shape. These univariate mechanisms, besides that surjectivity property, enjoy numerous good properties, but cannot be extended to higher dimensions in a satisfactory way. For this reason, we propose in Chapter 5 different general mechanisms, sharing all the nice properties of their competitors in Ferreira and Steel (2006), but which moreover can be extended to any dimension. We formally prove that the surjectivity property holds in dimensions k>1 and we study the principal characteristics of these new multivariate mechanisms.

Finally, the third part of this thesis, composed of Chapter 6, proposes a test for multivariate central symmetry by having recourse to the concepts of statistical depth and runs. This test extends the celebrated univariate runs test of McWilliams (1990) to higher dimensions. We analyze its asymptotic behavior (especially in dimension k=2) under the null hypothesis and its invariance and robustness properties. We conclude with an overview of possible modifications of these new tests.

Doctorate in Sciences

APA, Harvard, Vancouver, ISO, and other styles
20

Wen-Shan Huang and 黃文姍. "Batch Mode Active Learning for Ising Models Using Fisher Information Matrix." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/d2zu7c.

Full text
APA, Harvard, Vancouver, ISO, and other styles