
Dissertations / Theses on the topic 'Quadrature de Gauss'


Consult the top 44 dissertations / theses for your research on the topic 'Quadrature de Gauss.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Tang, Tunan. "Extensions of Gauss, block Gauss, and Szego quadrature rules, with applications." Kent State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=kent1460403903.

2

Alqahtani, Hessah Faihan. "GAUSS-TYPE QUADRATURE RULES, WITH APPLICATIONS IN LINEAR ALGEBRA." Kent State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=kent1521760018029109.

3

Assoufi, Abdelaziz. "Les formules de quadrature numérique dans un domaine de IR2 ou IR3." Pau, 1985. http://www.theses.fr/1985PAUUA001.

4

Barajas, Freddy Hernandez. "Modelos multiníveis Weibull com efeitos aleatórios." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-07052013-203915/.

Abstract:
Multilevel models are a class of models useful for analysing data sets with a hierarchical structure. In this work we propose multilevel Weibull models in which random intercepts are used to model both parameters of the response distribution. The proposed models are flexible because the distribution of the random intercepts can be chosen from one of four distributions: normal, log-gamma, logistic and Cauchy. An extension of the models is presented in which random intercepts and slopes with a bivariate normal distribution can be included in the systematic part of both parameters of the response distribution. Parameter estimation is performed by maximum likelihood, using Gauss-Hermite quadrature to approximate the likelihood function. An R package was developed specifically for parameter estimation, prediction of the random effects and computation of residuals in the proposed models. Additionally, a simulation study evaluated the impact on the parameter estimates of incorrectly specifying the distribution of the random intercepts.
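The estimation step described above reduces, cluster by cluster, to integrating the random intercept out of the likelihood. A minimal Python sketch of that Gauss-Hermite step for a Weibull response with a normal random intercept on the log scale (the log-linear link and all parameter values are illustrative assumptions, not taken from the thesis or its R package):

```python
import numpy as np
from scipy.special import roots_hermite

def cluster_loglik(t, x, beta, shape, sigma_b, n_nodes=20):
    """Log marginal likelihood of one cluster: the normal random intercept b
    is integrated out by Gauss-Hermite quadrature. Since
    sum_i w_i f(u_i) ~ int f(u) exp(-u^2) du, the substitution
    b = sqrt(2)*sigma_b*u turns the rule into an expectation over
    b ~ N(0, sigma_b^2)."""
    nodes, weights = roots_hermite(n_nodes)
    logc = np.empty(n_nodes)
    for i, b in enumerate(np.sqrt(2.0) * sigma_b * nodes):
        scale = np.exp(x @ beta + b)            # assumed log-linear Weibull scale
        logf = (np.log(shape / scale)
                + (shape - 1.0) * np.log(t / scale)
                - (t / scale) ** shape)         # Weibull log-density
        logc[i] = logf.sum()                    # conditional log-likelihood
    m = logc.max()                              # log-sum-exp for stability
    return m + np.log(np.sum(weights * np.exp(logc - m)) / np.sqrt(np.pi))

rng = np.random.default_rng(0)
t = rng.weibull(1.5, size=5)                    # toy cluster of 5 subjects
x = rng.normal(size=(5, 1))
print(cluster_loglik(t, x, beta=np.array([0.3]), shape=1.5, sigma_b=0.5))
```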
5

Pujet, Alphonse Christophe. "Des quadratures Suivi de Sur les mouvements simultanés d'un système de points matériels assujettis à rester constamment dans un plan passant par l'origine des coordonnées /." Paris : Bibliothèque universitaire Pierre et Marie Curie (BUPMC), 2009. http://jubil.upmc.fr/sdx/pl/toc.xsp?id=TH_000277_001&fmt=upmc&idtoc=TH_000277_001-pleadetoc&base=fa.

6

Ezzirani, Abdelkrim. "Construction de formules de quadrature pour des systèmes de Tchebyshev avec applications aux méthodes spectrales." Pau, 1996. http://www.theses.fr/1996PAUU3035.

Abstract:
The main objective of this thesis is the construction of spectral elements that achieve lumping (condensation) of the mass matrix, i.e., an operation that replaces this matrix by a diagonal one without harming the accuracy of the computations. This property is very important when the approximation is one step in the numerical solution of a non-stationary problem: after time discretisation, genuinely explicit schemes are obtained without inverting the mass matrix. The key idea rests on new quadrature formulas well suited to this type of problem. An algorithm for the construction of Gauss-type quadrature formulas for spline functions is also presented, along with a comparative study against existing methods.
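Mass lumping can be made concrete with the classical Gauss-Lobatto-Legendre (GLL) example: when the element's Lagrange basis is built on the GLL points and the mass integrals are evaluated with the GLL rule itself, the mass matrix becomes diagonal. A minimal Python sketch using the standard GLL formulas (not the thesis' construction for Tchebyshev systems):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gll(N):
    """Gauss-Lobatto-Legendre points (endpoints + roots of P_N') and weights
    w_i = 2 / (N (N+1) P_N(x_i)^2)."""
    cN = np.zeros(N + 1); cN[-1] = 1.0                 # coefficients of P_N
    x = np.concatenate(([-1.0], leg.legroots(leg.legder(cN)), [1.0]))
    return x, 2.0 / (N * (N + 1) * leg.legval(x, cN) ** 2)

N = 4
x, w = gll(N)
# Lagrange basis phi_i on the GLL points satisfies phi_i(x_k) = delta_ik, so
# the GLL-quadrature mass matrix sum_k w_k phi_i(x_k) phi_j(x_k) is diag(w).
M_lumped = np.diag(w)
# Exact mass matrix for comparison: the monomial coefficients of each phi_i
# are the columns of the inverse Vandermonde matrix; monomials integrate exactly.
C = np.linalg.inv(np.vander(x, N + 1, increasing=True))
p = np.arange(N + 1)
H = (1.0 + (-1.0) ** (p[:, None] + p[None, :])) / (p[:, None] + p[None, :] + 1.0)
M_exact = C.T @ H @ C
print(np.allclose(M_exact.sum(axis=1), w))   # row sums match the lumped masses
print(np.round(M_exact - M_lumped, 3))       # off-diagonal mass dropped by lumping
```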
7

Guessab, Allal. "Sur les formules de quadrature numérique à nombre minimal de noeuds dans un domaine de IR(n)." Pau, 1987. http://www.theses.fr/1988PAUU3011.

Abstract:
Starting from the theory of orthogonal polynomials, this work seeks conditions for obtaining numerical quadrature formulas over domains of R^n, with a weight function, that use a minimal number of nodes and are exact on the spaces R_{k_1,k_2,...,k_n}(d) of polynomials of degree at most k_i with respect to each variable x_i.
8

Kzaz, Mustapha. "Accélération de la convergence de la formule de quadrature de Gauss-Jacobi dans le cas des fonctions analytiques." Lille 1, 1992. http://www.theses.fr/1992LIL10174.

Abstract:
Gauss quadrature is a very powerful numerical integration method. Nevertheless, it has some drawbacks: the sequence it generates satisfies no recurrence relation, each term of the sequence is fairly expensive to compute, and the sequence converges very slowly when the integrand has singularities close to the interval of integration. To remedy this, we propose acceleration algorithms that yield better approximations of the exact value of the integral. As in all extrapolation methods, the first term(s) of the asymptotic expansion of the error must be determined. The first chapter presents results on the expression of the error, on the orthonormal Jacobi polynomials, and some results from analysis. The second and third chapters study the cases where the integrand belongs to certain classes of functions. The last chapter treats Cauchy principal-value integrals for the classes of functions studied in the two preceding chapters.
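The slow convergence near a singularity, and the benefit of extrapolation, are easy to reproduce. In the Python sketch below, Gauss-Legendre (the Jacobi weight with alpha = beta = 0) integrates a function with a simple pole at x = c, and Aitken's Delta-squared process stands in for the thesis' tailored acceleration algorithms:

```python
import numpy as np
from scipy.special import roots_jacobi

def gauss_jacobi(f, n, a=0.0, b=0.0):
    """n-point Gauss-Jacobi rule for int_{-1}^{1} (1-x)^a (1+x)^b f(x) dx."""
    x, w = roots_jacobi(n, a, b)
    return np.sum(w * f(x))

def aitken(s):
    """One sweep of Aitken's Delta^2 extrapolation of a sequence."""
    s = np.asarray(s, dtype=float)
    return s[:-2] - (s[1:-1] - s[:-2]) ** 2 / (s[2:] - 2*s[1:-1] + s[:-2])

for c in (2.0, 1.1, 1.01):                     # pole at x = c, nearing the interval
    exact = np.log((c + 1.0) / (c - 1.0))      # int_{-1}^{1} dx / (c - x)
    seq = [gauss_jacobi(lambda x: 1.0 / (c - x), n) for n in range(2, 12)]
    print(f"c={c}: last error {abs(seq[-1] - exact):.2e}, "
          f"after Aitken {abs(aitken(seq)[-1] - exact):.2e}")
```

Because the Gauss error decays geometrically here, the extrapolated sequence improves markedly as the pole approaches the interval.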
9

MOURA, Márcio José das Chagas. "Novel and faster ways for solving semi-markov processes: mathematical and numerical issues." Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/4939.

Abstract:
Funding: Petróleo Brasileiro S/A.
Continuous-time semi-Markov processes (SMP) are important stochastic tools for modelling reliability metrics over time for systems whose future behaviour depends on the present and next states as well as on the residence time. The classical method for solving the interval transition probabilities of SMP consists of directly applying a general quadrature method to the integral equations. However, this technique entails considerable computational effort: N^2 coupled integral equations must be solved, where N is the number of states. This thesis therefore proposes more efficient mathematical and numerical treatments for SMP. The first method, called 2N-, is based on transition frequency densities and general quadrature methods; essentially, it consists of solving N coupled integral equations and N direct integrals. The other proposed method, called Lap-, applies Laplace transforms, which are inverted by a Gauss-Legendre quadrature scheme to obtain the state probabilities in the time domain. The mathematical formulation of these methods and the description of their numerical treatment, including accuracy and time-to-convergence issues, are developed and given in detail. The effectiveness of the new 2N- and Lap- developments is compared against the results provided by the classical method through examples in the context of reliability engineering. These examples show that the 2N- and Lap- methods are significantly less costly and of comparable accuracy to the classical method.
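For reference, the classical method mentioned above applies a general quadrature rule directly to the Markov renewal equations P_ij(t) = delta_ij (1 - F_i(t)) + sum_k int_0^t q_ik(u) P_kj(t - u) du. A minimal Python sketch for a hypothetical two-state process with exponential sojourn kernels, using the trapezoidal rule (illustration only; not the thesis' case studies or its 2N-/Lap- methods):

```python
import numpy as np

h, M = 0.05, 200                       # time step and number of steps
t = h * np.arange(M + 1)
lam = np.array([1.0, 0.5])             # assumed rates of leaving states 0 and 1
q = np.zeros((2, 2, M + 1))            # kernel densities q_ik(u)
q[0, 1] = lam[0] * np.exp(-lam[0] * t)
q[1, 0] = lam[1] * np.exp(-lam[1] * t)
surv = np.exp(-lam[:, None] * t)       # 1 - F_i(t): still in state i at time t

P = np.zeros((2, 2, M + 1))
P[:, :, 0] = np.eye(2)
# the trapezoidal weight at u = 0 makes P(t_m) implicit; solve it out once
Ainv = np.linalg.inv(np.eye(2) - 0.5 * h * q[:, :, 0])
for m in range(1, M + 1):
    rhs = np.diag(surv[:, m]) + 0.5 * h * q[:, :, m] @ P[:, :, 0]
    for n in range(1, m):
        rhs = rhs + h * q[:, :, n] @ P[:, :, m - n]
    P[:, :, m] = Ainv @ rhs
print(P[:, :, M])                      # rows tend to the split (1/3, 2/3)
```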
10

Roman, Jean. "Complexité d'algorithmes de séparation de graphes pour des implémentations séquentielles et réparties de l'élimination de Gauss." Bordeaux 1, 1987. http://www.theses.fr/1987BOR10582.

Abstract:
Applying the study of Gaussian elimination to the solution of large sparse linear systems, we introduce a block solver for which time and space complexity results are proved. Since the method is of divide-and-conquer type, it induces a natural parallelism, studied in a second part. A loosely coupled distributed implementation of the block solver for a message-passing computer is proposed.
11

Manco, Olga Cecilia Usuga. "Modelos de regressão beta com efeitos aleatórios normais e não normais para dados longitudinais." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-10072013-234405/.

Abstract:
The class of beta regression models has been studied extensively. However, there are few works on the inclusion of random effects and on flexible random-effects distributions, or on prediction and diagnostic methods from the random-effects point of view. In this work we propose beta regression models with normal and non-normal random effects for longitudinal data. Maximum likelihood and the empirical best Bayes predictor are used for parameter estimation and prediction of the random effects, with Gauss-Hermite quadrature used to approximate the likelihood function. Model selection methods and residual analysis are also proposed. We implemented the BLMM package in R to perform all procedures. The estimation procedure and the empirical distribution of the proposed residuals were analysed through simulation studies considering several random-effects distributions, numbers of individuals, numbers of observations per individual, and variance-covariance structures for the random effects. The simulation studies showed that the estimation procedure performs better as the number of individuals and the number of observations per individual increase, and that the randomized quantile residual follows an approximately normal distribution. The methodology presented is a complete tool for analysing continuous longitudinal data restricted to the bounded interval (0, 1).
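The residual highlighted by the simulation study, the randomized quantile residual of Dunn and Smyth (1996), can be sketched for a beta response in a few lines: apply the fitted beta CDF to each observation and push the result through the standard normal quantile function (no extra randomization is needed for a continuous response). The "fitted" values below are placeholders, not output of the BLMM package:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, phi = 0.6, 10.0                                 # assumed fitted mean and precision
y = rng.beta(mu * phi, (1.0 - mu) * phi, size=500)  # toy responses in (0, 1)

u = stats.beta.cdf(y, mu * phi, (1.0 - mu) * phi)   # probability integral transform
r = stats.norm.ppf(u)                               # quantile residuals
# under a correctly specified model, r is approximately N(0, 1)
print(r.mean(), r.std(), stats.shapiro(r).pvalue)
```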
12

Nicu, Ana-Maria. "Approximation et représentation des fonctions sur la sphère. Applications à la géodésie et à l'imagerie médicale." Phd thesis, Université de Nice Sophia-Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00671453.

Abstract:
This thesis is built around the approximation and representation of functions on the sphere, with applications to inverse problems arising in geodesy and medical imaging. It is organised as follows. The first chapter gives the general framework of an inverse problem together with a description of the geophysics and M/EEG problems. The idea of the inverse problem is to recover a density inside a domain (the unit ball, modelling the Earth or the human brain) from measurements of some potential on the surface of the domain. We then give the main definitions and theorems used throughout the thesis. Solving the inverse problem consists of solving two subproblems: data transmission and source localisation inside the ball. In practice, the measured data are available only on parts of the sphere: spherical caps, the northern hemisphere of the head (M/EEG), continents (geodesy). To represent this type of data, we build the Slepian basis, which has good properties on the regions under study. Chapter 4 addresses the estimation of data on the whole sphere (their expansion in the spherical harmonics basis) from noisy partial measurements. Once this expansion is known, we apply the best rational approximation method on planar sections of the sphere (Chapter 5). That chapter treats three types of density (monopolar, dipolar, and inclusions) for modelling the problems, as well as properties of the density and of the associated potential, quantities related by a certain operator. In Chapter 6 we revisit Chapters 3, 4 and 5 from the numerical point of view, presenting numerical tests for source localisation in geodesy and M/EEG when only partial data on the sphere are available.
13

Joulak, Hédi. "Quasi-orthogonalité : avancées et applications." Lille 1, 2007. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2007/50376-2007-Joulak.pdf.

Abstract:
I first present new results on quasi-orthogonality, treating new ways of writing these polynomials (such as a determinantal form and new recurrence relations). I also show how to generate quasi-orthogonal polynomials of order r from a set of r+1 particular polynomials (via a characterisation by their zeros). In the next chapter I generalise results previously known for orders 1 and 2, and state interlacing properties for the zeros of quasi-orthogonal polynomials of order 3. A notable advance in this chapter concerns the study of the zeros for arbitrary order: there always exists a quasi-orthogonal polynomial with all its roots real and distinct and, in this setting, necessary and sufficient conditions can be given for the locations of the first and last zeros. The results of this part are applied to the Jacobi and Laguerre-Sonin polynomials in Chapter 4 to obtain new recurrence relations and new interlacing properties (varying the degrees and the defining parameters). I finish with two parts on quadrature methods, in particular the generalised Gauss-Radau and Gauss-Lobatto methods. These parts show the link between the zeros of a quasi-orthogonal polynomial and the nodes of these quadratures. They also answer an open question of W. Gautschi, showing that all the weights of these generalised Gauss-Radau and Gauss-Lobatto methods are strictly positive.
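The link between a prescribed node and the quadrature weights can be illustrated with Golub's classic construction, in which fixing a node at an endpoint amounts to modifying the last diagonal entry of the Jacobi matrix. A minimal Python version for the Legendre weight (the generalised Gauss-Radau rules of the thesis, with higher-order end conditions, go beyond this basic case):

```python
import numpy as np

def gauss_radau_legendre(n, a=-1.0):
    """(n+1)-point Gauss-Radau rule for weight 1 on [-1, 1] with a node fixed
    at a, via Golub's (1973) modification of the Jacobi matrix."""
    k = np.arange(1, n + 1)
    beta = k / np.sqrt(4.0 * k * k - 1.0)        # Legendre recurrence coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)     # (n+1) x (n+1), zero diagonal
    # choose the last diagonal entry so that a becomes an eigenvalue:
    # solve (J_n - a I) d = beta_n^2 e_n and set J[n, n] = a + d[n-1]
    en = np.zeros(n); en[-1] = beta[-1] ** 2
    d = np.linalg.solve(J[:n, :n] - a * np.eye(n), en)
    J[n, n] = a + d[-1]
    nodes, V = np.linalg.eigh(J)
    weights = 2.0 * V[0, :] ** 2                 # mu_0 = int_{-1}^{1} 1 dx = 2
    return nodes, weights

x, w = gauss_radau_legendre(5)
print(np.isclose(x[0], -1.0), np.all(w > 0))     # fixed node, positive weights
print(np.sum(w * x**10) - 2.0 / 11.0)            # exact through degree 2n = 10
```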
14

Lobos, Cristian Marcelo Villegas. "Modelos log-Birnbaum-Saunders mistos." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-05112010-114755/.

Abstract:
The aim of this work is to introduce the log-Birnbaum-Saunders mixed models (log-BS mixed models) and to extend the results to log-Birnbaum-Saunders Student-t mixed models (log-BS-t mixed models). The log-BS models have been well known since the work of Rieck and Nedelman (1991) and have received great attention in the last 10 years, with various papers published in international journals. However, the emphasis of such works has been on fixed-effects models, with little attention given to random-effects models. Firstly, we present a review of the Birnbaum-Saunders and generalized Birnbaum-Saunders distributions, and then discuss log-BS and log-BS-t fixed-effects models, for which some results on estimation and diagnostics are reviewed. We then introduce the log-BS mixed models, preceded by a review of Gauss-Hermite quadrature methods. Although the parameter estimation of the marginal log-BS and log-BS-t mixed models is performed with the procedure NLMIXED of SAS (Littell et al., 1996), we apply non-adaptive quadrature to obtain approximations for the log-likelihood function of the log-BS and log-BS-t random intercept models. These approximations are used to derive the respective score functions and observed information matrices, as well as the normal curvatures of local influence (Cook, 1986) under some usual perturbation schemes. Discussions on the prediction of the random effects, a test for the variance component of the random intercept models, and residual analysis are also given. Finally, we compare the fits of log-BS and log-BS-t mixed models to a real data set, using diagnostic methods in the comparison of the fitted models.
15

Belantari, Abdelhak. "Procédures d'estimation de l'erreur dans l'approximation de type Padé." Lille 1, 1989. http://www.theses.fr/1989LIL10111.

Abstract:
In 1964, A. S. Kronrod proposed a heuristic method for estimating the error in Gauss quadrature formulas. Recently an extension of this method to Padé approximation was obtained by Brezinski. Since most rational approximants of formal series f(t) = sum_{i>=0} c_i t^i, in the scalar as well as the vector case, can be viewed as formal quadrature formulas, the main goal of this work is to extend Kronrod's idea to rational approximation, notably Padé-type approximation, Padé approximants in a non-commutative algebra, Padé approximants of a Stieltjes series, and vector Padé approximants. We also show the equivalence between interpolation of the generating function (e - tx)^{-1} and the extension of Kronrod's procedure to all these rational approximants. From this new interpretation and from the flexibility of the error expression of the preceding quadrature formulas, further extensions and improvements of the initial method follow. Most of the techniques used fall roughly into two categories: 1) polynomial interpolation (most often in the Hermite sense), which provides a powerful tool for obtaining a wide class of error estimates; 2) a new approach consisting of applying certain convergence acceleration algorithms, which extends the notion of perfect error estimation to Padé and Padé-type approximation.
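Kronrod's idea survives in everyday numerical practice: QUADPACK, which scipy.integrate.quad wraps, evaluates each subinterval with a Gauss rule and its Kronrod extension and reports their difference as a computable error estimate. A quick illustration (integrand and interval are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x) * np.cos(10.0 * x)
val, err_est = quad(f, 0.0, np.pi)          # err_est comes from Gauss-Kronrod pairs
exact = (1.0 - np.exp(-np.pi)) / 101.0      # int_0^pi e^(-x) cos(10x) dx
print(val, err_est, abs(val - exact))       # the true error sits below the estimate
```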
16

Cenzato, Rebecca. "Analisi e soluzione numerica dell'equazione di Lippmann-Schwinger." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16366/.

Abstract:
This thesis illustrates a numerical procedure for solving the Lippmann-Schwinger equation. The procedure is applied in a C++ code that solves the equation given the nucleon-nucleon interaction potential; the potential used is the Entem-Machleidt-Nosyk potential (ENM-N4LO). From the solution of the Lippmann-Schwinger equation, phase shifts are computed for several values of the total angular momentum (J = 1,...,8) and for different types of nucleon-nucleon collision (neutron-proton and proton-proton), for single and coupled channels. The values obtained are finally compared with experimental data, and a brief numerical analysis is carried out to verify the convergence of the result.
17

Farias, Thais Machado. "Determinação de espectros de relaxação e distribuição de massa molar de polímeros lineares por reometria." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/19005.

Abstract:
The molecular weight distribution (MWD) and its parameters are of fundamental importance in the characterization of polymers. Therefore, the development of techniques for faster and cheaper determination of the MWD is of great practical relevance. The goals of this work were the implementation of some of the relaxation models from double reptation theory proposed in the literature, the evaluation of these implementations, and the analysis of two key points in the recovery of the MWD from rheological data: the methodology for calculating the relaxation spectrum based on the Maxwell model, and the numerical strategy for evaluating the integrals appearing in the relaxation models. The inverse problem, i.e., the determination of the MWD from rheological data using a specified relaxation model and an imposed distribution function, was solved. The Generalized Exponential (GEX) function was used as the distribution function, and two approaches were considered: i) explicit calculation of the relaxation spectrum, and ii) use of the parametric approximations of Schwarzl, which avoid the explicit calculation of the relaxation spectrum. For commercial polyethylene samples with polydispersity below 10, the methodology produced MWD curves that fit the experimental GPC data well. Regarding the calculation of the relaxation spectrum, discrete and continuous relaxation spectra were compared in order to establish criteria for specifying the optimal number of Maxwell modes; the discrete spectrum leads to better-conditioned systems and, consequently, greater reliability of the estimated parameters. Finally, a modification of the MWD-determination methodology is proposed in which Gauss-Hermite quadrature, with a new change of variables, is applied to the numerical evaluation of the integral in the relaxation models.
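The final step, evaluating the relaxation-model integral by Gauss-Hermite quadrature after a change of variables, can be sketched as follows. The snippet assumes a log-normal MWD and the simplest single-exponential double-reptation kernel with reptation time lambda(M) = K * M**alpha; the constants, and the GEX distribution used in the thesis, are replaced by illustrative choices:

```python
import numpy as np
from scipy.special import roots_hermite

mu, s = np.log(1e5), 0.8            # log-normal MWD in ln M (illustrative)
K, alpha = 1e-19, 3.4               # reptation time lambda(M) = K * M**alpha
u, w = roots_hermite(40)
M = np.exp(mu + np.sqrt(2.0) * s * u)    # nodes mapped to molecular weights

def G_rel(t):
    """Double reptation: G(t)/G_N0 = (int w(M) F(t,M)^(1/2) dM)^2, with the
    integral over ln M reduced to Gauss-Hermite form through the substitution
    u = (ln M - mu) / (sqrt(2) s)."""
    half_kernel = np.exp(-t / (2.0 * K * M ** alpha))   # F^(1/2) for F = exp(-t/lambda)
    return (np.sum(w * half_kernel) / np.sqrt(np.pi)) ** 2

for t in (1e-4, 1e-2, 1.0):
    print(t, G_rel(t))                # relaxation from ~1 towards 0 as t grows
```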
18

Falon, Roger Jesus Tovar. "Modelos de regressão lineares mistos sob a classe de distribuições normal-potência." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-15032018-132547/.

Abstract:
In this work some extensions of the alpha-power models are presented for contexts in which the observations are censored or limited. We first propose a new asymmetric model that extends the skew-t (Azzalini and Capitanio, 2003) and power-t (Zhao and Kim, 2016) models and includes the Student's t distribution as a particular case. This new model can fit data with a degree of asymmetry and kurtosis even higher than the skew-t and power-t models allow. We then extend the power-t model to situations in which the data are censored, with a high degree of asymmetry and heavy tails; this generalizes the Student's t linear regression model for censored data of Arellano-Valle et al. (2012). The work also introduces the power-normal linear mixed model for asymmetric data. Here statistical inference is performed from a classical perspective using the maximum likelihood method, together with Gauss-Hermite numerical integration to approximate the integrals involved in the likelihood function. Finally, a linear model with random intercepts for doubly censored data is studied, developed under the assumption that the errors and the random effects follow power-normal and skew-normal distributions. For all the models studied, simulation studies were carried out to assess their goodness of fit and limitations. Finally, all proposed methods are illustrated with real data.
19

Tang, Xiongwen. "Two-level lognormal frailty model and competing risks model with missing cause of failure." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/2997.

Abstract:
In clustered survival data, unobservable cluster effects may exert powerful influences on the outcomes and thus induce correlation among subjects within the same cluster. The ordinary partial likelihood approach does not account for this dependence. Frailty models, as an extension of Cox regression, incorporate multiplicative random effects, called frailties, into the hazard model and have become a very popular way to account for the dependence within clusters. We study in particular the two-level nested lognormal frailty model and propose an estimation approach based on the complete-data likelihood with the frailty terms integrated out. We adopt B-splines to model the baseline hazards and adaptive Gauss-Hermite quadrature to approximate the integrals efficiently. Furthermore, in finding the maximum likelihood estimators, Gauss-Seidel and BFGS methods are used instead of the Newton-Raphson iterative algorithm to improve the stability and efficiency of the estimation procedure. We also study competing risks models with missing cause of failure in the context of Cox proportional hazards models. For competing risks data there exists more than one cause of failure, and each observed failure is exclusively linked to one cause; conceptually, the causes are interpreted as competing risks before the failure is observed. Competing risks models are constructed from the proportional hazards model specified for each cause of failure respectively, which can be estimated using the partial likelihood approach. However, the ordinary partial likelihood is not applicable when the cause of failure may be missing. We propose a weighted partial likelihood approach based on complete-case data, where the weights are the inverse of the selection probability, and the selection probability is estimated by a logistic regression model. The asymptotic properties of the regression coefficient estimators are investigated by applying counting process and martingale theory. We further develop a doubly robust approach based on the full data to improve efficiency as well as robustness.
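The "adaptive" in adaptive Gauss-Hermite quadrature means that the nodes are recentred at the mode of each cluster's integrand and rescaled by its curvature, so a handful of nodes suffices. A one-dimensional Python sketch with a made-up integrand standing in for "cluster likelihood times frailty density":

```python
import numpy as np
from scipy.special import roots_hermite
from scipy.optimize import minimize_scalar

def log_g(b):
    """Assumed log-integrand: a peaked cluster log-likelihood plus the
    log-density of a normal frailty term (toy numbers, concave in b)."""
    return -2.0 * np.exp(b) + 3.0 * b - 0.5 * b ** 2

b_hat = minimize_scalar(lambda b: -log_g(b)).x      # mode of the integrand
eps = 1e-5                                          # curvature by central difference
curv = -(log_g(b_hat + eps) - 2.0 * log_g(b_hat) + log_g(b_hat - eps)) / eps ** 2
sd = 1.0 / np.sqrt(curv)

for n in (3, 5, 10):
    u, w = roots_hermite(n)
    b = b_hat + np.sqrt(2.0) * sd * u               # recentred, rescaled nodes
    # int g(b) db = sqrt(2) sd * sum_i w_i g(b_i) exp(u_i^2)
    val = np.sqrt(2.0) * sd * np.sum(w * np.exp(log_g(b) + u ** 2))
    print(n, val)                                   # stabilises with very few nodes
```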
20

Goulard, Michel. "Champs spatiaux et statistique multidimensionnelle." Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb376138909.

21

Moraru, Laurentiu Eugen. "Numerical Predictions and Measurements in the Lubrication of Aeronautical Engine and Transmission Components." University of Toledo / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1125769629.

22

Rehnby, Nicklas. "Performance of alternative option pricing models during spikes in the FTSE 100 volatility index : Empirical evidence from FTSE100 index options." Thesis, Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139718.

Abstract:
Derivatives have a large and significant role on today's financial markets, and the popularity of options has increased. This has also increased the demand for a suitable option pricing model, since the ground-breaking model developed by Black & Scholes (1973) has poor pricing performance. Practitioners and academics have over the years developed different models under the assumption of non-constant volatility, without reaching any conclusion regarding which model is more suitable to use. This thesis examines four different models. The first is the Practitioners' Black & Scholes model proposed by Christoffersen & Jacobs (2004b). The second is Heston's (1993) continuous-time stochastic volatility model; a modification of this model, the Strike Vector Computation suggested by Kilin (2011), is also included. The last model is the Heston & Nandi (2000) Generalized Autoregressive Conditional Heteroscedasticity type discrete model. The models are evaluated from a practical point of view, with the goal of finding the model with the best pricing performance and the most practical usage. The models' robustness is also tested to see how they perform out-of-sample during high and low implied-volatility markets respectively. All the models are affected in the robustness test: out-of-sample ability deteriorates in a high implied-volatility market. The results show that both stochastic volatility models have superior performance in the in-sample and out-of-sample analyses, while the Generalized Autoregressive Conditional Heteroscedasticity type discrete model shows surprisingly poor results in both. The results indicate that option data should be used instead of historical return data to estimate the model's parameters. This thesis also provides insight into why overnight-index-swap (OIS) rates should be used instead of LIBOR rates as a proxy for the risk-free rate.
23

Silva, Eveliny Barroso da. "Contribuições em modelos de regressão com erro de medida multiplicativo." Universidade Federal de São Carlos, 2016. https://repositorio.ufscar.br/handle/ufscar/7738.

Abstract:
No funding was received.
In regression models in which a covariate is measured with error, it is common to use structures that relate the observed covariate to the true, unobserved covariate. Such structures are usually additive or multiplicative. The literature contains several interesting works dealing with regression models with an additive measurement error, many of them linear models with normally distributed covariates and measurement error. For models with a multiplicative measurement error, the theoretical development found in the literature is far less extensive, and the same holds in situations where the assumptions of normality for the covariates and the measurement errors do not apply. This work proposes the construction, definition, estimation methods and diagnostic analysis for regression models with a multiplicative measurement error in one of the covariates. For these models, the response variable may belong either to the class of modified power series regression models or to the exponential family. Since the list of distributions belonging to the modified power series family is quite comprehensive, the estimation and validation theory is first developed in a general way, and the negative binomial regression model with measurement error is presented as an example. For the case in which the response belongs to the exponential family, the beta regression model with multiplicative measurement error is presented. All proposed models were analysed through simulation studies and applied to real data sets.
24

Addam, Mohamed. "Approximation du problème de diffusion en tomographie optique et problème inverse." Littoral, 2010. http://www.theses.fr/2010DUNK0278.

Abstract:
The purpose of this thesis is to develop and study numerical methods for the solution of some partial differential equations (PDEs), in particular the diffusion equation in optical tomography. The work has two main parts: the first treats the direct problem, the second the inverse problem. For the direct problem, we assume that the optical parameters and the source functions are given, and the density of the luminous flux is the unknown function to be approximated numerically. Generally, reconstructing the numerical signal in this kind of problem requires a mesh in the time variable; to avoid such a discretisation, we use a technique based on the Fourier transform and its inverse. The methods use Gauss-Hermite quadrature as well as a Galerkin method based on B-splines, tensor-product B-splines, and radial basis functions (RBFs). B-splines are used in the one-dimensional case, tensor-product B-splines when the domain is rectangular with a uniform mesh, and, when the domain is no longer rectangular, radial basis functions built from a cloud of scattered points in the domain. From the theoretical point of view, we study the existence, uniqueness and regularity of the solution, and then propose some results on the error estimate in Sobolev-type spaces and on the convergence of the method. In the second part of this work, we are interested in the inverse diffusion problem: a non-linear inverse problem whose non-linearity is tied to the optical parameters. Assuming that measurements of the luminous flux on the boundary of the domain and the source functions are available, we solve the inverse problem so as to numerically recover the refractive index and the diffusion and absorption coefficients. Theoretically, we discuss results such as the continuity and the Fréchet differentiability of the operator measuring the luminous flux received on the boundary, and establish the Lipschitz property of the Fréchet derivative with respect to the optical parameters. Numerically, we consider the discrete problem in the B-spline and radial basis function bases, and then solve the non-linear inverse problem by the Gauss-Newton method.
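The Gauss-Newton iteration used for such non-linear inverse problems linearises the forward map at the current iterate and solves a linear least-squares problem for the update. A self-contained Python toy (the forward model is a stand-in exponential, not the discretised diffusion operator of the thesis):

```python
import numpy as np

def forward(p, t):
    """Assumed forward model F(p): amplitude p[0], decay rate p[1]."""
    return p[0] * np.exp(-p[1] * t)

def jac(p, t):
    """Jacobian of the forward model with respect to p."""
    return np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])

t = np.linspace(0.0, 5.0, 50)
p_true = np.array([2.0, 0.7])
rng = np.random.default_rng(2)
y = forward(p_true, t) + 1e-3 * rng.normal(size=t.size)   # noisy "measurements"

p = np.array([1.0, 1.0])                       # initial guess
for _ in range(20):
    r = forward(p, t) - y                      # residual at the current iterate
    step = np.linalg.lstsq(jac(p, t), -r, rcond=None)[0]  # J step = -r, in LS sense
    p = p + step
    if np.linalg.norm(step) < 1e-10:
        break
print(p)                                       # close to p_true
```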
25

Addam, Mohamed. "Approximation du problème diffusion en tomographie optique et problème inverse." Phd thesis, Université du Littoral Côte d'Opale, 2009. http://tel.archives-ouvertes.fr/tel-00579257.

Abstract:
This thesis deals with the approximation of partial differential equations, in particular the diffusion equation in optical tomography, and consists of two main parts: the direct problem, discussed in the first part, and the inverse problem, treated in the second. For the direct problem, we assume that the optical parameters and the source functions are given, and solve the diffusion problem in a domain where the density of the luminous flux is the unknown function to be approximated numerically. Most often, reconstructing the numerical signal in this kind of problem requires a discretisation in time; we propose using the Fourier transform and its inverse to avoid such a discretisation. The techniques used are Gauss-Hermite quadrature and a Galerkin method based on B-splines, tensor-product B-splines, and radial basis functions: B-splines in dimension one, tensor-product B-splines when the domain is rectangular with a uniform mesh, and, when the domain is no longer rectangular, radial basis functions built from a cloud of scattered points in the domain. From the theoretical point of view, we study the existence, uniqueness and regularity of the solution, and then propose some results on the error estimate in Sobolev-type spaces and on the convergence of the method. In the second part of our work, we are interested in the inverse problem: a non-linear inverse problem whose non-linearity is tied to the optical parameters. Assuming that measurements of the luminous flux on the boundary of the domain under study and the source functions are available, we solve the inverse problem so as to numerically simulate the refractive index and the diffusion and absorption coefficients. Theoretically, we discuss results such as the continuity and the Fréchet differentiability of the operator measuring the luminous flux received on the boundary, and establish the Lipschitz properties of the Fréchet derivative with respect to the optical parameters. Numerically, we consider the discrete problem in the B-spline and radial basis function bases, and then solve the non-linear inverse problem by the Gauss-Newton method.
26

Fookes, Clinton Brian. "Medical Image Registration and Stereo Vision Using Mutual Information." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15876/.

Abstract:
Image registration is a fundamental problem that can be found in a diverse range of fields within the research community. It is used in areas such as engineering, science, medicine, robotics, computer vision and image processing, which often require the process of developing a spatial mapping between sets of data. Registration plays a crucial role in the medical imaging field, where continual advances in imaging modalities, including MRI, CT and PET, allow the generation of 3D images that explicitly outline detailed in vivo information of not only human anatomy, but also human function. Mutual Information (MI) is a popular entropy-based similarity measure which has found use in a large number of image registration applications. Stemming from information theory, this measure generally outperforms most other intensity-based measures in multimodal applications, as it does not assume the existence of any specific relationship between image intensities; it only assumes a statistical dependence. The basic concept behind any approach using MI is to find a transformation which, when applied to an image, will maximise the MI between two images. This thesis presents research using MI in three major topics encompassed by the computer vision and medical imaging field: rigid image registration, stereo vision, and non-rigid image registration. In the rigid domain, a novel gradient-based registration algorithm (MIGH) is proposed that uses Parzen windows to estimate image density functions and Gauss-Hermite quadrature to estimate the image entropies. The use of this quadrature technique provides an effective and efficient way of estimating entropy while bypassing the need to draw a second sample of image intensities (a procedure required in previous Parzen-based MI registration approaches). It is possible to achieve identical results with the MIGH algorithm when compared to current state-of-the-art MI-based techniques. These results are achieved using half the previously required sample sizes, thus doubling the statistical power of the registration algorithm. Furthermore, the MIGH technique improves algorithm complexity by up to an order of N, where N represents the number of samples extracted from the images. In stereo vision, a popular passive method of depth perception, new extensions have been proposed in order to increase the robustness of MI-based stereo matching algorithms. Firstly, prior probabilities are incorporated into the MI measure to considerably increase the statistical power of the matching windows. The statistical power, directly related to the number of samples, can become too low when small matching windows are utilised. These priors, which are calculated from the global joint histogram, are tuned to a two-level hierarchical approach. A 2D match surface, in which the match score is computed for every possible combination of template and matching windows, is also utilised to enforce left-right consistency and uniqueness constraints. These additions to MI-based stereo matching significantly enhance the algorithm's ability to detect correct matches while decreasing computation time and improving accuracy, particularly when matching across multi-spectral stereo pairs. MI has also recently found use in the non-rigid domain due to a need to compute multimodal non-rigid transformations. The viscous fluid algorithm is perhaps the best method for recovering large local mis-registrations between two images. However, this model can only be used on images from the same modality, as it assumes similar intensity values between images. Consequently, a hybrid MI-Fluid algorithm is proposed to compute a multimodal non-rigid registration technique. MI is incorporated via the use of a block matching procedure to generate a sparse deformation field which drives the viscous fluid algorithm. This algorithm is also compared to two other popular local registration techniques, namely Gaussian convolution and the thin-plate spline warp, and is shown to produce comparable results. An improved block matching procedure is also proposed, whereby a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampler is used to optimally locate grid points of interest. These grid points have a larger concentration in regions of high information and a lower concentration in regions of small information. Previous methods utilise only a uniform distribution of grid points throughout the image.
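The entropy estimate at the heart of the MIGH idea can be sketched briefly: with a Gaussian-kernel Parzen density p built from a single sample, the integral -int p log p is a sum of Gaussian expectations of log p, and each expectation can be computed by Gauss-Hermite quadrature instead of drawing a second sample. A one-dimensional toy version in Python (synthetic "intensities", assumed bandwidth):

```python
import numpy as np
from scipy.special import roots_hermite

rng = np.random.default_rng(3)
sample = rng.normal(0.0, 1.0, size=200)        # stand-in for image intensities
h = 0.3                                        # Parzen bandwidth (assumed)

def parzen_logpdf(x):
    """Log of the Gaussian-kernel Parzen density built from `sample`."""
    d = (x[:, None] - sample[None, :]) / h
    return np.log(np.mean(np.exp(-0.5 * d * d), axis=1) / (h * np.sqrt(2.0 * np.pi)))

u, w = roots_hermite(30)
# H = -(1/N) sum_j E_{x ~ N(x_j, h^2)}[log p(x)], each expectation by GH quadrature
H = 0.0
for xj in sample:
    x = xj + np.sqrt(2.0) * h * u
    H -= np.sum(w * parzen_logpdf(x)) / np.sqrt(np.pi)
H /= sample.size
print(H, 0.5 * np.log(2.0 * np.pi * np.e))     # compare with the exact N(0,1) entropy
```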
27

Boutry, Grégory. "Contributions à l'approximation et à l'algèbre linéaire numérique." Lille 1, 2003. https://pepite-depot.univ-lille.fr/RESTREINT/Th_Num/2003/50376-2003-301.pdf.

Abstract:
This work presents results in approximation and in numerical linear algebra. The research mainly concerns approximation, inverse problems, wavelets, minimisation problems, generalized eigenvalue problems and linear dynamical systems. The first part presents a new rational approximant, the construction of Hermitian and complex symmetric matrices from knowledge of their eigenvalues and diagonal, and the construction of orthogonal polynomial wavelets as an inverse eigenvalue problem. A construction of Jacobi-Kronrod matrices is given, together with a necessary and sufficient condition for constructing them. The second part deals with minimisation problems and generalized eigenvalue problems for rectangular matrices. We study the relation with the pseudospectrum of a rectangular pencil and propose a new factorization for this type of pencil. Secondly, we generalize the previously proposed method to linear dynamical systems for the notions of uncontrollability and unobservability, giving the distances associated with these notions and the matrices solving this problem.
28

Tomaschewski, Fernanda Krüger. "Solução da equação Sn multigrupo de transporte dependente do tempo em meio heterogêneo." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/61142.

Abstract:
Neste trabalho, apresentamos uma solução analítica para a aproximação SN da equação de transporte dependente do tempo com fonte, tanto para uma placa homogênea quanto heterogênea, assumindo modelo de multigrupo. A ideia principal envolve os seguintes passos: construção da solução para a equação mencionada em uma placa homogênea pela aplicação da técnica da dupla transformada de Laplace. Para tal, inicialmente aplicamos a transformada de Laplace na variável tempo, resolvendo, na sequência, a equação resultante pelo método LTSN. Finalmente determinamos a solução procurada para o fluxo angular usando o teorema de inversão da transformada de Laplace. Por este procedimento a solução é escrita em termos de uma integral de linha na variável tempo, a qual aqui é avaliada pelos seguintes esquemas numéricos: quadratura Gaussiana, Série de Fourier, Gaver-Stehfest e Gaver Wynn-Rho. Uma vez que a solução para o problema homogêneo é conhecida, determinamos a solução para a placa de multi-camadas usando a solução encontrada para uma placa genérica, o que nos possibilita obtermos a solução global para uma placa heterogênea aplicando a condição de contorno e também impondo a condição de continuidade para o fluxo angular nas interfaces. Concluímos apresentando comparações entre os resultados numéricos obtidos pela inversão numérica da transformada de Laplace considerada, bem como o comportamento assintótico desta solução quando o tempo vai para o infinito.
In this dissertation an analytical solution is presented for the SN approximation of the time-dependent transport equation with source, for homogeneous as well as heterogeneous slabs, assuming a multigroup model with isotropic scattering. The main idea involves the following steps, in this order: construction of a solution to the mentioned equation in a homogeneous slab by applying the double Laplace transform technique. To this end, the Laplace transform is applied in the time variable and the resulting equation is solved by the LTSN method. Finally, the sought solution for the angular flux is determined using the Laplace transform inversion theorem. By this procedure the solution is written in terms of a line integral in the time variable, which is evaluated here by the following numerical schemes: Gauss quadrature, Fourier series, Gaver-Stehfest and Gaver-Wynn-Rho. Once the solution for the homogeneous problem is known, the solution for the multilayered slab is determined by assigning this homogeneous solution to a generic slab, which allows us to obtain the global solution for the heterogeneous one by applying the boundary conditions and imposing the continuity condition for the angular flux at the interfaces. We conclude by reporting numerical comparisons among the results attained by the Laplace transform inversion approaches considered, as well as the asymptotic behavior of this solution as time goes to infinity.
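As a pointer to one of the inversion schemes named above, here is a minimal, self-contained Gaver-Stehfest inverter (a generic textbook version, not the code used in the thesis):

```python
from math import factorial, log

def gaver_stehfest(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s); N must be even."""
    h = N // 2
    total = 0.0
    for k in range(1, N + 1):
        v = 0.0
        for j in range((k + 1) // 2, min(k, h) + 1):
            v += (j**h * factorial(2 * j)
                  / (factorial(h - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        total += (-1) ** (k + h) * v * F(k * log(2) / t)
    return log(2) / t * total

# Sanity check on F(s) = 1/(s+1), whose inverse transform is exp(-t):
print(gaver_stehfest(lambda s: 1.0 / (s + 1.0), 1.0))  # ~ 0.3679
```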
APA, Harvard, Vancouver, ISO, and other styles
29

Tchiotsop, Daniel. "Modélisations polynomiales des signaux ECG : applications à la compression." Thesis, Vandoeuvre-les-Nancy, INPL, 2007. http://www.theses.fr/2007INPL088N/document.

Full text
Abstract:
La compression des signaux ECG trouve encore plus d’importance avec le développement de la télémédecine. En effet, la compression permet de réduire considérablement les coûts de la transmission des informations médicales à travers les canaux de télécommunication. Notre objectif dans ce travail de thèse est d’élaborer des nouvelles méthodes de compression des signaux ECG à base des polynômes orthogonaux. Pour commencer, nous avons étudié les caractéristiques des signaux ECG, ainsi que différentes opérations de traitements souvent appliquées à ce signal. Nous avons aussi décrit de façon exhaustive et comparative, les algorithmes existants de compression des signaux ECG, en insistant sur ceux à base des approximations et interpolations polynomiales. Nous avons abordé par la suite, les fondements théoriques des polynômes orthogonaux, en étudiant successivement leur nature mathématique, les nombreuses et intéressantes propriétés dont ils disposent et aussi les caractéristiques de quelques-uns de ces polynômes. La modélisation polynomiale du signal ECG consiste d’abord à segmenter ce signal en cycles cardiaques après détection des complexes QRS, ensuite, on devra décomposer dans des bases polynomiales, les fenêtres de signaux obtenues après la segmentation. Les coefficients produits par la décomposition sont utilisés pour synthétiser les segments de signaux dans la phase de reconstruction. La compression revient à utiliser un petit nombre de coefficients pour représenter un segment de signal constitué d’un grand nombre d’échantillons. Nos expérimentations ont établi que les polynômes de Laguerre et les polynômes d’Hermite ne conduisaient pas à une bonne reconstruction du signal ECG. Par contre, les polynômes de Legendre et les polynômes de Tchebychev ont donné des résultats intéressants. En conséquence, nous concevons notre premier algorithme de compression de l’ECG en utilisant les polynômes de Jacobi. Lorsqu’on optimise cet algorithme en supprimant les effets de bords, il devient universel et n’est plus dédié à la compression des seuls signaux ECG. Bien qu’individuellement, ni les polynômes de Laguerre, ni les fonctions d’Hermite ne permettent une bonne modélisation des segments du signal ECG, nous avons imaginé l’association des deux systèmes de fonctions pour représenter un cycle cardiaque. Le segment de l’ECG correspondant à un cycle cardiaque est scindé en deux parties dans ce cas: la ligne isoélectrique qu’on décompose en séries de polynômes de Laguerre et les ondes P-QRS-T modélisées par les fonctions d’Hermite. On obtient un second algorithme de compression des signaux ECG robuste et performant
Developing new ECG data compression methods has become more important with the implementation of telemedicine. In fact, compression schemes can considerably reduce the cost of medical data transmission through modern telecommunication networks. Our aim in this thesis is to elaborate compression algorithms for ECG data using orthogonal polynomials. To start, we studied the physiological origin of the ECG and analysed its signal patterns, including characteristic waves and some signal processing procedures generally applied to the ECG. We also made an exhaustive review of ECG data compression algorithms, putting special emphasis on methods based on polynomial approximations or interpolations. We next dealt with the theory of orthogonal polynomials: we covered their mathematical construction and studied their various interesting properties. The modelling of ECG signals with orthogonal polynomials includes two stages. Firstly, the ECG signal is divided into blocks, after QRS detection, that match cardiac cycles. The second stage is the decomposition of blocks into polynomial bases; the decomposition yields coefficients which are used to synthesize the reconstructed signal. Compression consists in using a small number of coefficients to represent a block made of a large number of signal samples. We decomposed ECG signals into several orthogonal polynomial bases: Laguerre polynomials and Hermite polynomials did not bring good signal reconstruction, whereas interesting results were recorded with Legendre polynomials and Tchebychev polynomials. Consequently, our first algorithm for ECG data compression was designed using Jacobi polynomials. This algorithm can be optimized by suppressing boundary effects; it then becomes universal and can be used to compress other types of signals such as audio and images. Although Laguerre polynomials and Hermite functions individually did not lead to good signal reconstruction, we combined both systems of functions to realize ECG compression. To that end, every block of ECG signal that matches a cardiac cycle is split in two parts: the baseline section of the ECG is decomposed in a series of Laguerre polynomials, and the P-QRS-T waves are modelled with Hermite functions. This second algorithm for ECG data compression is robust and very competitive
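The compression principle described above (many samples, few coefficients) can be sketched in a few lines; the toy segment and the 16-coefficient budget below are illustrative assumptions, not values from the thesis:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

segment = np.sin(np.linspace(0, np.pi, 256)) ** 3   # stand-in for one ECG window
x = np.linspace(-1, 1, segment.size)                # map samples onto [-1, 1]
coeffs = C.chebfit(x, segment, deg=15)              # 16 coefficients, ~16:1 ratio
reconstructed = C.chebval(x, coeffs)                # synthesis/reconstruction step
print(np.max(np.abs(segment - reconstructed)))      # reconstruction error
```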
APA, Harvard, Vancouver, ISO, and other styles
30

Meguellati, Fatima. "Estimation par approximation de Laplace dans les modèles GLM Mixtes : application à la gravité corporelle maximale des accidents de la route." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10204/document.

Full text
Abstract:
Cette thèse est une contribution à la construction de méthodes statistiques applicables à l’évaluation (modélisation et estimation) de certains indices utilisés pour analyser la gravité corporelle des accidents de la route. On se focalise sur quatre points lors du développement de la méthodologie adoptée : la sélection des variables (ou facteurs) présentant un effet aléatoire, la construction de modèles logistique-normaux mixtes, l’estimation des paramètres par approximation de Laplace et PQL (quasi-vraisemblance pénalisée), et la comparaison de la performance des méthodes d’estimation. Dans une première contribution, on construit un modèle logistique-Normal avec « Type de collision » comme variable à effet aléatoire pour analyser la gravité corporelle maximale observée dans un échantillon de véhicules accidentés. Des méthodes d’estimation fondées sur l’approximation de Laplace de la log-vraisemblance sont proposées pour estimer et analyser la contribution des variables présentes dans le modèle. On compare, par simulation, cette approximation Laplacienne à celle basée sur l’adaptation des polynômes de Gauss-Hermite (AGH). On montre que les deux approches sont équivalentes par rapport à la précision de l’estimation bien qu’AGH soit légèrement supérieure. Une deuxième contribution consiste à adapter certains algorithmes de la famille PQL à l’estimation des paramètres d’un deuxième modèle et à comparer sa performance en termes de biais aux méthodes de Laplace et AGH. Deux exemples de données simulées illustrent les résultats obtenus. Dans une troisième et dense contribution, on identifie plusieurs modèles logistique-normaux mixtes avec plus d’un effet aléatoire. La convergence numérique des algorithmes (Laplace, AGH, PQL) ainsi que la précision des estimations sont étudiées. Des simulations ainsi qu’une base de données détaillées d’accidents sont utilisées pour analyser la performance des modèles à détecter des véhicules contenant des usagers ayant des blessures graves corporelles maximales. Une programmation orientée R accompagnent l’ensemble des résultats obtenus. La thèse se termine sur des perspectives relatives aux critères de sélection de modèles GLM Mixtes et à l’extension de ces modèles à la famille multinomiale
This thesis is a contribution to the construction of statistical methods for the evaluation (modeling and estimation) of some indices used to analyze the injury severity of road crashes. We focus on four points during the development of the adopted methodology: the selection of variables (or factors) with random effects, the construction of mixed logistic-normal models, parameter estimation by Laplace approximation and PQL (penalized quasi-likelihood), and the performance comparison of the estimation methods. In a first contribution, a logistic-normal model is constructed with "collision type" as the random-effect variable to analyze the maximum injury severity observed in a sample of crashed vehicles. Estimation methods based on the Laplace approximation of the log-likelihood are proposed to estimate and analyze the contribution of the variables in the model. We compare, by simulation, this Laplacian approximation to those based on adaptive Gauss-Hermite quadrature (AGH). We show that the two approaches are equivalent with respect to the accuracy of the estimates, although AGH is slightly superior. A second contribution is to adapt some algorithms of the PQL family to estimate the parameters of a second model and to compare its performance to the Laplace and AGH methods in terms of bias. Two examples of simulated data illustrate the obtained results. In a third and dense contribution, we identify several mixed logistic-normal models with more than one random effect. The convergence of the algorithms (Laplace, AGH, and PQL) and the precision of the estimates are investigated. Simulations as well as a database of detailed crash data are used to analyze the models' performance in detecting vehicles containing users with maximum injury severity. R code accompanies all the results. The thesis concludes with perspectives on selection criteria for mixed GLM models and the extension of these models to the multinomial family
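For orientation, the plain Gauss-Hermite approximation that AGH adapts can be sketched as follows for a single cluster of binary responses with a normal random intercept (the model and names are our simplification, not the thesis's):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_loglik(y, eta, sigma, n_nodes=20):
    """log of the integral over b of prod_i p_i(b)^y_i (1-p_i(b))^(1-y_i) N(b; 0, sigma^2) db,
    with p_i(b) the inverse-logit of eta_i + b, via Gauss-Hermite quadrature."""
    t, w = hermgauss(n_nodes)                 # physicists' nodes and weights
    b = np.sqrt(2.0) * sigma * t              # change of variables b = sqrt(2)*sigma*t
    p = 1.0 / (1.0 + np.exp(-(eta[:, None] + b[None, :])))
    lik = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return np.log(np.sum(w * lik) / np.sqrt(np.pi))

print(marginal_loglik(np.array([1, 0, 1]), np.zeros(3), sigma=1.0))
```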
APA, Harvard, Vancouver, ISO, and other styles
31

Martin, Petitfrere. "EOS based simulations of thermal and compositional flows in porous media." Thesis, Pau, 2014. http://www.theses.fr/2014PAUU3036/document.

Full text
Abstract:
Les calculs d'équilibres triphasiques et quadriphasiques sont au cœur des simulations de réservoirs impliquant des processus de récupération tertiaire. Dans les procédés d'injection de gaz ou de vapeur, le système huile-gaz est enrichi d'une nouvelle phase qui joue un rôle important dans la récupération de l'huile en place. Les calculs d'équilibres représentent la majeure partie des temps de calculs dans les simulations de réservoir compositionnelles où les routines thermodynamiques sont appelées un nombre conséquent de fois. Il est donc important de concevoir des algorithmes qui soient fiables, robustes et rapides. Dans la littérature, peu de simulateurs basés sur des équations d'état sont applicables aux procédés de récupération thermique. À notre connaissance, il n'existe pas de simulation thermique complètement compositionnelle de ces procédés pour des cas d'application aux huiles lourdes. Ces simulations apparaissent essentielles et pourraient offrir des outils améliorés pour l'étude prédictive de certains champs. Dans cette thèse, des algorithmes robustes et efficaces de calculs d'équilibre multiphasiques sont proposés, permettant de surmonter les difficultés rencontrées durant les simulations d'injection de vapeur pour des huiles lourdes. La plupart des algorithmes d'équilibre de phases sont basés sur la méthode de Newton et utilisent les variables conventionnelles comme variables indépendantes. Dans un premier temps, des améliorations de ces algorithmes sont proposées. Les variables réduites permettent de réduire la dimensionnalité du système de nc (nombre de composants) dans le cas des variables conventionnelles, à M (M<
Three to four phase equilibrium calculations are at the heart of tertiary recovery simulations. In gas/steam injection processes, additional phases emerging from the oil-gas system are added to the set and have a significant impact on the oil recovery. The most important computational effort in many chemical process simulators and in petroleum compositional reservoir simulations is required by phase equilibrium and thermodynamic property calculations. In field scale reservoir simulations, a huge number of phase equilibrium calculations is required. For all these reasons, the algorithms must be robust and time-saving. In the literature, few simulators based on equations of state (EoS) are applicable to thermal recovery processes such as steam injection. To the best of our knowledge, no fully compositional thermal simulation of the steam injection process has been proposed with extra-heavy oils; these simulations are essential and will offer improved tools for predictive studies of the heavy oil fields. Thus, in this thesis different algorithms of improved efficiency and robustness for multiphase equilibrium calculations are proposed, able to handle conditions encountered during the simulation of steam injection for heavy oil mixtures. Most of the phase equilibrium calculations are based on the Newton method and use conventional independent variables. These algorithms are first investigated and different improvements are proposed. Michelsen’s (Fluid Phase Equil. 9 (1982) 21-40) method for multiphase-split problems is modified to take full advantage of symmetry (in the construction of the Jacobian matrix and the resolution of the linear system). The reduction methods make it possible to reduce the dimension of the problem from nc (number of components) with conventional variables to M (M<
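As background for the phase-split calculations discussed above, the sketch below solves the standard two-phase Rachford-Rice equation by Newton's method; it is a generic kernel, not the thesis's multiphase algorithm:

```python
import numpy as np

def rachford_rice(z, K, beta=0.5, tol=1e-12, max_iter=50):
    """Vapor fraction beta solving sum_i z_i (K_i - 1) / (1 + beta (K_i - 1)) = 0."""
    z, K = np.asarray(z, float), np.asarray(K, float)
    for _ in range(max_iter):
        d = 1.0 + beta * (K - 1.0)
        f = np.sum(z * (K - 1.0) / d)
        fp = -np.sum(z * (K - 1.0) ** 2 / d ** 2)  # f is monotone decreasing in beta
        step = f / fp
        beta -= step
        if abs(step) < tol:
            break
    return beta

print(rachford_rice(z=[0.5, 0.3, 0.2], K=[2.5, 1.2, 0.3]))  # root lies in (0, 1) here
```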
APA, Harvard, Vancouver, ISO, and other styles
32

Ourique, Luiz Eduardo. "Eficiência probabilística de algoritmos numéricos." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1990. http://hdl.handle.net/10183/127095.

Full text
Abstract:
Seguindo as ideias de S. Smale, estudamos a eficiência probabilística de algoritmos numéricos para equações diferenciais ordinárias. Especial atenção é dada a dois exemplos clássicos: os algoritmos de Runge-Kutta de dois e de quatro estágios, sendo a sua eficiência estimada em termos de medidas gaussianas. Em ambos os casos, são obtidas estimativas detalhadas que levam a uma expressão para a média do erro global.
Following the ideas of S. Smale, we study the probabilistic efficiency of numerical algorithms for ordinary differential equations. Special attention is directed to two classical examples, the two- and four-stage Runge-Kutta algorithms, whose efficiency is estimated in terms of Gaussian measures. In both cases detailed estimates are given, leading to an expression for the mean global error.
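For reference, the four-stage scheme analyzed above is the classical RK4 step; this generic implementation is ours, not the thesis's:

```python
def rk4_step(f, t, y, h):
    """One classical four-stage Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# One step on y' = y, y(0) = 1; compare with exp(0.1) ~ 1.10517
print(rk4_step(lambda t, y: y, 0.0, 1.0, 0.1))
```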
APA, Harvard, Vancouver, ISO, and other styles
33

Tchiotsop, Daniel. "Modélisations polynomiales des signaux ECG. Application à la compression." Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2007. http://tel.archives-ouvertes.fr/tel-00197549.

Full text
Abstract:
ECG signal compression is all the more important with the development of telemedicine. Indeed, compression considerably reduces the cost of transmitting medical information through telecommunication channels. Our objective in this thesis is to develop new ECG compression methods based on orthogonal polynomials. We first studied the characteristics of ECG signals, as well as the various processing operations often applied to this signal. We also gave an exhaustive, comparative description of the existing ECG compression algorithms, with emphasis on those based on polynomial approximations and interpolations. We then addressed the theoretical foundations of orthogonal polynomials, studying in turn their mathematical nature, the numerous interesting properties they possess, and the characteristics of some of these polynomials. Polynomial modelling of the ECG signal consists first in segmenting the signal into cardiac cycles after detection of the QRS complexes, then in decomposing the signal windows obtained after segmentation into polynomial bases. The coefficients produced by the decomposition are used to synthesize the signal segments in the reconstruction phase. Compression amounts to using a small number of coefficients to represent a signal segment made up of a large number of samples. Our experiments established that Laguerre polynomials and Hermite polynomials did not lead to a good reconstruction of the ECG signal, whereas Legendre polynomials and Tchebychev polynomials gave interesting results. We therefore designed our first ECG compression algorithm using Jacobi polynomials. When this algorithm is optimized by suppressing boundary effects, it becomes universal and is no longer dedicated solely to ECG compression. Although neither Laguerre polynomials nor Hermite functions individually allow a good modelling of ECG signal segments, we devised the association of the two systems of functions to represent a cardiac cycle. The ECG segment corresponding to a cardiac cycle is then split into two parts: the isoelectric line, decomposed into series of Laguerre polynomials, and the P-QRS-T waves, modelled by Hermite functions. This yields a second, robust and efficient ECG compression algorithm.
APA, Harvard, Vancouver, ISO, and other styles
34

Aloui, Asma. "Approche combinée théorie-expérience pour la catalyse d’hydrogénation asymétrique." Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10291/document.

Full text
Abstract:
Plusieurs études ont rapporté l’influence de la pression d’hydrogène, plus précisément la concentration réelle en hydrogène dissous dans le milieu réactionnel, sur l’énantiosélectivité des réactions d’hydrogénation catalytique faisant intervenir des catalyseurs à base de rhodium. Cependant, l’identification de l’étape ou des étapes énantiodéterminantes ou limitantes, ainsi que l’explication de l’effet de la pression d’hydrogène sur cette étape, exigent la détermination des constantes cinétiques de chaque étape élémentaire. Ce projet de recherche vise une telle détermination en combinant études expérimentale et théorique. Dans un premier temps, un système catalytique présentant deux effets opposés de la pression d’hydrogène selon la nature du substrat a été identifié : un effet néfaste avec le M-acrylate (MAA) et un effet bénéfique avec l’E-emap. Ensuite, deux études ont été menées sur les réactions d’hydrogénation de ces deux substrats par le Rh(I)/(R,R)-Me-BPE. L’étude cinétique expérimentale est basée sur le modèle cinétique proposé par Halpern dans le but d’estimer les paramètres cinétiques des différentes étapes élémentaires, alors que l’étude théorique consiste à étudier les différents chemins réactionnels possibles par calcul DFT en utilisant le logiciel de modélisation Gaussian 03. L’exploitation des résultats obtenus a permis de revisiter les concepts clés de la catalyse d’hydrogénation asymétrique et de mener une discussion sur la fiabilité des méthodes théoriques à prévoir l’expérience
Several studies have reported the influence of the hydrogen pressure, more precisely the actual concentration of hydrogen dissolved in solution, on the enantioselectivity of catalytic asymmetric hydrogenation with rhodium-based catalysts. However, to identify the enantiodetermining step(s), and to gain further understanding of the hydrogen pressure-enantioselectivity relationship, the determination of the kinetic constants is required. We have thus embarked on a project aiming at such a determination by coupling experimental work and theoretical chemistry. First, a catalytic system exhibiting two opposite effects of the hydrogen pressure depending on the substrate was identified: a detrimental effect with M-acrylate (MAA) and a beneficial effect with E-emap. Two studies were then undertaken on the asymmetric hydrogenation of both substrates by the Rh(I)/(R,R)-Me-BPE catalyst. The experimental kinetic study is based on the kinetic model suggested by Halpern in order to estimate the kinetic parameters of each elementary step, whereas the theoretical one consists in studying the various possible pathways by DFT calculations using the Gaussian 03 modelling software. The analysis of the obtained results made it possible to revisit the key concepts of catalytic asymmetric hydrogenation and to discuss the reliability of theoretical methods in predicting experiment
APA, Harvard, Vancouver, ISO, and other styles
35

Sauer, Laurete Zanol. "Solução da equação de transporte multigrupo com núcleo de espalhamento de Klein-Nishina : uma aplicação ao cálculo de dose." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1997. http://hdl.handle.net/10183/127097.

Full text
Abstract:
Neste trabalho propomos uma solução para a equação de transporte multigrupo com núcleo de espalhamento de Klein-Nishina. A idéia básica consiste na aproximação do termo integral em energia, resultando em uma solução final para valores discretos de energia. Resolvemos o sistema resultante, em termos da variável espacial e angular, usando o método LTSN, que fornece uma solução analítica para o problema de ordenadas discretas. Aplicamos essa formulação ao cálculo de dose e apresentamos resultados numéricos para quatro e cinco valores de energia.
In this work we propose a solution to the multigroup transport equation with the Klein-Nishina scattering kernel. The main idea is the approximation of the integral energy term such that we obtain the final solution for discrete energy values. We solve the resulting system, in terms of the spatial and angular variables, using the LTSN method, which provides an analytical solution to the discrete ordinates problem. We applied the formulation to the calculation of dose and we present numerical results for four and five energy values.
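The discrete-ordinates ingredient underlying the LTSN method can be sketched directly: the S_N angular directions and weights are commonly taken as the N-point Gauss-Legendre rule on [-1, 1] (a standard choice we assume here, not a detail stated in the abstract):

```python
import numpy as np

# Directions mu_k and weights w_k for an S_8 approximation:
mu, w = np.polynomial.legendre.leggauss(8)
print(mu)        # the discrete ordinates
print(w.sum())   # weights integrate a constant exactly: sum equals 2
```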
APA, Harvard, Vancouver, ISO, and other styles
36

Vu, Thi Lan Huong. "Analyse statistique locale de textures browniennes multifractionnaires anisotropes." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0094.

Full text
Abstract:
Nous construisons quelques extensions anisotropes des champs browniens multifractionnaires qui rendent compte de phénomènes spatiaux dont les propriétés de régularité et de directionnalité peuvent varier dans l’espace. Notre objectif est de mettre en place des tests statistiques pour déterminer si un champ observé de ce type est hétérogène ou non. La méthodologie statistique repose sur une analyse du champ par variations quadratiques, qui sont des moyennes d’incréments du champ au carré. Dans notre approche, ces variations sont calculées localement dans plusieurs directions. Nous établissons un résultat asymptotique montrant une relation linéaire gaussienne entre ces variations et des paramètres liés à la régularité et aux propriétés directionnelles. En utilisant ce résultat, nous concevons ensuite une procédure de test basée sur les statistiques de Fisher des modèles linéaires gaussiens. Nous évaluons cette procédure sur des données simulées. Enfin, nous concevons des algorithmes pour la segmentation d’une image en régions de textures homogènes. Le premier algorithme est basé sur une procédure K-means qui prend en entrée les paramètres estimés et tient compte de leurs distributions de probabilité théoriques. Le deuxième algorithme est basé sur un algorithme EM qui alterne à chaque boucle les deux étapes (E) et (M) ; les valeurs trouvées en (E) et (M) à chaque boucle sont utilisées pour les calculs de la boucle suivante. Finalement, nous présentons une application de ces algorithmes dans le cadre d’un projet pluridisciplinaire visant à optimiser le déploiement de panneaux photovoltaïques sur le terrain. Nous traitons d’une étape de prétraitement du projet qui concerne la segmentation des images du satellite Sentinel-2 en régions où la couverture nuageuse est homogène
We deal with some anisotropic extensions of the multifractional Brownian fields that account for spatial phenomena whose properties of regularity and directionality may both vary in space. Our aim is to set up statistical tests to decide whether an observed field of this kind is heterogeneous or not. The statistical methodology relies upon a field analysis by quadratic variations, which are averages of square field increments. Specific to our approach, these variations are computed locally in several directions. We establish an asymptotic result showing a linear Gaussian relationship between these variations and parameters related to the regularity and directional properties of the model. Using this result, we then design a test procedure based on Fisher statistics of linear Gaussian models, and we evaluate this procedure on simulated data. Finally, we design some algorithms for the segmentation of an image into regions of homogeneous textures. The first algorithm is based on a K-means procedure which takes the estimated parameters as input and takes into account their theoretical probability distributions. The second algorithm is based on an EM algorithm which alternates the two steps (E) and (M) at each loop; the values found in (E) and (M) at each loop are used for the calculations in the next loop. Eventually, we present an application of these algorithms in the context of a pluridisciplinary project which aims at optimizing the deployment of photovoltaic panels on the ground. We deal with a preprocessing step of the project which concerns the segmentation of images from the satellite Sentinel-2 into regions where the cloud cover is homogeneous
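A stripped-down version of the directional quadratic variations used above (computed globally here rather than locally, with our own notation and non-negative integer offsets) is:

```python
import numpy as np

def quadratic_variation(field, du, dv):
    """Mean squared increment of a 2-D field along the lattice direction (du, dv)."""
    h, w = field.shape
    inc = field[du:h, dv:w] - field[0:h - du, 0:w - dv]
    return np.mean(inc ** 2)

rng = np.random.default_rng(1)
B = np.cumsum(rng.standard_normal((256, 256)), axis=0)   # anisotropic toy field
# Small variation along the "random walk" axis, much larger across it:
print(quadratic_variation(B, 1, 0), quadratic_variation(B, 0, 1))
```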
APA, Harvard, Vancouver, ISO, and other styles
37

Yang, Mingming. "Development of the partition of unity finite element method for the numerical simulation of interior sound field." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2282/document.

Full text
Abstract:
Dans ce travail, nous avons introduit le concept sous-jacent de la PUFEM et la formulation de base liée à l’équation de Helmholtz dans un domaine borné. Le processus d’enrichissement par ondes planes des variables PUFEM a été montré et expliqué en détail. L’idée principale est d’inclure une connaissance a priori du comportement local de la solution dans l’espace des éléments finis en utilisant un ensemble de fonctions d’onde qui sont solutions des équations aux dérivées partielles. Dans cette étude, l’utilisation d’ondes planes se propageant dans différentes directions a été privilégiée car elle conduit à des algorithmes de calcul efficaces. En outre, nous avons montré que le nombre de directions d’ondes planes dépend de la taille de l’élément PUFEM et de la fréquence de l’onde, en 2D comme en 3D. Les approches de sélection de ces ondes planes sont également illustrées. Pour les problèmes 3D, nous avons étudié deux schémas de distribution des directions d’ondes planes : la méthode du cube discrétisé et la méthode de la force de Coulomb. Il a été montré que cette dernière permet d’obtenir des directions d’onde uniformément espacées et un nombre arbitraire d’ondes planes attachées à chaque nœud de l’élément PUFEM, ce qui rend la méthode plus souple. Dans le chapitre 3, nous avons étudié la simulation numérique des ondes se propageant en deux dimensions à l’aide de la PUFEM. La priorité principale de ce chapitre est d’aboutir à un schéma d’intégration exact (EIS), conduisant à un algorithme d’intégration rapide pour le calcul des matrices de coefficients du système avec une grande précision. L’élément PUFEM 2D a ensuite été utilisé pour résoudre un problème de transmission acoustique impliquant des matériaux poreux. Les résultats ont été vérifiés et validés par comparaison avec des solutions analytiques. Les comparaisons entre le schéma d’intégration exact (EIS) et la quadrature de Gauss ont montré le gain substantiel offert par l’EIS en termes de temps CPU. Un schéma d’intégration exact 3D a été présenté dans le chapitre 4, afin d’accélérer et de calculer avec précision (jusqu’à la précision machine) les intégrales très oscillantes provenant des coefficients de la matrice PUFEM associés à l’équation de Helmholtz 3D. Grâce à des tests de convergence, un critère de sélection du nombre d’ondes planes a été proposé. Il a été montré que ce nombre ne croît que quadratiquement avec la fréquence, ce qui donne lieu à une réduction drastique du nombre total de degrés de liberté par rapport à la FEM classique. La méthode a été vérifiée sur deux exemples numériques. Dans les deux cas, la méthode converge vers la solution exacte. Pour le problème de la cavité avec une source monopolaire située à l’intérieur, nous avons testé deux modèles numériques pour évaluer leur performance relative. Dans ce scénario, où la solution exacte est singulière, le nombre de directions d’onde doit être choisi suffisamment élevé pour garantir la convergence des résultats. Dans le dernier chapitre, nous avons étudié les performances numériques de la PUFEM pour la résolution de champs sonores intérieurs 3D et de problèmes de transmission d’ondes en présence de matériaux absorbants, dans le cas particulier d’un matériau à réaction localisée modélisé par une impédance de surface. Un critère d’estimation d’erreur numérique est proposé en considérant simplement une impédance purement imaginaire, connue pour produire des solutions à valeurs réelles. Sur la base de cette estimation d’erreur, il a été démontré que la PUFEM peut fournir des solutions précises tout en conservant un coût de calcul très faible ; environ 2 degrés de liberté par longueur d’onde se sont avérés suffisants. Nous avons également étendu la PUFEM à la résolution des problèmes de transmission d’ondes entre l’air et un matériau poreux modélisé comme un fluide homogène équivalent
In this work, we have introduced the underlying concept of PUFEM and the basic formulation related to the Helmholtz equation in a bounded domain. The plane wave enrichment process of PUFEM variables was shown and explained in detail. The main idea is to include a priori knowledge about the local behavior of the solution into the finite element space by using a set of wave functions that are solutions to the partial differential equations. In this study, the use of plane waves propagating in various directions was favored as it leads to efficient computing algorithms. In addition, we showed that the number of plane wave directions depends on the size of the PUFEM element and the wave frequency, both in 2D and 3D. The selection approaches for these plane waves were also illustrated. For 3D problems, we have investigated two distribution schemes of plane wave directions, which are the discretized cube method and the Coulomb force method. It has been shown that the latter allows us to get uniformly spaced wave directions and enables us to acquire an arbitrary number of plane waves attached to each node of the PUFEM element, making the method more flexible. In Chapter 3, we investigated the numerical simulation of propagating waves in two dimensions using PUFEM. The main priority of this chapter is to come up with an Exact Integration Scheme (EIS), resulting in a fast integration algorithm for computing system coefficient matrices with high accuracy. The 2D PUFEM element was then employed to solve an acoustic transmission problem involving porous materials. Results have been verified and validated through comparison with analytical solutions. Comparisons between the Exact Integration Scheme (EIS) and Gaussian quadrature showed the substantial gain offered by the EIS in terms of CPU time. A 3D Exact Integration Scheme was presented in Chapter 4, in order to accelerate and compute accurately (up to machine precision) the highly oscillatory integrals arising from the PUFEM matrix coefficients associated with the 3D Helmholtz equation. Through convergence tests, a criterion for selecting the number of plane waves was proposed. It was shown that this number only grows quadratically with the frequency, thus giving rise to a drastic reduction in the total number of degrees of freedom in comparison to classical FEM. The method has been verified on two numerical examples. In both cases, the method is shown to converge to the exact solution. For the cavity problem with a monopole source located inside, we tested two numerical models to assess their relative performance. In this scenario, where the exact solution is singular, the number of wave directions has to be chosen sufficiently high to ensure that results have converged. In the last chapter, we have investigated the numerical performances of the PUFEM for solving 3D interior sound fields and wave transmission problems in which absorbing materials are present. For the specific case of a locally reacting material modeled by a surface impedance, a numerical error estimation criterion is proposed by simply considering a purely imaginary impedance, which is known to produce real-valued solutions. Based on this error estimate, it has been shown that the PUFEM can achieve accurate solutions while maintaining a very low computational cost, and only around 2 degrees of freedom per wavelength were found to be sufficient.
We also extended the PUFEM for solving wave transmission problems between the air and a porous material modeled as an equivalent homogeneous fluid. A simple 1D problem was tested (standing wave tube) and the PUFEM solutions were found to have around 1% error, which is sufficient for engineering purposes
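The EIS-versus-Gauss point made above can be reproduced on a toy integral: Gauss-Legendre needs many points for oscillatory PUFEM-type integrands, whereas a closed form is exact. The wavenumber and node counts below are arbitrary illustrative choices:

```python
import numpy as np

k = 60.0
exact = 2.0 * np.sin(k) / k                       # closed form of the integral of exp(i k x) on [-1, 1]
for n in (10, 30, 60, 120):
    x, w = np.polynomial.legendre.leggauss(n)
    approx = np.sum(w * np.exp(1j * k * x))       # n-point Gauss-Legendre
    print(n, abs(approx - exact))                 # error collapses only once n is large enough
```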
APA, Harvard, Vancouver, ISO, and other styles
38

Lo, Chiang-Wei, and 羅蔣偉. "Statistical Analysis for Multivariate Current Status Data with Gauss-quadrature Method." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/90665935086248986836.

Full text
Abstract:
Master's thesis
Providence University
Department of Finance and Computational Mathematics
101 (ROC academic year 2012-13)
Current status data arise when each subject is examined at a single examination time, so the researchers only know whether the event of interest has occurred before this time point or not. If the event has occurred before the examination time, the event time of interest is left-censored; otherwise it is right-censored. Multivariate current status data arise if the researchers consider multiple events simultaneously. There is a large literature on the statistical analysis of multivariate current status data, but the proposed approaches are usually computationally demanding. In this thesis, we propose the Gauss-quadrature method to approximate the likelihood function and to obtain the approximate maximum likelihood estimates. We investigate the performance of this method through simulation and illustrate the approach with real data.
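A hedged sketch of the Gauss-quadrature idea described above: a subject's likelihood, integrating the events' censoring indicators over a shared normal frailty, is replaced by a Gauss-Hermite sum. The proportional-hazards form F(c | b) = 1 - exp(-exp(b) * Lambda(c)) is our illustrative assumption, not the thesis's model:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def subject_likelihood(deltas, Lambdas, sigma, n_nodes=30):
    """deltas[j] = 1 if event j occurred before its exam time; Lambdas[j] = Lambda_j(c_j)."""
    t, w = hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * t                  # shared normal frailty nodes
    lik = np.ones_like(b)
    for d, Lam in zip(deltas, Lambdas):
        F = 1.0 - np.exp(-np.exp(b) * Lam)        # P(event before exam time | b), assumed form
        lik *= F if d == 1 else 1.0 - F
    return np.sum(w * lik) / np.sqrt(np.pi)

print(subject_likelihood(deltas=[1, 0], Lambdas=[0.8, 0.3], sigma=1.0))
```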
APA, Harvard, Vancouver, ISO, and other styles
39

Κωστόπουλος, Δημήτριος. "Κανόνας ολοκλήρωσης του Gauss και ορθογώνια πολυώνυμα." Thesis, 2007. http://nemertes.lis.upatras.gr/jspui/handle/10889/573.

Full text
Abstract:
Ανασκόπηση του κανόνα ολοκλήρωσης του Gauss. Αναπαραστάσεις και εκτιμήσεις του υπολοίπου του. Τέλος περί της σύγκλισης του κανόνα ολοκλήρωσης.
A survey of Gaussian quadrature rules: representations and estimates of the remainder, and a discussion of the rule's convergence.
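A natural companion to this survey is the Golub-Welsch construction, which obtains the n-point Gauss-Legendre rule from the eigendecomposition of the Jacobi matrix (a classical result, sketched here for illustration):

```python
import numpy as np

def gauss_legendre(n):
    """Nodes and weights of the n-point Gauss-Legendre rule via Golub-Welsch."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)          # Legendre recurrence coefficients
    J = np.diag(beta, -1) + np.diag(beta, 1)      # symmetric tridiagonal Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)               # eigenvalues are the nodes
    weights = 2.0 * vecs[0, :] ** 2               # mu_0 = length of [-1, 1] = 2
    return nodes, weights

x, w = gauss_legendre(5)
print(np.sum(w * x**8), 2.0 / 9.0)                # exact for all degrees <= 2n - 1
```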
APA, Harvard, Vancouver, ISO, and other styles
40

Gagnon, Jacob A. "A hierarchical spherical radial quadrature algorithm for multilevel GLMMs, GSMMs, and gene pathway analysis." 2010. https://scholarworks.umass.edu/dissertations/AAI3427529.

Full text
Abstract:
The first part of my thesis is concerned with estimation for longitudinal data using generalized semi-parametric mixed models and multilevel generalized linear mixed models for a binary response. Likelihood-based inference is hindered by the lack of a closed-form representation; consequently, various integration approaches have been proposed. We propose a spherical radial integration based approach that takes advantage of the hierarchical structure of the data, which we call the 2 SR method. Compared to Pinheiro and Chao’s multilevel Adaptive Gaussian quadrature [37], our proposed method has an improved time complexity, with the number of function evaluations scaling linearly in the number of subjects and in the dimension of random effects per level. Simulation studies show that our approach has similar or better accuracy compared to Gauss-Hermite Quadrature (GHQ) and better accuracy compared to PQL, especially in the variance components. The second part of my thesis is concerned with identifying differentially expressed gene pathways/gene sets. We propose a logistic kernel machine to model the gene pathway effect with a binary response. Kernel machines were chosen since they account for gene interactions and clinical covariates. Furthermore, we established a connection between our logistic kernel machine and GLMMs, allowing us to use ideas from the GLMM literature. For estimation and testing, we adopted Clarkson’s spherical radial approach [6] to perform the high-dimensional integrations. For estimation, our performance in simulation studies is comparable to or better than Bayesian approaches at a much lower computational cost. As for testing of the genetic pathway effect, our REML likelihood ratio test has increased power compared to a score test for simulated non-linear pathways. Additionally, our approach has three main advantages over previous methodologies: (1) our testing approach is self-contained rather than competitive, (2) our kernel machine approach can model complex pathway effects and gene-gene interactions, and (3) we test for the pathway effect adjusting for clinical covariates. Motivation for our work is the analysis of an Acute Lymphocytic Leukemia data set where we test for the genetic pathway effect and provide confidence intervals for the fixed effects.
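Below is a toy spherical-radial rule for E[f(Z)], Z ~ N(0, I_d), pairing random directions on the sphere with a deterministic generalized Gauss-Laguerre rule in the radius; this is our own minimal variant for orientation, not the dissertation's 2 SR method:

```python
import numpy as np
from scipy.special import roots_genlaguerre, gammaln

def spherical_radial(f, d, n_dirs=200, n_radial=10, seed=0):
    """Monte Carlo over directions, exact Gauss-Laguerre in the radius."""
    rng = np.random.default_rng(seed)
    u, w = roots_genlaguerre(n_radial, d / 2.0 - 1.0)   # radial rule, alpha = d/2 - 1
    r = np.sqrt(2.0 * u)                                # radial abscissae
    theta = rng.standard_normal((n_dirs, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform on the sphere
    mean_over_dirs = np.mean(
        [np.sum(w * np.array([f(ri * th) for ri in r])) for th in theta])
    return mean_over_dirs * np.exp(-gammaln(d / 2.0))   # normalize by 1/Gamma(d/2)

print(spherical_radial(lambda z: z @ z, d=3))           # E|Z|^2 = d = 3
```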
APA, Harvard, Vancouver, ISO, and other styles
41

Yue, Tianyao. "Spectral Element Method for Pricing European Options and Their Greeks." Diss., 2012. http://hdl.handle.net/10161/6156.

Full text
Abstract:

Numerical methods such as the Monte Carlo method (MCM), the finite difference method (FDM) and the finite element method (FEM) have been successfully implemented to solve financial partial differential equations (PDEs). Sophisticated computational algorithms are strongly desired to further improve accuracy and efficiency.

The relatively new spectral element method (SEM) combines the exponential convergence of spectral methods and the geometric flexibility of the FEM. This dissertation carefully investigates the SEM for the pricing of European options and their Greeks (Delta, Gamma and Theta). The essential techniques, Gauss quadrature rules, are thoroughly discussed and developed. The spectral element method and its error analysis are briefly introduced first and expanded in detail afterwards.

A multi-element spectral element method (ME-SEM) for the Black-Scholes PDE is derived on European put options with and without dividend and on a condor option with a more complicated payoff. Under the same Crank-Nicolson approach for the time integration, the SEM shows a significant accuracy increase and time-cost reduction over the FDM. A novel discontinuous payoff spectral element method (DP-SEM) is invented and numerically validated on a European binary put option. The SEM is also applied to the constant elasticity of variance (CEV) model and verified with the MCM and the valuation formula. The Stochastic Alpha Beta Rho (SABR) model is solved with a multi-dimensional spectral element method (MD-SEM) on a European put option. Error convergence for option prices and Greeks with respect to the number of grid points and the time step is analyzed and illustrated.
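Among the Gauss quadrature rules developed for such methods, the Gauss-Lobatto-Legendre (GLL) points are commonly the workhorse of spectral element discretizations; a compact construction (our sketch, for illustration) is:

```python
import numpy as np
from numpy.polynomial import legendre as L

def gauss_lobatto_legendre(n):
    """n GLL nodes/weights on [-1, 1], n >= 2; exact for degree <= 2n - 3."""
    N = n - 1
    cP = np.zeros(N + 1)
    cP[N] = 1.0                                   # Legendre-series coefficients of P_N
    interior = L.legroots(L.legder(cP))           # interior nodes: roots of P_N'
    nodes = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    weights = 2.0 / (N * (N + 1) * L.legval(nodes, cP) ** 2)
    return nodes, weights

x, w = gauss_lobatto_legendre(6)
print(np.sum(w * x**4), 2.0 / 5.0)                # both ~ 0.4
```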


APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Yilei Abbie. "Exploring the Importance of Accounting for Nonlinearity in Correlated Count Regression Systems from the Perspective of Causal Estimation and Inference." Diss., 2007. http://hdl.handle.net/1805/26379.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
The main motivation for nearly all empirical economic research is to provide scientific evidence that can be used to assess causal relationships of interest. Essential to such assessments is the rigorous specification and accurate estimation of parameters that characterize the causal relationship between a presumed causal variable of interest, whose value is to be set and altered in the context of a relevant counterfactual, and a designated outcome of interest. Relationships of this type are typically characterized by an effect parameter (EP), and estimation of the EP is the objective of the empirical analysis. The present research focuses on cases in which the regression outcome of interest is a vector that has count-valued elements (i.e., the model under consideration comprises a multi-equation system of equations). This research examines the importance of accounting for nonlinearity and cross-equation correlations in correlated count regression systems from the perspective of causal estimation and inference. We evaluate the efficiency and accuracy gains of estimating bivariate count-valued systems-of-equations models by comparing three pairs of models: (1) Zellner’s Seemingly Unrelated Regression (SUR) versus Count-Outcome SUR - Conway Maxwell Poisson (CMP); (2) CMP SUR versus Single-Equation CMP Approach; (3) CMP SUR versus Poisson SUR. We show via simulation studies that it is more efficient to estimate jointly than equation-by-equation, and that it is more efficient to account for nonlinearity. We also apply our model and estimation method to real-world health care utilization data, where the dependent variables are correlated counts: the count of physician office visits and the count of non-physician health professional office visits. The presumed causal variable is private health insurance status. Our model results in a reduction of at least 30% in standard errors for key policy EPs (e.g., the Average Incremental Effect). Our results are enabled by our development of a Stata program for approximating two-dimensional integrals via Gauss-Legendre quadrature.
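The dissertation's two-dimensional integral approximation is implemented as a Stata program; a Python analogue of tensor-product Gauss-Legendre quadrature over a rectangle (our sketch, not that program) looks like this:

```python
import numpy as np

def gauss_legendre_2d(f, ax, bx, ay, by, n=20):
    """Approximate the double integral of f over [ax, bx] x [ay, by]."""
    x, wx = np.polynomial.legendre.leggauss(n)
    u = 0.5 * (bx - ax) * x + 0.5 * (bx + ax)     # map nodes to [ax, bx]
    v = 0.5 * (by - ay) * x + 0.5 * (by + ay)     # map nodes to [ay, by]
    W = np.outer(wx, wx) * 0.25 * (bx - ax) * (by - ay)  # tensor-product weights
    return np.sum(W * f(u[:, None], v[None, :]))

# Truncated Gaussian integral over the plane; the exact value is pi:
print(gauss_legendre_2d(lambda x, y: np.exp(-x * x - y * y), -4, 4, -4, 4))
```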
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Yilei. "Exploring the Importance of Accounting for Nonlinearity in Correlated Count Regression Systems from the Perspective of Causal Estimation and Inference." Diss., 2021. http://hdl.handle.net/1805/26379.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
The main motivation for nearly all empirical economic research is to provide scientific evidence that can be used to assess causal relationships of interest. Essential to such assessments is the rigorous specification and accurate estimation of parameters that characterize the causal relationship between a presumed causal variable of interest, whose value is to be set and altered in the context of a relevant counterfactual, and a designated outcome of interest. Relationships of this type are typically characterized by an effect parameter (EP), and estimation of the EP is the objective of the empirical analysis. The present research focuses on cases in which the regression outcome of interest is a vector that has count-valued elements (i.e., the model under consideration comprises a multi-equation system of equations). This research examines the importance of accounting for nonlinearity and cross-equation correlations in correlated count regression systems from the perspective of causal estimation and inference. We evaluate the efficiency and accuracy gains of estimating bivariate count-valued systems-of-equations models by comparing three pairs of models: (1) Zellner’s Seemingly Unrelated Regression (SUR) versus Count-Outcome SUR - Conway Maxwell Poisson (CMP); (2) CMP SUR versus Single-Equation CMP Approach; (3) CMP SUR versus Poisson SUR. We show via simulation studies that it is more efficient to estimate jointly than equation-by-equation, and that it is more efficient to account for nonlinearity. We also apply our model and estimation method to real-world health care utilization data, where the dependent variables are correlated counts: the count of physician office visits and the count of non-physician health professional office visits. The presumed causal variable is private health insurance status. Our model results in a reduction of at least 30% in standard errors for key policy EPs (e.g., the Average Incremental Effect). Our results are enabled by our development of a Stata program for approximating two-dimensional integrals via Gauss-Legendre quadrature.
APA, Harvard, Vancouver, ISO, and other styles
44

Otava, Martin. "Metody výpočtu maximálně věrohodných odhadů v zobecněném lineárním smíšeném modelu." Master's thesis, 2011. http://www.nusl.cz/ntk/nusl-300455.

Full text
Abstract:
of the diploma thesis Title: Computational Methods for Maximum Likelihood Estimation in Generalized Linear Mixed Models Author: Bc. Martin Otava Department: Department of Probability and Mathematical Statistics Supervisor: RNDr. Arnošt Komárek, Ph.D., Department of Probability and Mathematical Statistics Abstract: When using the maximum likelihood method for generalized linear mixed models, an analytically intractable maximization problem can occur. As a solution, iterative and approximate methods are used; the latter are the core of the thesis. A detailed and general introduction to the widely used methods is given, with emphasis on algorithms useful in practical cases. The case of non-Gaussian random effects is also discussed. The approximate methods are demonstrated using real data sets, and conclusions about bias and consistency are supported by a simulation study. Keywords: generalized linear mixed model, penalized quasi-likelihood, adaptive Gauss-Hermite quadrature
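The adaptive Gauss-Hermite idea named in the keywords can be sketched in a few lines: recentre and rescale the Hermite nodes at the mode of the integrand before summing. The integrand in the check and the finite-difference curvature estimate are our illustrative choices:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize_scalar

def adaptive_gh(g, n_nodes=10):
    """Approximate the integral of a unimodal positive integrand g over the real line."""
    mu = minimize_scalar(lambda b: -np.log(g(b))).x        # mode of the integrand
    eps = 1e-5                                             # curvature of -log g at the mode
    h = (-np.log(g(mu - eps)) + 2 * np.log(g(mu)) - np.log(g(mu + eps))) / eps**2
    sigma = 1.0 / np.sqrt(h)
    t, w = hermgauss(n_nodes)
    b = mu + np.sqrt(2.0) * sigma * t                      # recentred, rescaled nodes
    return np.sqrt(2.0) * sigma * np.sum(w * np.exp(t**2) * g(b))

# Sanity check on a shifted Gaussian: integral of exp(-(b-3)^2/2) is sqrt(2*pi)
print(adaptive_gh(lambda b: np.exp(-0.5 * (b - 3.0) ** 2)), np.sqrt(2 * np.pi))
```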
APA, Harvard, Vancouver, ISO, and other styles