
Dissertations / Theses on the topic 'Matriz de erros'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Matriz de erros.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Aragão, Canuto Ruan Santos. "Códigos cíclicos : uma introdução aos códigos corretores de erros." Universidade Federal de Sergipe, 2017. https://ri.ufs.br/handle/riufs/6495.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
A cyclic code is a specific type of linear code. Its relevance lies in the fact that all of its main information is intrinsic to the structure of the ideals in the quotient ring K[x]/(x^n - 1), via an isomorphism. In this work, we characterize the cyclic codes in one-to-one correspondence with the ideals of this quotient ring. We also present the generator matrix and the parity-check matrix, and discuss encoding and decoding.
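As context for this abstract: the defining property of a cyclic code is that it is an ideal of K[x]/(x^n - 1), so every cyclic shift of a codeword is again a codeword. A minimal sketch below illustrates this with the classical binary (7, 4) cyclic code generated by g(x) = 1 + x + x^3; this standard textbook example is ours, not taken from the thesis.

```python
import numpy as np

def polymul_mod2(a, b, n):
    """Multiply two GF(2) polynomials (coefficient lists, lowest degree
    first) and reduce modulo x^n - 1, i.e. wrap exponents around n."""
    out = np.zeros(n, dtype=int)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj:
                out[(i + j) % n] ^= 1
    return out

n = 7
g = [1, 1, 0, 1]          # g(x) = 1 + x + x^3 divides x^7 - 1 over GF(2)

# Encode every 4-bit message m(x) as c(x) = m(x) g(x) mod (x^7 - 1).
code = {tuple(polymul_mod2([(m >> k) & 1 for k in range(4)], g, n))
        for m in range(16)}

# The defining property of a cyclic code: every cyclic shift of a
# codeword is again a codeword (the ideal is invariant under
# multiplication by x).
assert all(tuple(np.roll(c, 1)) in code for c in code)
print(f"{len(code)} codewords, closed under cyclic shifts")
```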
2

Kestring, Franciele Buss Frescki. "Análise geoestatística de mapas temáticos da produtividade da soja com diferentes grades amostrais." Universidade Estadual do Oeste do Parana, 2011. http://tede.unioeste.br:8080/tede/handle/tede/359.

Abstract:
Studies on the spatial variability of soybean yield are of great importance for the development of new technologies that improve world agricultural production. One of the methods that makes such studies possible is geostatistics. Geostatistical analysis makes it possible to predict results, and one of its products is the thematic map. This trial therefore describes some techniques for drawing and comparing thematic maps using kriging. The analysis was based on soybean yield data, in t ha^-1, from the 2004/2005 harvest year, collected in an experimental area with sampling grids whose spacings were 25x25 m, 50x50 m, 75x75 m and 100x100 m, plus a harvest monitor. The maps were compared using the error matrix and the confusion matrix. Besides improving the accuracy of the spatial variability maps that were drawn, the analysis of the accuracy coefficients allows better planning of the sampling grids for future studies. The accuracy measures obtained from the error matrix are significant options for comparing thematic maps, since they provide both global and per-class indices.
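The error (confusion) matrix mentioned here is simple to compute once two thematic maps share a common grid. The sketch below, on synthetic data rather than the thesis's soybean maps, derives the global accuracy, the kappa index and the per-class producer's/user's accuracies from such a matrix.

```python
import numpy as np

def error_matrix(reference, comparison, n_classes):
    """Cross-tabulate two classified maps: entry (i, j) counts pixels
    of reference class i assigned to class j in the compared map."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, c in zip(reference.ravel(), comparison.ravel()):
        m[r, c] += 1
    return m

def accuracy_indices(m):
    total = m.sum()
    po = np.trace(m) / total                      # overall (global) accuracy
    pe = (m.sum(0) * m.sum(1)).sum() / total**2   # chance agreement
    kappa = (po - pe) / (1 - pe)                  # global kappa index
    producers = np.diag(m) / m.sum(axis=1)        # per-class producer's accuracy
    users = np.diag(m) / m.sum(axis=0)            # per-class user's accuracy
    return po, kappa, producers, users

rng = np.random.default_rng(0)
ref = rng.integers(0, 3, size=(50, 50))           # e.g. 3 yield classes
comp = np.where(rng.random((50, 50)) < 0.85, ref, # ~15% disagreement
                rng.integers(0, 3, size=(50, 50)))
po, kappa, prod, user = accuracy_indices(error_matrix(ref, comp, 3))
print(f"overall accuracy = {po:.3f}, kappa = {kappa:.3f}")
```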
3

Rocha, André Muniz Marinho da. "Impacto do ajuste da matriz de covariância dos erros do background na assimilação de dados de radar." Instituto Nacional de Pesquisas Espaciais (INPE), 2016. http://urlib.net/sid.inpe.br/mtc-m21b/2016/11.30.10.55.

Abstract:
Data assimilation combines information from numerical models and meteorological observations through a physical-statistical process, generating the best possible representation of the atmospheric state at a given moment in time. The goal of this work is to tune the background error covariance matrix while assimilating Doppler radar data, in order to improve the analysis and hence the short-term precipitation forecast. The atmospheric model and the assimilation system used are the Weather Research and Forecasting (WRF) model and the WRF Data Assimilation (WRFDA) 3D-Var system. The domain covers the west of southern Brazil, including the states of Paraná, Santa Catarina and Rio Grande do Sul and part of Paraguay, with a horizontal resolution of 2 km and 45 levels. The period of study is from October 15 to November 15, 2014, and the precipitation was evaluated by comparing the modeling results against the Tropical Rainfall Measuring Mission (TRMM) 3B42 data, using the Root Mean Square Error (RMSE) as the statistical index. The other meteorological fields were also evaluated with the same index by comparing them to surface observations. Observations from surface weather stations were used for comparison with the model results with and without radar data assimilation; the selected stations were Curitiba, Bacacheri, Londrina and Foz do Iguaçu. During the assimilation process, conventional data from the Global Telecommunication System were also assimilated. The background error covariance matrix was generated using a WRFDA utility applying the NMC method, with three months of 24-h simulations starting at 00 UTC and 12 UTC. The process of generating the B matrix spreads the information from a given observation horizontally using a recursive filter. The background error covariance matrix was then tuned by adjusting two parameters: the variance scaling, related to the intensity with which each observation influences the state variables at the model grid points, and the length scaling, related to the distance over which an observation error influences the grid-point values of the model state variables, so as to adapt them to the study region, the assimilated data and the weather system studied. Several values of the two parameters were tested, and the results, based on the statistical index, showed improvements in the prediction of precipitation location and intensity when the adjustments to the background error covariance matrix were applied.
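For readers unfamiliar with the length-scaling idea described above, the toy sketch below shows how a first-order recursive filter spreads a single-observation increment over a distance controlled by its smoothing parameter. It is purely schematic and is not the WRFDA recursive filter; the function and parameter names are ours.

```python
import numpy as np

def recursive_filter(field, alpha, passes=2):
    """Simple 1-D first-order recursive filter: each left-to-right /
    right-to-left sweep spreads information over a length scale that
    grows with alpha (0 < alpha < 1)."""
    f = field.copy()
    for _ in range(passes):
        for i in range(1, f.size):           # forward sweep
            f[i] = alpha * f[i - 1] + (1 - alpha) * f[i]
        for i in range(f.size - 2, -1, -1):  # backward sweep
            f[i] = alpha * f[i + 1] + (1 - alpha) * f[i]
    return f

increment = np.zeros(101)
increment[50] = 1.0                          # a single-observation innovation
for alpha in (0.3, 0.7):                     # larger alpha ~ larger length scale
    spread = recursive_filter(increment, alpha)
    print(f"alpha={alpha}: half-width ~ {np.sum(spread > 0.1 * spread.max())} points")
```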
4

Tomaya, Lorena Yanet Cáceres. "Inferência em modelos de regressão com erros de medição sob enfoque estrutural para observações replicadas." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/4584.

Abstract:
Financiadora de Estudos e Projetos
The usual regression model fits data under the assumption that the explanatory variable is measured without error. However, in many situations the explanatory variable is observed with measurement errors. In these cases, measurement error models are recommended. We study a structural measurement error model for replicated observations. Estimation of parameters of the proposed models was obtained by the maximum likelihood and maximum pseudolikelihood methods. The behavior of the estimators was assessed in a simulation study with different numbers of replicates. Moreover, we proposed the likelihood ratio test, Wald test, score test, gradient test, Neyman's C test and pseudolikelihood ratio test in order to test hypotheses of interest related to the parameters. The proposed test statistics are assessed through a simulation study. Finally, the model was fitted to a real data set comprising measurements of concentrations of chemical elements in samples of Egyptian pottery. The computational implementation was developed in R language.
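As background for this abstract: with replicated measurements of the covariate, the measurement error variance becomes estimable, so the attenuation of the naive regression slope can be corrected. A minimal method-of-moments sketch, with arbitrary parameter values and not the estimators of the thesis, is given below.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 3                     # subjects, replicates per subject
alpha, beta = 1.0, 2.0
xi = rng.normal(0, 1, n)          # true (latent) covariate, structural model
x = xi[:, None] + rng.normal(0, 0.8, (n, m))   # replicated noisy measurements
y = alpha + beta * xi + rng.normal(0, 0.5, n)

xbar = x.mean(axis=1)
b_naive = np.cov(xbar, y)[0, 1] / np.var(xbar, ddof=1)

# With replicates, the measurement error variance is estimable from the
# within-subject spread, so the attenuation can be corrected:
s2_delta = x.var(axis=1, ddof=1).mean()            # error variance estimate
s2_xbar = np.var(xbar, ddof=1)
reliability = (s2_xbar - s2_delta / m) / s2_xbar   # attenuation factor
print(f"naive slope {b_naive:.3f}, corrected {b_naive / reliability:.3f}")
```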
5

Rolim, Jacqueline Gisele. "Estimação de estados em sistemas de potência pelo método da matriz aumentada : aspectos numéricos e processamento de erros grosseiros." Repositório Institucional da UFSC, 1988. https://repositorio.ufsc.br/handle/123456789/111680.

6

Fuselier, Edward J. Jr. "Refined error estimates for matrix-valued radial basis functions." Texas A&M University, 2003. http://hdl.handle.net/1969.1/5788.

Abstract:
Radial basis functions (RBFs) are probably best known for their applications to scattered data problems. Until the 1990s, RBF theory only involved functions that were scalar-valued. Matrix-valued RBFs were subsequently introduced by Narcowich and Ward in 1994, when they constructed divergence-free vector-valued functions that interpolate data at scattered points. In 2002, Lowitzsch gave the first error estimates for divergence-free interpolants. However, these estimates are only valid when the target function resides in the native space of the RBF. In this paper we develop Sobolev-type error estimates for cases where the target function is less smooth than functions in the native space. In the process of doing this, we give an alternate characterization of the native space, derive improved stability estimates for the interpolation matrix, and give divergence-free interpolation and approximation results for band-limited functions. Furthermore, we introduce a new class of matrix-valued RBFs that can be used to produce curl-free interpolants.
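For context, the divergence-free matrix-valued kernels referred to here are usually built from a scalar RBF by differential operators; the standard construction from the matrix-valued RBF literature (stated here for orientation, not quoted from the thesis) is:

```latex
% Divergence-free and curl-free matrix-valued kernels built from a
% scalar RBF $\phi$:
\Phi_{\mathrm{div}}(x)  = \left(-\Delta I + \nabla\nabla^{\mathsf{T}}\right)\phi(x),
\qquad
\Phi_{\mathrm{curl}}(x) = -\nabla\nabla^{\mathsf{T}}\phi(x).
% Columns of $\Phi_{\mathrm{div}}$ are divergence-free vector fields, so
% interpolants $s(x)=\sum_j \Phi_{\mathrm{div}}(x-x_j)\,c_j$ inherit
% $\nabla\cdot s = 0$; $\Phi_{\mathrm{curl}}$ yields curl-free interpolants.
```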
7

Danell, Håkansson August, and Mirja Johnsson. "An error assessment of matrix multiplications on posit matrices." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280137.

Abstract:
The posit is a floating-point number format proposed by John Gustafson to improve upon the accuracy of the IEEE-754 format, which is the standard today. The goal of this paper was to look specifically at matrix multiplication and examine how the posit format compares to IEEE-754. The quire, which is part of the posit standard, was not included in this paper due to limitations. We used the softPosit library in Python to construct arrays of matrices referred to as matrix chains. These matrices were filled with numbers in one format and bit size at a time. The chains were then multiplied together with normal matrix multiplication, and we compared the error of the two formats. An IEEE-754 matrix chain with more bits than the ones being compared was used as the reference for comparing the accuracy of IEEE-754 and posit arithmetic in matrix multiplication. The result was that the posit format could yield more accurate matrix multiplications, especially for products of few matrices of low dimension. When the dimensions and the number of matrices increased, however, the posit matrix produced an error greater than that of the IEEE-754 matrix. The conclusion was that posits, if used sensibly, can be a more accurate format for matrix multiplication, but it is important to consider the properties inherent to the posit when multiplying matrices of posits.
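The evaluation methodology described above is easy to reproduce in outline: multiply a chain of matrices in a low-precision format and measure the error against a higher-precision reference product. The sketch below substitutes numpy's float32/float64 for the posit/IEEE-754 comparison (it does not assume the softPosit API), so it illustrates the protocol only.

```python
import numpy as np

rng = np.random.default_rng(2)

def chain_error(dim, length, dtype):
    """Multiply a chain of random matrices in the given precision and
    measure the relative error against a float64 reference product."""
    chain = [rng.uniform(0, 1, (dim, dim)) for _ in range(length)]
    ref = np.linalg.multi_dot(chain)                 # float64 reference
    low = chain[0].astype(dtype)
    for m in chain[1:]:
        low = low @ m.astype(dtype)                  # accumulate in low precision
    return np.abs(low.astype(np.float64) - ref).max() / np.abs(ref).max()

# Error grows with both matrix dimension and chain length, as observed
# in the thesis's experiments.
for dim, length in [(4, 2), (16, 8), (64, 16)]:
    print(dim, length, f"{chain_error(dim, length, np.float32):.2e}")
```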
8

Pazman, Andrej, and Werner Müller. "Optimal Design of Experiments Subject to Correlated Errors." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/64/1/document.pdf.

Abstract:
In this paper we consider optimal design of experiments in the case of correlated observations, when no replications are possible. This situation is typical when observing a random process or random field with known covariance structure. We present a theorem which demonstrates that the computation of optimum exact designs corresponds to solving minimization problems in terms of design measures. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
9

Zivcovich, Franco. "Backward error accurate methods for computing the matrix exponential and its action." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/250078.

Abstract:
The theory of partial differential equations constitutes today one of the most important topics of scientific understanding. A standard approach for solving a time-dependent partial differential equation consists in discretizing the spatial variables by finite differences or finite elements. This results in a huge system of (stiff) ordinary differential equations that has to be integrated in time. Exponential integrators constitute an interesting class of numerical methods for the time integration of stiff systems of differential equations. Their efficient implementation heavily relies on the fast computation of the action of certain matrix functions; among those, the matrix exponential is the most prominent one. In this manuscript, we go through the steps that led to the development of backward error accurate routines for computing the action of the matrix exponential.
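As a concrete instance of computing the action of the matrix exponential without forming it, the sketch below uses scipy.sparse.linalg.expm_multiply (SciPy's implementation of the Al-Mohy-Higham algorithm, per its documentation) and compares it against the dense route.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(3)
n = 500
A = sprandom(n, n, density=1e-3, random_state=42, format="csr")
b = rng.normal(size=n)

# Dense route: form exp(A) explicitly, then multiply -- O(n^3) and dense.
x_dense = expm(A.toarray()) @ b

# Action route: never form exp(A); the cost is dominated by sparse
# matrix-vector products, as in the methods the thesis studies.
x_action = expm_multiply(A, b)

print(np.linalg.norm(x_dense - x_action) / np.linalg.norm(x_dense))
```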
10

Wennbom, Marika. "Impact of error : Implementation and evaluation of a spatial model for analysing landscape configuration." Thesis, Stockholms universitet, Institutionen för naturgeografi och kvartärgeologi (INK), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-79214.

Abstract:
Quality and error assessment is an essential part of spatial analysis, which, with the increasing amount of applications resulting from today's extensive access to spatial data (such as satellite imagery) and computer power, is extra important to address. This study evaluates the impact of input errors associated with satellite sensor noise for a spatial method aimed at characterising aspects of landscapes associated with the historical village structure, called the Hybrid Characterisation Model (HCM), which was developed as a tool to monitor sub-goals of the Swedish environmental goal "A varied agricultural landscape". The method, and the error simulation method employed for generating random errors in the input data, are implemented and automated as a Python script, enabling easy iteration of the procedure. The HCM is evaluated qualitatively (by visual analysis) and quantitatively, comparing kappa index values between the outputs affected by error. Comparing the results of the qualitative and quantitative evaluations shows that the kappa index is an applicable measurement of quality for the HCM. The qualitative analysis compares the impact of error on two different scales, the village scale and the landscape scale, and shows that the HCM performs well on the landscape scale for up to 30% error and on the village scale for up to 10% error, and that the impact of error differs depending on the shape of the analysed feature. The Python script produced in this study could be further developed and modified to evaluate the HCM for other aspects of input error, such as classification errors, although for such studies to be motivated the potential errors associated with the model and its parameters must first be further evaluated.
11

Thola, Forest D. "Minimizing Recommended Error Costs Under Noisy Inputs in Rule-Based Expert Systems." NSUWorks, 2012. http://nsuworks.nova.edu/gscis_etd/323.

Abstract:
This dissertation develops methods to minimize recommendation error costs when inputs to a rule-based expert system are prone to errors. The problem often arises in web-based applications where data are inherently noisy or provided by users who perceive some benefit from falsifying inputs. Prior studies proposed methods that attempted to minimize the probability of recommendation error, but did not take into account the relative costs of different types of errors. In situations where these differences are significant, an approach that minimizes the expected misclassification error costs has advantages over extant methods that ignore these costs. Building on the existing literature, two new techniques, Cost-Based Input Modification (CBIM) and Cost-Based Knowledge-Base Modification (CBKM), were developed and evaluated. Each method takes as inputs (1) the joint probability distribution of a set of rules, (2) the distortion matrix for input noise, as characterized by the probability distribution of the observed input vectors conditioned on their true values, and (3) the misclassification cost for each type of recommendation error. Under CBIM, for any observed input vector v, the recommendation is based on a modified input vector v' such that the expected error costs are minimized. Under CBKM the rule base itself is modified to minimize the expected cost of error. The proposed methods were investigated as follows: as a control, in the special case where the costs associated with different types of errors are identical, the recommendations under these methods were compared for consistency with those obtained under extant methods. Next, the relative advantages of CBIM and CBKM were compared as (1) the noise level changed, and (2) the structure of the cost matrix varied. As expected, CBKM and CBIM outperformed the extant Knowledge Base Modification (KM) and Input Modification (IM) methods over a wide range of input distortion and cost matrices, with some restrictions. Under the control, with constant misclassification costs, the new methods performed equally with the extant methods. As misclassification costs increased, CBKM outperformed KM and CBIM outperformed IM. Using different cost matrices to increase the asymmetry and order of the misclassification costs, the performance of CBKM and CBIM increased. At very low distortion levels, CBKM and CBIM underperformed as error probability became more significant in each method's estimation. Additionally, CBKM outperformed CBIM over a wide range of input distortion, as its technique of modifying an original knowledge base outperformed the technique of modifying inputs to an unmodified decision tree.
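The cost-sensitive idea at the heart of CBIM/CBKM can be shown in a few lines: given a posterior over the true classes and a misclassification cost matrix, the recommendation minimizing expected cost can differ from the most probable class. The numbers below are illustrative only, not taken from the dissertation.

```python
import numpy as np

# cost[i, j] = cost of recommending j when the true class is i
cost = np.array([[0.0, 1.0, 10.0],
                 [5.0, 0.0, 1.0],
                 [1.0, 2.0, 0.0]])

# Posterior over true classes given the (noisy) observed inputs,
# e.g. derived from the rule base and the input distortion matrix.
posterior = np.array([0.5, 0.3, 0.2])

expected_cost = posterior @ cost   # expected cost of each recommendation
print("most probable class:", posterior.argmax())                  # 0
print("minimum expected cost:", expected_cost.argmin(), expected_cost)  # 1
```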
12

Nicoletti, Everton Rodrigo. "Aplicações de álgebra linear aos códigos corretores de erros e ao ensino médio." Rio Claro, 2015. http://hdl.handle.net/11449/131882.

Abstract:
Advisor: Carina Alves
Committee: Marta Cilene Gadotti
Committee: Cintya Wink de Oliveira Benedito
Abstract: The present work addresses basic concepts of Linear Algebra and its applications in the development of the Theory of Error Correcting Codes. The use of this mathematical tool simplifies the generation and decoding of linear codes. This dissertation also highlights the importance of working with this subject in high school.
Master
13

Nicoletti, Everton Rodrigo [UNESP]. "Aplicações de álgebra linear aos códigos corretores de erros e ao ensino médio." Universidade Estadual Paulista (UNESP), 2015. http://hdl.handle.net/11449/131882.

Abstract:
The present work addresses basic concepts of Linear Algebra and its applications in the development of the Theory of Error Correcting Codes. The use of this mathematical tool simplifies the generation and decoding of linear codes. This dissertation also highlights the importance of working with this subject in high school.
14

Wojtas, David Heinrich. "Suppressing Discretization Error in Langevin Simulations of (2+1)-dimensional Field Theories." Thesis, University of Canterbury. Physics and Astronomy, 2006. http://hdl.handle.net/10092/1294.

Abstract:
Lattice simulations are a popular tool for studying the non-perturbative physics of nonlinear field theories. To perform accurate lattice simulations, a careful account of the discretization error is necessary. Spatial discretization error as a result of lattice spacing dependence in Langevin simulations of anisotropic (2 + 1)-dimensional classical scalar field theories is studied. A transfer integral operator (TIO) method and a one-loop renormalization (1LR) procedure are used to formulate effective potentials. The effective potentials contain counterterms which are intended to suppress the lattice spacing dependence. The two effective potentials were tested numerically in the case of a phi-4 model. A high accuracy modified Euler method was used to evolve a phenomenological Langevin equation. Large scale Langevin simulations were performed in parameter ranges determined to be appropriate. Attempts at extracting correlation lengths as a means of determining effectiveness of each method were not successful. Lattice sizes used in this study were not of a sufficient size to obtain an accurate representation of thermal equilibrium. As an alternative, the initial behaviour of the ensemble field average was observed. Results for the TIO method showed that it was successful at suppressing lattice spacing dependence in a mean field limit. Results for the 1LR method showed that it performed poorly.
15

Albertson, K. V. "Pre-test estimation in a regression model with a mis-specified error covariance matrix." Thesis, University of Canterbury. Economics, 1993. http://hdl.handle.net/10092/4315.

Abstract:
This thesis considers some finite sample properties of a number of preliminary test (pre-test) estimators of the unknown parameters of a linear regression model that may have been mis-specified as a result of incorrectly assuming that the disturbance term has a scalar covariance matrix, and/or as a result of the exclusion of relevant regressors. The pre-test itself is a test for exact linear restrictions and is conducted using the usual Wald statistic, which provides a Uniformly Most Powerful Invariant test of the restrictions in a well specified model. The parameters to be estimated are the coefficient vector, the prediction vector (i.e. the expectation of the dependent variable conditional on the regressors), and the regression scale parameter. Note that while the problem of estimating the prediction vector is merely a special case of estimating the coefficient vector when the model is well specified, this is not the case when the model is mis-specified. The properties of each of these estimators in a well specified regression model have been examined in the literature, as have the effects of a number of different model mis-specifications, and we survey these results in Chapter Two. We will extend the existing literature by generalising the error covariance matrix in conjunction with allowing for possibly excluded regressors. To motivate the consideration of a nonscalar error covariance matrix in the context of a pre-test situation we briefly examine the literature on autoregressive and heteroscedastic error processes in Chapter Three. In Chapters Four, Five, Six, and Seven we derive the cumulative distribution function of the test statistic, and exact formulae for the bias and risk (under quadratic loss) of the unrestricted, restricted and pre-test estimators, in a model with a general error covariance matrix and possibly excluded relevant regressors. These formulae are data dependent and, to illustrate the results, are evaluated for a number of regression models and forms of error covariance matrix. In particular we determine the effects of autoregressive errors and heteroscedastic errors on each of the regression models under consideration. Our evaluations confirm the known result that the presence of a nonscalar error covariance matrix introduces a distortion into the pre-test power function and we show the effects of this on the pre-test estimators. In addition to this we show that one effect of the mis-specification may be that the pre-test and restricted estimators may be strictly dominated by the corresponding unrestricted estimator even if there are no relevant regressors excluded from the model. If there are relevant regressors excluded from the model it appears that the additional mis-specification of the error covariance matrix has little qualitative impact unless the coefficients on the excluded regressors are small in magnitude or the excluded regressors are not correlated with the included regressors. As one of the effects of the mis-specification is to introduce a distortion into the pre-test power function, in Chapter Eight we consider the problem of determining the optimal critical value (under the criterion of minimax regret) for the pre-test when estimating the regression coefficient vector. We show that the mis-specification of the error covariance matrix may have a substantial impact on the optimal critical value chosen for the pre-test under this criterion, although, generally, the actual size of the pre-test is relatively unaffected by increasing degrees of mis-specification. Chapter Nine concludes this thesis and provides a summary of the results obtained in the earlier chapters. In addition, we outline some possible future research topics in this general area.
16

Frederic, John. "Examination of Initialization Techniques for Nonnegative Matrix Factorization." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/math_theses/63.

Abstract:
While much research has been done regarding different Nonnegative Matrix Factorization (NMF) algorithms, less time has been spent looking at initialization techniques. In this thesis, four different initializations are considered. After a brief discussion of NMF, the four initializations are described and each one is independently examined, followed by a comparison of the techniques. Next, each initialization's performance is investigated with respect to the changes in the size of the data set. Finally, a method by which smaller data sets may be used to determine how to treat larger data sets is examined.
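Such initialization experiments are easy to reproduce with scikit-learn, whose NMF implementation offers random and NNDSVD-based initializations (API as documented by sklearn; the data here are synthetic, not the thesis's):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
X = np.abs(rng.normal(size=(100, 40)))      # nonnegative data matrix

# Compare how the choice of initialization affects the final fit.
for init in ("random", "nndsvd", "nndsvda"):
    model = NMF(n_components=5, init=init, max_iter=500, random_state=0)
    model.fit(X)
    print(f"{init:8s}  reconstruction error = {model.reconstruction_err_:.4f}")
```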
17

Feng, Dehua. "Determining Intersection Turning Movements with Detection Errors." University of Akron / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=akron1512746695445707.

18

Relton, Samuel. "Algorithms for matrix functions and their Fréchet derivatives and condition numbers." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/algorithms-for-matrix-functions-and-their-frechet-derivatives-and-condition-numbers(f20e8144-1aa0-45fb-9411-ddc0dc7c2c31).html.

19

Stehlik, Milan. "Further aspects on an example of D-optimal designs in the case of correlated errors." Institut für Statistik und Mathematik, WU Vienna University of Economics and Business, 2004. http://epub.wu.ac.at/670/1/document.pdf.

Abstract:
The aim of this paper is to discuss particular aspects of the extension of a classic example in the design of experiments under the presence of correlated errors. Such an extension allows us to study the effect of the correlation range on the design. We discuss the dependence of the information gained by the D-optimum design on the covariance bandwidth, and we also concentrate on some technical aspects that occur in such settings. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
20

Pinto, Maria Raquel Rocha. "Representações Matriciais Fraccionárias em Codificação Convolucional." Doctoral thesis, Universidade de Aveiro, 2003. http://hdl.handle.net/10773/19076.

Abstract:
Doctorate in Mathematics
The objects of study of this thesis are convolutional codes over a field, constituted by left-compact sequences. To define a convolutional code we adopt the behavioral approach to systems theory and present a new definition of a convolutional code that takes into account its structural properties. Matrix Fraction Descriptions (MFDs) are used as a tool for investigating the structure of the encoders and the syndrome formers of a convolutional code, which are, respectively, the image and the kernel representations of the code. Next, we concentrate on the study of minimal encoders and syndrome formers, and obtain a simple parametrization of their MFDs. We also show that static state feedback and precompensation allow one to obtain all minimal encoders of the code; the same is done for the minimal syndrome formers, using output injection and postcompensation. Finally, we analyse the decoupled encoders of a convolutional code, which are associated with code decomposition. We provide an algorithm to determine a maximally decoupled encoder and, consequently, the finest decomposition of the code. Restricting to minimal decoupled encoders, we first obtain a canonical decoupled encoder and parametrize, via MFDs, all minimal decoupled encoders realizing the finest decomposition of the code.
21

Duarte, Horacio Valadares. "Estimador de erro para a formulação p do metodo dos elementos finitos aplicado ao problema fluido-estrutura." [s.n.], 2003. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265155.

Abstract:
Advisor: Renato Pavanello
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Doctorate
22

Dolgun, Leman Esra. "A Decision Matrix Based Method For Determining Priorities Of Quality Improvement Projects In Manufacturing With Inspection Error And Rework." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/2/12607859/index.pdf.

Abstract:
Today's competitive environments and heightened customer expectations make it necessary to improve the quality of products and processes continuously. Therefore, quality improvement is a major concern for companies. Determining improvement priorities for not only long- but also short-term bottom-line results is a key problem in quality improvement management. In this thesis a practical decision-matrix-based method is developed for selecting quality improvement projects by considering throughput and quality loss in manufacturing environments with inspection error and rework. The performance of the proposed method under different experimental conditions is analyzed and the results are discussed.
23

Schweitzer, Marcel [Verfasser]. "Restarting and error estimation in polynomial and extended Krylov subspace methods for the approximation of matrix functions / Marcel Schweitzer." Wuppertal : Universitätsbibliothek Wuppertal, 2016. http://d-nb.info/1093601442/34.

24

Zu, Seung-Don. "The effect of irregular fiber distribution and error in assumed transverse fiber CTE on thermally induced fiber/matrix interfacial stresses." Texas A&M University, 2005. http://hdl.handle.net/1969.1/3800.

Abstract:
Thermally induced interfacial stress states between fiber and matrix at cryogenic temperature were studied using three-dimensional finite element based micromechanics. Mismatch of the coefficient of thermal expansion between fiber and matrix, and mismatch of coefficient of thermal expansion between plies with different fiber orientation were considered. In order to approximate irregular fiber distributions and to model irregular fiber arrangements, various types of unit cells, which can represent nonuniformity, were constructed and from the results the worst case of fiber distributions that can have serious stress states were suggested. Since it is difficult to measure the fiber transverse coefficient of thermal expansion at the micro scale, there is an uncertainty problem for stress analysis. In order to investigate the effect of error in assumed fiber transverse coefficient of thermal expansion on thermally induced interfacial stresses, systematic studies were carried out. In this paper, the effect of measurement errors on the local stress states will be studied. Also, in order to determine fiber transverse CTE values from lamina properties, a back calculation method is used for various composite systems.
25

Al-Mohy, Awad. "Algorithms for the matrix exponential and its Fréchet derivative." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/algorithms-for-the-matrix-exponential-and-its-frechet-derivative(4de9bdbd-6d79-4e43-814a-197668694b8e).html.

Abstract:
New algorithms for the matrix exponential and its Fréchet derivative are presented. First, we derive a new scaling and squaring algorithm (denoted $\mathrm{expm_{new}}$) for computing $e^A$, where $A$ is any square matrix, that mitigates the overscaling problem. The algorithm is built on the algorithm of Higham [SIAM J. Matrix Anal. Appl., 26(4): 1179-1193, 2005] but improves on it by two key features. The first, specific to triangular matrices, is to compute the diagonal elements in the squaring phase as exponentials instead of powering them. The second is to base the backward error analysis that underlies the algorithm on members of the sequence $\{\|A^k\|^{1/k}\}$ instead of $\|A\|$. The terms $\|A^k\|^{1/k}$ are estimated without computing powers of $A$ by using a matrix 1-norm estimator. Second, a new algorithm is developed for computing the action of the matrix exponential on a matrix, $e^{tA}B$, where $A$ is an $n \times n$ matrix and $B$ is $n \times n_0$ with $n_0 \ll n$. The algorithm works for any $A$, its computational cost is dominated by the formation of products of $A$ with $n \times n_0$ matrices, and the only input parameter is a backward error tolerance. The algorithm can return a single matrix $e^{tA}B$ or a sequence $e^{t_k A}B$ on an equally spaced grid of points $t_k$. It uses the scaling part of the scaling and squaring method together with a truncated Taylor series approximation to the exponential. It determines the amount of scaling and the Taylor degree using the strategy of $\mathrm{expm_{new}}$. Preprocessing steps are used to reduce the cost of the algorithm. An important application of the algorithm is to exponential integrators for ordinary differential equations. It is shown that the sums of the form $\sum_{k=0}^p\varphi_k(A)u_k$ that arise in exponential integrators, where the $\varphi_k$ are related to the exponential function, can be expressed in terms of a single exponential of a matrix of dimension $n+p$ built by augmenting $A$ with additional rows and columns. Third, a general framework for simultaneously computing a matrix function, $f(A)$, and its Fréchet derivative in the direction $E$, $L_f(A,E)$, is established for a wide range of matrix functions. In particular, we extend the algorithm of Higham and $\mathrm{expm_{new}}$ to two algorithms that intertwine the evaluation of both $e^A$ and $L(A,E)$ at a cost about three times that for computing $e^A$ alone. These two extended algorithms are then adapted to algorithms that simultaneously calculate $e^A$ together with an estimate of its condition number. Finally, we show that $L_f(A,E)$, where $f$ is a real-valued matrix function and $A$ and $E$ are real matrices, can be approximated by $\Im f(A+ihE)/h$ for some suitably small $h$. This approximation generalizes the complex step approximation known in the scalar case, and is proved to be of second order in $h$ for analytic functions $f$ and also for the matrix sign function. It is shown that it does not suffer the inherent cancellation that limits the accuracy of finite difference approximations in floating point arithmetic. However, cancellation does nevertheless vitiate the approximation when the underlying method for evaluating $f$ employs complex arithmetic. The complex step approximation is attractive when specialized methods for evaluating the Fréchet derivative are not available.
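The complex step approximation stated at the end of the abstract is easy to check numerically for f = exp. The sketch below compares it with a forward finite difference; note the abstract's own caveat that cancellation can return when the underlying evaluation method uses complex arithmetic, so this is an illustration, not a robust implementation.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
A = rng.normal(size=(6, 6))
E = rng.normal(size=(6, 6))
h = 1e-8   # complex step: no subtractive cancellation, so h can be tiny

# L_exp(A, E) ~ Im(expm(A + i h E)) / h  -- the approximation from the thesis
L_cs = expm(A + 1j * h * E).imag / h

# Forward finite difference for comparison (accuracy limited by cancellation)
hd = 1e-6
L_fd = (expm(A + hd * E) - expm(A)) / hd

print(np.linalg.norm(L_cs - L_fd) / np.linalg.norm(L_cs))
```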
26

Segura, ugalde Esteban. "Computation of invariant pairs and matrix solvents." Thesis, Limoges, 2015. http://www.theses.fr/2015LIMO0045/document.

Abstract:
In this thesis, we study some symbolic-numeric aspects of the invariant pair problem for matrix polynomials. Invariant pairs extend the notion of eigenvalue-eigenvector pairs, providing a counterpart of invariant subspaces for the nonlinear case. They have applications in the numerical computation of several eigenvalues of a matrix polynomial; they are also of interest in the context of differential systems. Here, a contour integral formulation is applied to compute condition numbers and backward errors for invariant pairs. We then adapt the Sakurai-Sugiura moment method to the computation of invariant pairs, including some classes of problems that have multiple eigenvalues, and we analyze the behavior of the scalar and block versions of the method in the presence of different multiplicity patterns. Results obtained via direct approaches may need to be refined numerically using an iterative method: here we study and compare two variants of Newton's method applied to the invariant pair problem. The matrix solvent problem is closely related to invariant pairs. We therefore specialize our results on invariant pairs to the case of matrix solvents, obtaining formulations for the condition number and backward errors, and a moment-based computational approach. Furthermore, we investigate the relation between the matrix solvent problem and the triangularization of matrix polynomials.
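For context, the standard definition that the abstract builds on (stated here for orientation, not quoted from the thesis):

```latex
% For a matrix polynomial $P(\lambda) = \sum_{i=0}^{d} A_i \lambda^i$,
% a pair $(X, S) \in \mathbb{C}^{n \times k} \times \mathbb{C}^{k \times k}$
% is an invariant pair if
P(X, S) := \sum_{i=0}^{d} A_i X S^{i} = 0 .
% For $k = 1$ this reduces to $P(\lambda_0)x = 0$, i.e. an eigenpair.
% A (right) matrix solvent is the special case $X = I$: $\sum_i A_i S^i = 0$.
```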
27

Teixeira, Antonio Cesar. "Reconciliação de dados de processos e detecção de erros grosseiros em sistemas com restrições não-lineares." [s.n.], 1997. http://repositorio.unicamp.br/jspui/handle/REPOSIP/266369.

Abstract:
Advisor: João Alexandre Ferreira da Rocha Pereira
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Química
Abstract: The treatment of industrial process data involves a series of measures intended to give more reliability to the directly measured values, and to those inferred indirectly, for their use in process control. Among these measures are data classification, reconciliation and rectification. This work presents a methodology for the reconciliation of industrial process data in which there are no gross errors among the values of the measured variables, whether the constraints are linear or nonlinear. The tool employed is matrix projection, which is used to simplify the mass and/or energy balance equations (constraints) of complex processes. The objective is to minimize the error, that is, the difference between the reconciled values and the true values. Building on intermediate calculations of the reconciliation procedure, a second procedure was developed for detecting gross errors among the values of the measured variables. The presence of gross errors among the measurements renders the reconciled data useless, but it provides the basis, through this second procedure, for determining the presence of the gross error. The three procedures mentioned above for the treatment of process data are described in this work, with the theoretical elements developed in detail. Two computer programs were written and are presented here: the first performs the data reconciliation and the second detects whether or not gross errors exist among the presented values.
Master's
Sistemas de Processos Químicos e Informática
Master in Chemical Engineering
28

Dalposso, Gustavo Henrique. "Estatística espacial aplicada à agricultura de precisão." Universidade Estadual do Oeste do Parana, 2010. http://tede.unioeste.br:8080/tede/handle/tede/324.

Abstract:
The methods provided by spatial statistics are of great importance for studies involving data related to agriculture, for they make it possible to know the spatial variability of the attributes under study and to identify regions with similar characteristics, which allows fully localized treatment, maximizing productivity and minimizing the impacts of excessive input application. One of the branches of spatial statistics is geostatistics, which uses a set of regionalized variables to model the structure of spatial dependence, allowing the preparation of thematic maps. Currently, geostatistical studies do not end with the preparation of maps: besides estimating the monitored attribute at non-sampled locations, it is necessary to investigate the quality of these maps, examining influential points and using measures that allow one to compare maps and estimate areas. Another form of investigation is known as spatial statistics of areas, where the objects of analysis are polygons representing plots, neighborhoods, cities, states and others. This type of analysis seeks to identify spatial autocorrelation at global and local levels, and the usual form of reporting is through thematic maps. In this work we used geostatistics to investigate wheat yield in an agricultural area of 13.7 hectares in the municipality of Salto do Lontra, PR. Out of the 50 samples collected, two were identified as influential, and thus we chose to build two thematic maps and to compare them using metrics derived from the error matrix. The results showed that the maps are different and that the removal of the influential points was essential to improve the quality of the thematic map, since the difference between the estimated yield and the actual yield was only 40 kilograms. To illustrate the resources provided by the spatial statistics of areas, we compared the NDVI and GVI vegetation indices with the soybean yield of 36 cities in western Paraná in the 2004/2005 agricultural year. The results revealed regions with similar characteristics and showed that soybeans are grown at different times in the region.
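As context for the "spatial statistics of areas" part of this abstract: the usual global measure of spatial autocorrelation over polygons is Moran's I. A minimal sketch with a toy weight matrix (not the thesis's data) is given below.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I: I = (n / sum(W)) * (z' W z) / (z' z),
    with z the deviations from the mean and W a spatial weight matrix."""
    z = values - values.mean()
    n = values.size
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Toy example: 4 areas in a row, each a neighbor of the adjacent ones.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

smooth = np.array([1.0, 2.0, 3.0, 4.0])   # similar neighbors -> I > 0
rough = np.array([1.0, 4.0, 1.0, 4.0])    # alternating values -> I < 0
print(morans_i(smooth, W), morans_i(rough, W))
```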
29

Abear, Saeed Aboglida Saeed. "Convergence Analysis of Modulus Based Methods for Linear Complementarity Problems." PhD thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=110168&source=NDLTD&language=en.

Abstract:
Linear complementarity problems (LCPs) arise from linear or quadratic programming, or from a variety of other application problems, such as boundary problems, network equilibrium problems, contact problems, market equilibrium problems, bimatrix games, etc. Recently, much work has focused on solvers for LCPs whose matrix has some special property, for example when this matrix is an H+-matrix, since this property is a sufficient condition for the existence and uniqueness of the solution of the LCP. Generally speaking, solving an LCP can be approached from two essentially different perspectives. One of them involves the use of so-called direct methods, known in the literature under the name of pivoting methods. The other, and from our perspective the more interesting one, which we focus on in this thesis, is the iterative approach. Among the vast collection of iterative solvers, our choice was one particular class of modulus-based iterative methods. Since the class of modulus-based methods is itself diverse, it can be specialized even further by the introduction and use of matrix splittings. The main goal of this thesis is to use the theory of H-matrices to prove convergence of modulus-based multisplitting methods, and to use this new technique to analyze some important properties of the iterative methods once convergence has been guaranteed.
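A minimal sketch of a modulus-based matrix splitting iteration, in the form usually attributed to Bai (2010), is given below; the Jacobi-type splitting, the choice of Omega and the test matrix are our illustrative assumptions, not the thesis's multisplitting setting.

```python
import numpy as np

def modulus_splitting_lcp(A, q, omega=None, gamma=1.0, tol=1e-10, maxit=500):
    """Modulus-based matrix splitting iteration for LCP(q, A):
    find z >= 0 with w = A z + q >= 0 and z'w = 0. With A = M - N and
    Omega a positive diagonal matrix, iterate
    (M + Omega) x = N x + (Omega - A)|x| - gamma q,
    then recover z = (|x| + x) / gamma."""
    n = A.shape[0]
    Omega = np.diag(np.diag(A)) if omega is None else omega
    M = np.diag(np.diag(A))         # Jacobi-type splitting A = M - N
    N = M - A
    x = np.zeros(n)
    for _ in range(maxit):
        x_new = np.linalg.solve(M + Omega,
                                N @ x + (Omega - A) @ np.abs(x) - gamma * q)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            x = x_new
            break
        x = x_new
    z = (np.abs(x) + x) / gamma
    return z, A @ z + q

# Strictly diagonally dominant M-matrix (hence H+), a standard test case.
A = np.array([[ 4., -1.,  0.],
              [-1.,  4., -1.],
              [ 0., -1.,  4.]])
q = np.array([-1., 2., -3.])
z, w = modulus_splitting_lcp(A, q)
print("z =", z, " w =", w, " z.w =", z @ w)   # z >= 0, w >= 0, z'w ~ 0
```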
APA, Harvard, Vancouver, ISO, and other styles
30

Koniakowski, Isabella. "Debugging in a World Full of Bugs : Designing an educational game to teach debugging and error detection with the help of a teachable agent." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171770.

Full text
Abstract:
This study used the Magical Garden software and earlier research into computational thinking as a point of departure to explore what metaphors could be used, and how a teachable agent could be utilised, to introduce debugging and error detection to preschool children between four and six years old. A research-through-design methodology allowed the researcher to work iteratively, divergently and convergently, through sketching, creating a Pugh matrix, conducting six formative interviews, and finally creating two hybrid concepts as paths to teaching debugging. Many metaphors discovered in the design process and in preschool teachers' daily practices were judged suitable for teaching debugging and error detection. The analysis of these resulted in four recommendations for choosing a suitable metaphor when teaching debugging: it should have clear rights and wrongs, it should allow for variation, it should have an easily understandable sequentiality to it, and it should be appropriate for the age group. Furthermore, six recommendations were formulated for utilising a teachable agent: have explicitly stated learning goals and review them and explore new ones as you go, have a diverse design-space exploration, make the learning-objective task complex rather than the game in general, reflect on whether using a TA is the best solution, make use of the correct terminology, and keep the graphical elements simple. These recommendations, together with the hybrid concepts created, provide researchers and teachers with knowledge of how to choose appropriate metaphors and utilise teachable agents when aiming to teach debugging and error detection to children between four and six years old.
APA, Harvard, Vancouver, ISO, and other styles
31

Chabot, Vincent. "Etude de représentations parcimonieuses des statistiques d'erreur d'observation pour différentes métriques. Application à l'assimilation de données images." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM094/document.

Full text
Abstract:
Les dernières décennies ont vu croître en quantité et en qualité les données satellites. Au fil des ans, ces observations ont pris de plus en plus d'importance en prévision numérique du temps. Ces données sont aujourd'hui cruciales afin de déterminer de manière optimale l'état du système étudié, et ce, notamment car elles fournissent des informations denses et de qualité dans des zones peu observées par les moyens conventionnels. Cependant, le potentiel de ces séquences d'images est encore largement sous-exploité en assimilation de données : ces dernières sont sévèrement sous-échantillonnées, et ce, en partie afin de ne pas avoir à tenir compte des corrélations d'erreurs d'observation. Dans ce manuscrit nous abordons le problème d'extraction, à partir de séquences d'images satellites, d'information sur la dynamique du système durant le processus d'assimilation variationnelle de données. Cette étude est menée dans un cadre idéalisé afin de déterminer l'impact d'un bruit d'observations et/ou d'occultations sur l'analyse effectuée. Lorsque le bruit est corrélé en espace, tenir compte des corrélations en analysant les images au niveau du pixel n'est pas chose aisée : il est nécessaire d'inverser la matrice de covariance d'erreur d'observation (qui se révèle être une matrice de grande taille) ou de faire des approximations aisément inversibles de cette dernière. En changeant d'espace d'analyse, la prise en compte d'une partie des corrélations peut être rendue plus aisée. Dans ces travaux, nous proposons d'effectuer cette analyse dans des bases d'ondelettes ou des trames de curvelettes. En effet, un bruit corrélé en espace n'impacte pas de la même manière les différents éléments composant ces familles. En travaillant dans ces espaces, il est alors plus aisé de tenir compte d'une partie des corrélations présentes au sein du champ d'erreur. La pertinence de l'approche proposée est présentée sur différents cas tests. Lorsque les données sont partiellement occultées, il est cependant nécessaire de savoir comment adapter la représentation des corrélations. Ceci n'est pas chose aisée : travailler avec un espace d'observation changeant au cours du temps rend difficile l'utilisation d'approximations aisément inversibles de la matrice de covariance d'erreur d'observation. Dans ces travaux, une méthode permettant d'adapter, à moindre coût, la représentation des corrélations (dans des bases d'ondelettes) aux données présentes dans chaque image est proposée. L'intérêt de cette approche est présenté dans un cas idéalisé
Recent decades have seen an increase in the quantity and quality of satellite observations. Over the years, those observations have become increasingly important in numerical weather forecasting. Nowadays, these data are crucial in order to determine optimally the state of the studied system. In particular, satellites can provide dense observations in areas poorly observed by conventional networks. However, the potential of such observations is clearly under-used in data assimilation: in order to avoid handling observation-error correlations, thinning methods are employed in association with variance inflation. In this thesis, we address the problem of extracting information on the system dynamics from satellite image data during the variational assimilation process. This study is carried out in an academic context in order to quantify the influence of observation noise and of clouds on the performed analysis. When the noise is spatially correlated, it is hard to take such correlations into account by working in pixel space. Indeed, it is necessary to invert the observation error covariance matrix (which turns out to be very large) or to make an easily invertible approximation of it. Analysing the information in another space can make the job easier. In this manuscript, we propose to perform the analysis step in a wavelet basis or a curvelet frame. Indeed, in those structured spaces, a correlated noise does not affect the different structures in the same way. It is then easier to take part of the error correlations into account: a suitable approximation of the covariance matrix is made by considering only how each kind of element is affected by a correlated noise. The benefit of this approach is demonstrated on different academic test cases. However, when some data are missing, one has to address the problem of adapting the way correlations are taken into account. This is not an easy task: working in a different observation space for each image makes the use of an easily invertible approximate covariance matrix very tricky. In this work, a way to adapt the diagonal hypothesis of the covariance matrix in a wavelet basis, in order to take into account that images are partially hidden, is proposed. The interest of such an approach is presented in an idealised case
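To make the diagonal-in-wavelet-space idea concrete, here is a toy sketch (not the author's code; the noise model, wavelet choice and sample size are all assumptions). It estimates per-coefficient noise variances in a wavelet basis and uses them as a diagonal approximation of R in the observation term of a variational cost function.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def wavelet_vector(field, wavelet="db2", level=2):
    """Flatten the 2D wavelet decomposition of a field into one coefficient array."""
    arr, _ = pywt.coeffs_to_array(pywt.wavedec2(field, wavelet, level=level))
    return arr

# Spatially correlated noise, faked here by smoothing white noise.
rng = np.random.default_rng(0)
noise_samples = np.stack([gaussian_filter(rng.standard_normal((64, 64)), sigma=2.0)
                          for _ in range(200)])

# Diagonal approximation of R in wavelet space: one variance per coefficient.
diag_R = np.stack([wavelet_vector(s) for s in noise_samples]).var(axis=0) + 1e-12

def observation_term(y, hx):
    """(y - H(x))^T R^{-1} (y - H(x)) with R taken diagonal in the wavelet basis."""
    d = wavelet_vector(y - hx)
    return float(np.sum(d ** 2 / diag_R))
```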
APA, Harvard, Vancouver, ISO, and other styles
32

Alves, Marcelo Muniz Silva. "Codigos geometricamente uniformes em espaços de Lee." [s.n.], 1998. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306609.

Full text
Abstract:
Advisor: Sueli Irene Rodrigues Costa
Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Made available in DSpace on 2018-07-23T11:40:28Z (GMT). No. of bitstreams: 1 Alves_MarceloMunizSilva_M.pdf: 2107560 bytes, checksum: 6f6290ff4cfa14083f8d89ca2d08a5f5 (MD5) Previous issue date: 1998
Resumo: Not provided.
Abstract: Not informed.
Master's degree
Master in Mathematics
APA, Harvard, Vancouver, ISO, and other styles
33

Bautista, Martín Miguel Ángel. "Learning error-correcting representations for multi-class problems." Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/396124.

Full text
Abstract:
Real life is full of multi-class decision tasks. In the Pattern Recognition field, several methodologies have been proposed to deal with binary problems, obtaining satisfying results in terms of performance. However, the extension of very powerful binary classifiers to the multi-class case is a complex task. The Error-Correcting Output Codes (ECOC) framework has demonstrated to be a very powerful tool to combine binary classifiers to tackle multi-class problems. However, most combinations of binary classifiers in the ECOC framework overlook the underlying structure of the multi-class problem. In addition, it is still unclear how the error correction of an ECOC design is distributed among the different classes. In this dissertation, we are interested in tackling critical problems of the ECOC framework, such as the definition of the number of classifiers needed to tackle a multi-class problem, how to adapt the ECOC coding to multi-class data, and how to distribute error correction among different pairs of categories. In order to deal with these issues, this dissertation describes several proposals. 1) We define a new representation for ECOC coding matrices that expresses the pair-wise codeword separability and allows for a deeper understanding of how error correction is distributed among classes. 2) We study the effect of using a logarithmic number of binary classifiers to treat the multi-class problem in order to obtain very efficient models. 3) In order to search for very compact ECOC coding matrices that take into account the distribution of multi-class data, we use Genetic Algorithms that respect the constraints of the ECOC framework. 4) We propose a discrete factorization algorithm that finds an ECOC configuration that allocates the error-correcting capabilities to those classes that are more prone to errors. The proposed methodologies are evaluated on different real and synthetic data sets: the UCI Machine Learning Repository, handwriting symbols, traffic signs from a Mobile Mapping System, and Human Pose Recovery. The results of this thesis show that significant performance improvements are obtained over traditional ECOC coding designs when the proposed ECOC coding designs are taken into account.
En la vida cotidiana las tareas de decisión multi-clase surgen constantemente. En el campo de Reconocimiento de Patrones muchos métodos de clasificación binaria han sido propuestos obteniendo resultados altamente satisfactorios en términos de rendimiento. Sin embargo, la extensión de estos sofisticados clasificadores binarios al contexto multi-clase es una tarea compleja. En este ámbito, las estrategias de Códigos Correctores de Errores (CCEs) han demostrado ser una herramienta muy potente para tratar la combinación de clasificadores binarios. No obstante, la mayoría de arquitecturas de combinación de clasificadores binarios negligen la estructura del problema multi-clase. Sin embargo, el análisis de la distribución de corrección de errores entre clases es aún un problema abierto. En esta tesis doctoral, nos centramos en tratar problemas críticos de los códigos correctores de errores; la definición del número de clasificadores necesarios para tratar un problema multi-clase arbitrario; la adaptación de los problemas binarios al problema multi-clase y cómo distribuir la corrección de errores entre clases. Para dar respuesta a estas cuestiones, en esta tesis doctoral describimos varias propuestas. 1) Definimos una nueva representación para CCEs que expresa la separabilidad entre pares de códigos y nos permite una mejor comprensión de cómo se distribuye la corrección de errores entre distintas clases. 2) Estudiamos el efecto de usar un número logarítmico de clasificadores binarios para tratar el problema multi-clase con el objetivo de obtener modelos muy eficientes. 3) Con el objetivo de encontrar modelos muy eficientes que tienen en cuenta la estructura del problema multi-clase utilizamos algoritmos genéticos que tienen en cuenta las restricciones de los ECCs. 4) Proponemos un algoritmo de factorización de matrices discreta que encuentra ECCs con una configuración que distribuye corrección de error a aquellas categorías que son más propensas a tener errores. Las metodologías propuestas son evaluadas en distintos problemas reales y sintéticos como por ejemplo: Repositorio UCI de Aprendizaje Automático, reconocimiento de símbolos escritos, clasificación de señales de tráfico y reconocimiento de la pose humana. Los resultados obtenidos en esta tesis muestran mejoras significativas en rendimiento comparados con los diseños tradiciones de ECCs cuando las distintas propuestas se tienen en cuenta.
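As a heavily simplified illustration of the framework discussed in this abstract, the sketch below shows classical ECOC coding and Hamming-distance decoding with a fixed matrix, not the learned designs proposed in the dissertation; the coding matrix and toy data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Coding matrix for 3 classes, entries in {-1, +1}; each column is one dichotomy.
CODE = np.array([[+1, +1, -1],
                 [+1, -1, +1],
                 [-1, +1, +1]])

def ecoc_fit(X, y):
    """Train one binary classifier per column of the coding matrix."""
    return [LogisticRegression().fit(X, CODE[y, l]) for l in range(CODE.shape[1])]

def ecoc_predict(X, classifiers):
    """Decode by choosing the nearest codeword in Hamming distance."""
    bits = np.column_stack([c.predict(X) for c in classifiers])
    dists = (bits[:, None, :] != CODE[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)

# Toy usage with three Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, size=(50, 2)) for m in ([0, 0], [3, 0], [0, 3])])
y = np.repeat([0, 1, 2], 50)
clfs = ecoc_fit(X, y)
print((ecoc_predict(X, clfs) == y).mean())   # training accuracy of the toy model
```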
APA, Harvard, Vancouver, ISO, and other styles
34

Cheng, Sibo. "Error covariance specification and localization in data assimilation with industrial application Background error covariance iterative updating with invariant observation measures for data assimilation A graph clustering approach to localization for adaptive covariance tuning in data assimilation based on state-observation mapping Error covariance tuning in variational data assimilation: application to an operating hydrological model." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPAST067.

Full text
Abstract:
Les méthodes d’assimilation de données et plus particulièrement les méthodes variationnelles sont mises à profit dans le domaine industriel pour deux grands types d’applications que sont la reconstruction de champ physique et le recalage de paramètres. Une des difficultés de mise en œuvre des algorithmes d’assimilation est que la structure de matrices de covariance d’erreurs, surtout celle d’ébauche, n’est souvent pas ou mal connue. Dans cette thèse, on s’intéresse à la spécification et la localisation de matrices de covariance dans des systèmes multivariés et multidimensionnels, et dans un cadre industriel. Dans un premier temps, on cherche à adapter/améliorer notre connaissance sur les covariances d’analyse à l’aide d’un processus itératif. Dans ce but nous avons développé deux nouvelles méthodes itératives pour la construction de matrices de covariance d’erreur d’ébauche. L’efficacité de ces méthodes est montrée numériquement en expériences jumelles avec des erreurs indépendantes ou relatives aux états vrais. On propose ensuite un nouveau concept de localisation pour le diagnostic et l’amélioration des covariances des erreurs. Au lieu de s’appuyer sur une distance spatiale, cette localisation est établie exclusivement à partir de liens entre les variables d’état et les observations. Finalement, on applique une combinaison de ces nouvelles approches et de méthodes plus classiques existantes, pour un modèle hydrologique multivarié développé à EDF. L’assimilation de données est mise en œuvre pour corriger la quantité de précipitation observée afin d’obtenir une meilleure prévision du débit d’une rivière en un point donné
Data assimilation techniques are widely applied in industrial problems of field reconstruction or parameter identification. The error covariance matrices, especially the background matrix, are often difficult to specify in data assimilation. In this thesis, we are interested in the specification and localization of covariance matrices in multivariate and multidimensional systems in an industrial context. We propose to improve the covariance specification by iterative processes. Hence, we develop two new iterative methods for background matrix recognition. The power of these methods is demonstrated numerically in twin experiments with errors that are independent of, or relative to, the true states. We then propose a new concept of localization and apply it to error covariance tuning. Instead of relying on spatial distance, this localization is established purely on links between state variables and observations. Finally, we apply these new approaches, together with other classical methods for comparison, to a multivariate hydrological model. Variational assimilation is implemented to correct the observed precipitation in order to obtain a better river flow forecast
APA, Harvard, Vancouver, ISO, and other styles
35

Seiss, Mark Thomas. "Improving Survey Methodology Through Matrix Sampling Design, Integrating Statistical Review Into Data Collection, and Synthetic Estimation Evaluation." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/47968.

Full text
Abstract:
The research presented in this dissertation touches on all aspects of survey methodology, from questionnaire design to final estimation. We first approach the questionnaire development stage by proposing a method of developing matrix sampling designs, a design where a subset of questions is administered to a respondent in such a way that the administered questions are predictive of the omitted questions. The proposed methodology compares favorably to previous methods when applied to data collected from a household survey conducted in the Nampula province of Mozambique. We approach the data collection stage by proposing a structured procedure of implementing small-scale surveys in such a way that non-sampling error attributable to data collection is minimized. This proposed methodology requires the inclusion of the statistician in the data editing process during data collection. We implemented the structured procedure during the collection of household survey data in the city of Maputo, the capital of Mozambique. We found indications that the data resulting from the structured procedure are of higher quality than the data with no editing. Finally, we approach the estimation phase of sample surveys by proposing a model-based approach to the estimation of the mean squared error associated with synthetic (indirect) estimates. Previous methodology aggregates estimates for stability, while our proposed methodology allows area-specific estimates. We applied the proposed mean squared error estimation methodology, and methods found during the literature review, to simulated data and estimates from the 2010 Census Coverage Measurement (CCM). We found that our proposed mean squared error estimation methodology compares favorably to the previous methods, while allowing for area-specific estimates.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
36

Kuljus, Kristi. "Rank Estimation in Elliptical Models : Estimation of Structured Rank Covariance Matrices and Asymptotics for Heteroscedastic Linear Regression." Doctoral thesis, Uppsala universitet, Matematisk statistik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9305.

Full text
Abstract:
This thesis deals with univariate and multivariate rank methods in making statistical inference. It is assumed that the underlying distributions belong to the class of elliptical distributions. The class of elliptical distributions is an extension of the normal distribution and includes distributions with both lighter and heavier tails than the normal distribution. In the first part of the thesis the rank covariance matrices defined via the Oja median are considered. The Oja rank covariance matrix has two important properties: it is affine equivariant and it is proportional to the inverse of the regular covariance matrix. We employ these two properties to study the problem of estimating the rank covariance matrices when they have a certain structure. The second part, which is the main part of the thesis, is devoted to rank estimation in linear regression models with symmetric heteroscedastic errors. We are interested in asymptotic properties of rank estimates. Asymptotic uniform linearity of a linear rank statistic in the case of heteroscedastic variables is proved. The asymptotic uniform linearity property makes it possible to study the asymptotic behaviour of rank regression estimates and rank tests. Existing results are generalized and it is shown that the Jaeckel estimate is consistent and asymptotically normally distributed also for heteroscedastic symmetric errors.
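For orientation, the Jaeckel estimate mentioned above minimizes a rank-based dispersion of the residuals. A minimal sketch with Wilcoxon scores follows; the simulated heteroscedastic data and the derivative-free optimizer are assumptions for illustration, not the thesis's setup.

```python
import numpy as np
from scipy.optimize import minimize

def jaeckel_dispersion(beta, X, y):
    """Jaeckel's rank dispersion with Wilcoxon scores a(i) = sqrt(12)*(i/(n+1) - 1/2).
    Minimizing it over beta gives the rank (R-) estimate of the regression slopes."""
    e = y - X @ beta
    ranks = np.argsort(np.argsort(e)) + 1
    scores = np.sqrt(12.0) * (ranks / (len(e) + 1.0) - 0.5)
    return np.sum(scores * e)

# Toy usage on simulated heteroscedastic symmetric errors.
rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 2))
y = X @ np.array([1.0, -2.0]) + (1 + 0.5 * np.abs(X[:, 0])) * rng.standard_normal(n)
res = minimize(jaeckel_dispersion, x0=np.zeros(2), args=(X, y), method="Nelder-Mead")
print(res.x)   # slope estimates; the intercept is not identified by ranks
```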
APA, Harvard, Vancouver, ISO, and other styles
37

Nino, Ruiz Elias David. "Efficient formulation and implementation of ensemble based methods in data assimilation." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/64438.

Full text
Abstract:
Ensemble-based methods have gained widespread popularity in the field of data assimilation. An ensemble of model realizations encapsulates information about the error correlations driven by the physics and the dynamics of the numerical model. This information can be used to obtain improved estimates of the state of non-linear dynamical systems such as the atmosphere and/or the ocean. This work develops efficient ensemble-based methods for data assimilation. A major bottleneck in ensemble Kalman filter (EnKF) implementations is the solution of a linear system at each analysis step. To alleviate it, an EnKF implementation based on an iterative Sherman-Morrison formula is proposed. The rank deficiency of the ensemble covariance matrix is exploited in order to efficiently compute the analysis increments during the assimilation process. The computational effort of the proposed method is comparable to that of the best EnKF implementations found in the current literature. The stability of the new algorithm is theoretically proven based on the positive definiteness of the data error covariance matrix. In order to improve the background error covariance matrices in ensemble-based data assimilation, we explore the use of shrinkage covariance matrix estimators from ensembles. The resulting filter has attractive features in terms of both memory usage and computational complexity. Numerical results show that it performs better than traditional EnKF formulations. In geophysical applications the correlations between errors corresponding to distant model components decrease rapidly with distance. We propose a new and efficient implementation of the EnKF based on a modified Cholesky decomposition for inverse covariance matrix estimation. This approach exploits the conditional independence of background errors between distant model components with regard to a predefined radius of influence. Consequently, sparse estimators of the inverse background error covariance matrix can be obtained. This implies huge memory savings during the assimilation process under realistic weather forecast scenarios. Rigorous error bounds for the resulting estimator in the context of data assimilation are theoretically proved. The conclusion is that the resulting estimator converges to the true inverse background error covariance matrix when the ensemble size is of the order of the logarithm of the number of model components. We explore high-performance implementations of the proposed EnKF algorithms. When the observational operator can be locally approximated for different regions of the domain, efficient parallel implementations of the EnKF formulations presented in this dissertation can be obtained. The parallel computation of the analysis increments is performed making use of domain decomposition. Local analysis increments are computed on (possibly) different processors. Once all local analysis increments have been computed, they are mapped back onto the global domain to recover the global analysis. Tests performed with an atmospheric general circulation model at a T-63 resolution, and varying the number of processors from 96 to 2,048, reveal that the assimilation time can be decreased severalfold for all the proposed EnKF formulations. Ensemble-based methods can be used to reformulate strong-constraint four-dimensional variational data assimilation so as to avoid the construction of adjoint models, which can be complicated for operational models. We propose a trust region approach based on ensembles in which the analysis increments are computed in the space of an ensemble of snapshots. The quality of the resulting increments in the ensemble space is compared against the gains in the full space. Decisions on whether to accept or reject solutions rely on trust region updating formulas. Results based on an atmospheric general circulation model with a T-42 resolution reveal that this methodology can improve the analysis accuracy.
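For readers unfamiliar with the baseline, a generic textbook sketch of the stochastic EnKF analysis step follows; the linear solve on the innovation covariance is the per-step bottleneck that the iterative Sherman-Morrison and modified-Cholesky formulations above are designed to avoid. This is not the dissertation's code.

```python
import numpy as np

def enkf_analysis(Xb, y, H, R, rng):
    """One stochastic (perturbed-observations) EnKF analysis step.
    Xb: (n, m) background ensemble, y: (p,) observations,
    H: (p, n) linear observation operator, R: (p, p) observation error covariance."""
    n, m = Xb.shape
    A = Xb - Xb.mean(axis=1, keepdims=True)
    Pb = A @ A.T / (m - 1)                    # rank-deficient ensemble covariance
    S = H @ Pb @ H.T + R                      # innovation covariance (the costly solve)
    K = Pb @ H.T @ np.linalg.inv(S)           # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros_like(y), R, size=m).T
    return Xb + K @ (Y - H @ Xb)              # analysis ensemble
```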
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
38

Dahlin, Mathilda. "Avkodning av cykliska koder - baserad på Euklides algoritm." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-48248.

Full text
Abstract:
Today’s society requires that the transfer of information is done effectively and correctly; in other words, the received message must correspond to the message that was sent. There are many decoding methods to locate and correct errors. The main purpose of this degree project is to study one of these methods, based on the Euclidean algorithm. Thereafter an example is presented showing how the method is used when decoding a three-error-correcting BCH code. To begin with, fundamental concepts of coding theory are introduced. Secondly, linear codes, cyclic codes and BCH codes, in that order, are explained before advancing to the decoding process. The results show that correcting one or two errors is relatively simple, but when three or more errors occur it becomes much more complicated. In that case, a specific method is required.
Dagens samhälle kräver att informationsöverföring sker på ett effektivt och korrekt sätt, det vill säga att den information som når mottagaren motsvarar den som skickades från början. Det finns många avkodningsmetoder för att lokalisera och rätta fel. Syftet i denna uppsats är att studera en av dessa, en som baseras på Euklides algoritm och därefter illustrera ett exempel på hur metoden används vid avkodning av en tre-rättande BCH-kod. Först ges en presentation av grunderna inom kodningsteorin. Sedan introduceras linjära koder, cykliska koder och BCH-koder i nämnd ordning, för att till sist presentera avkodningsprocessen. Det visar sig att det är relativt enkelt att rätta ett eller två fel, men när tre eller fler fel uppstår blir det betydligt mer komplicerat. Då krävs någon speciell metod.
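The Euclidean-algorithm step this thesis studies can be sketched generically: run the extended Euclidean algorithm on x^(2t) and the syndrome polynomial S(x), stopping once the remainder has degree below t; the Bezout coefficient is then the error-locator polynomial (Sugiyama's algorithm). For brevity the sketch below works over a prime field GF(Q) instead of GF(2^m); the modulus and the coefficient convention are assumptions for illustration.

```python
Q = 929  # illustrative prime modulus (an assumption, not from the thesis)

def ptrim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def psub(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a): out[i] = (out[i] + c) % Q
    for i, c in enumerate(b): out[i] = (out[i] - c) % Q
    return ptrim(out)

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % Q
    return ptrim(out)

def pdivmod(a, b):
    if len(a) < len(b):
        return [0], a[:]
    a = a[:]
    quot = [0] * (len(a) - len(b) + 1)
    inv = pow(b[-1], Q - 2, Q)            # inverse of the leading coefficient
    for i in range(len(quot) - 1, -1, -1):
        c = a[i + len(b) - 1] * inv % Q
        quot[i] = c
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - c * bj) % Q
    return ptrim(quot), ptrim(a)

def key_equation(S, t):
    """Sugiyama: returns (error locator, error evaluator), up to a scalar factor.
    Polynomials are coefficient lists, lowest degree first."""
    r0, r1 = [0] * (2 * t) + [1], ptrim(S[:2 * t])   # x^(2t) and S(x)
    u0, u1 = [0], [1]
    while len(r1) - 1 >= t:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, psub(u0, pmul(q, u1))
    return u1, r1
```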
APA, Harvard, Vancouver, ISO, and other styles
39

SILVA, Cássio André Sousa da. "Correção de apagamentos em rajadas utilizando códigos LDPC gerados pela composição de matrizes bases e pelos moviementos de matrizes circulantes." Universidade Federal do Pará, 2016. http://repositorio.ufpa.br/jspui/handle/2011/8232.

Full text
Abstract:
Made available in DSpace on 2017-04-24T16:57:51Z (GMT). No. of bitstreams: 2 license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) Tese_CorrecaoApagamentosRajadas.pdf: 12648601 bytes, checksum: 32c72b34186616144110cb119cba02b1 (MD5) Previous issue date: 2016-10-21
Nesta tese são propostos procedimentos para a construção de matrizes de verificação de paridade para codificação e decodificação de códigos LDPC (low-density parity-check) na recuperação de bits apagados no canal com apagamentos em rajada. As matrizes de verificação de paridade são produzidas por concatenação das matrizes bases binárias justapostas por matrizes circulantes, sendo de fácil implementação e de menor aleatoriedade. As matrizes bases são desenvolvidas a partir de fundamentos da álgebra e da geometria. Para demonstrar o potencial da técnica foi elaborado um conjunto de simulações que usa codificação de baixa complexidade, bem como o uso dos algoritmos soma e produto para recuperar os apagamentos. Foram gerados vários códigos LDPC, a partir das matrizes, e os resultados obtidos foram comparados com outros códigos LDPC obtidos da literatura. São ainda apresentados os resultados da simulação da recuperação de apagamentos resultantes da transmissão de uma imagem através de um canal ruidoso.
This thesis proposes procedures for the construction of parity-check matrices for the encoding and decoding of LDPC codes in the recovery of erased bits over the burst erasure channel. The parity-check matrices are produced by concatenating binary base matrices juxtaposed with circulant matrices; they are easy to implement and have low randomness. The base matrices are developed from foundations of algebra and geometry. To demonstrate the potential of the technique, we developed a number of simulations using low-complexity encoding as well as the sum-product algorithm. Several LDPC codes (matrices) were generated and the results were compared with other approaches. We also present the outcomes of erasure-recovery simulations resulting from the transmission of an image through a noisy channel.
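A minimal illustration of how a parity-check matrix recovers erasures is the generic peeling decoder for the binary erasure channel, sketched below; the thesis itself uses sum-product decoding on its specific constructions, and the (7,4) Hamming matrix here is an assumption for the demo.

```python
import numpy as np

def peel_decode(H, y, erased):
    """Peeling decoder: repeatedly find a parity check involving exactly one
    erased bit and solve it by XOR of the known bits in that check."""
    y = y.copy()
    erased = set(erased)
    progress = True
    while erased and progress:
        progress = False
        for row in H:
            unknown = [j for j in np.flatnonzero(row) if j in erased]
            if len(unknown) == 1:
                j = unknown[0]
                known = [k for k in np.flatnonzero(row) if k != j]
                y[j] = np.bitwise_xor.reduce(y[known]) if known else 0
                erased.discard(j)
                progress = True
    return y, erased   # a non-empty 'erased' set means a stopping set was hit

# Toy usage with the (7,4) Hamming parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.array([1, 0, 1, 1, 0, 1, 0])
received = codeword.copy()
erased_positions = [0, 4]
received[erased_positions] = 0            # placeholder values for erased bits
decoded, rest = peel_decode(H, received, erased_positions)
print(decoded, rest)                      # expect the original codeword, empty set
```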
APA, Harvard, Vancouver, ISO, and other styles
40

Essongue-Boussougou, Simon. "Méthode des éléments finis augmentés pour la rupture quasi-fragile : application aux composites tissés à matrice céramique." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0018/document.

Full text
Abstract:
Le calcul de la durée de vie des Composites tissés à Matrice Céramique (CMC) nécessite de déterminer l’évolution de la densité de fissures dans le matériau (pouvant atteindre 10 mm⁻¹). Afin de les représenter finement on se propose de travailler à l’échelle mésoscopique. Les méthodes de type Embedded Finite Element (EFEM) nous ont paru être les plus adaptées au problème. Elles permettent une représentation discrète des fissures sans introduire de degrés de liberté additionnels. Notre choix s’est porté sur une EFEM s’affranchissant d’itérations élémentaires et appelée Augmented Finite Element Method (AFEM). Une variante d’AFEM, palliant des lacunes de la méthode originale, a été développée. Nous avons démontré que, sous certaines conditions, AFEM et la méthode des éléments finis classique (FEM) étaient équivalentes. Nous avons ensuite comparé la précision d’AFEM et de FEM pour représenter des discontinuités fortes et faibles. Les travaux de thèse se concluent par des exemples d’application de la méthode aux CMC
Computing the lifetime of woven Ceramic Matrix Composites (CMC) requires evaluating the crack density in the material (which can reach 10 mm⁻¹). Numerical simulations at the mesoscopic scale are needed to precisely estimate it. Embedded Finite Element Methods (EFEM) seem to be the most appropriate to do so. They allow for a discrete representation of cracks with no additional degrees of freedom. We chose to work with an EFEM free from local iterations named the Augmented Finite Element Method (AFEM). Improvements over the original AFEM have been proposed. We also demonstrated that, under one hypothesis, the AFEM and the classical Finite Element Method (FEM) are fully equivalent. We then compare the accuracy of the AFEM and the classical FEM to represent weak and strong discontinuities. Finally, some examples of application of AFEM to CMC are given
APA, Harvard, Vancouver, ISO, and other styles
41

Vyas, Saurabh, and Venkata Dinesh Raju Jonnalagadda. "Modelling of Automotive Suspension Damper." Thesis, KTH, Fordonsdynamik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293498.

Full text
Abstract:
A hydraulic damper plays an important role in tuning the handling and comfort characteristics of a vehicle. Tuning and selecting a damper based on subjective evaluation, by considering the opinions of various users, would be an inefficient method since the comfort requirements of users vary a lot. Instead, mathematical models of the damper and simulation of these models in various operating conditions are preferred to standardize the tuning procedure, quantify the comfort levels and reduce the cost of testing. This requires a model which is good enough to capture the behaviour of the damper in various operating and extreme conditions. The Force-Velocity (FV) curve is one of the most widely used models of a damper. This curve is implemented either as an equation or as a look-up table. It is a plot of the maximum force at each peak velocity point. There are certain dynamic phenomena, like hysteresis and dependency on the displacement of the damper, which cannot be captured with an FV curve model but are required for a better understanding of the vehicle behaviour. This thesis was conducted in cooperation with Volvo Cars with the aim of improving the existing damper model, which is a Force-Velocity curve. The work focuses on developing a damper model which is complex enough to capture the phenomena discussed above and simple enough to be implemented in real-time simulations. The thesis also aims to establish a standard method to parameterise the damper model and generate the Force-Velocity curve from the tests performed on the damper test rig. A test matrix which includes the standard tests for parameterising and the extreme test cases for the validation of the developed model is developed. The final focus is to implement the damper model in multi-body simulation (MBS) software. The master thesis starts with an introduction, where the background for the project is described and the thesis goals are set. It is followed by a literature review in which a few advanced damper models are discussed in brief. Then, a step-by-step process of developing the damper model is discussed along with a few more possible options. Later, the construction of a test matrix is discussed in detail, followed by the parameter identification process. Next, the validation of the developed damper model is discussed using test data from the Volvo Hällered Proving Ground (HPG). After validation, the implementation of the model in VI CarRealTime and Adams Car is presented along with the results. Finally, the thesis is concluded and recommendations for future work are made on further improving the model.
En hydraulisk stötdämpare spelar en viktig roll för fordonets hantering och komfort. Att justera och välja en stötdämpare baserat på subjektiv utvärdering, genom att beakta olika användares åsikter, skulle vara en ineffektiv metod eftersom användarnas komfortkrav varierar mycket. Istället föredras matematiska modeller av stötdämpare och simulering av dessa modeller under olika driftsförhållanden för att standardisera inställningsförfarandet, kvantifiera komfortnivåerna och minska testkostnaden. Detta skulle kräva en modell som är tillräckligt bra för att fånga upp stötdämparens beteende under olika drifts- och extrema förhållanden. Force-Velocity (FV)-kurvan är en av de mest använda stötdämparmodellerna. Denna kurva implementeras antingen som en ekvation eller som en uppslagstabell. Det är ett diagram som redovisar den maximala kraften vid varje maxhastighetspunkt. Det finns vissa dynamiska fenomen som hysteres och beroende av stötdämparens förskjutning, som inte kan fångas med en FV-kurvmodell, men som krävs för att bättre förstå fordonets beteende. Denna avhandling genomfördes i samarbete med Volvo Cars i syfte att förbättra den befintliga stötdämparmodellen som är en Force-Velocity-kurva. Detta arbete fokuserar på att utveckla en stötdämparmodell, som är tillräckligt komplex för att fånga upp de fenomen som diskuterats ovan och tillräckligt enkel för att implementeras i realtidssimuleringar. Avhandlingen syftar också till att upprätta en standardmetod för att parametrisera spjällmodellen och generera Force-Velocity-kurvan från de test som utförts på stötdämpartestriggen. En testmatris som innehåller standardtest för parametrisering och extrema testfall för validering av den utvecklade modellen kommer att utvecklas. Det sista fokuset är att implementera stötdämparmodellen i en multi-body simulation (MBS)-programvara. Examensarbetet inleds med en introduktion, där bakgrunden för projektet beskrivs och därefter definieras målen med arbetet. Det följs av en litteraturöversikt där några avancerade stötdämparmodeller diskuteras i korthet. Därefter diskuteras en steg-för-steg-process för att utveckla stötdämparmodeller tillsammans med några fler möjliga alternativ. Senare diskuteras konstruktionen av en testmatris i detalj följt av parameteridentifieringsprocessen. Därefter diskuteras valideringen av den utvecklade stötdämparmodellen med hjälp av testdata från Volvo Hällered Proving Ground (HPG). Efter validering presenteras implementeringen av modellen i VI CarRealTime och Adams Car tillsammans med resultaten. Slutligen avslutas rapporten med slutsatser från arbetet och rekommendationer för framtida arbete görs för att ytterligare förbättra modellen.
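For reference, the baseline Force-Velocity model that the thesis sets out to improve can be sketched in a few lines; the table values below are invented for illustration.

```python
import numpy as np

# Hypothetical FV look-up table: peak force (N) at each peak velocity (m/s);
# negative velocities correspond to rebound, positive to compression.
V_PTS = np.array([-1.0, -0.5, -0.1, 0.0, 0.1, 0.5, 1.0])
F_PTS = np.array([-2500.0, -1500.0, -400.0, 0.0, 300.0, 1100.0, 1800.0])

def damper_force(v):
    """Quasi-static FV-curve damper model. By construction it captures no
    hysteresis and no displacement dependence, which is exactly the
    limitation discussed in the abstract."""
    return np.interp(v, V_PTS, F_PTS)

print(damper_force(0.3))   # linear interpolation between table points
```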
APA, Harvard, Vancouver, ISO, and other styles
42

Koniaris, Christos. "Perceptually motivated speech recognition and mispronunciation detection." Doctoral thesis, KTH, Tal-kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102321.

Full text
Abstract:
This doctoral thesis is the result of a research effort performed in two fields of speech technology, i.e., speech recognition and mispronunciation detection. Although the two areas are clearly distinguishable, the proposed approaches share a common hypothesis based on psychoacoustic processing of speech signals. The conjecture implies that the human auditory periphery provides a relatively good separation of different sound classes. Hence, it is possible to use recent findings from psychoacoustic perception together with mathematical and computational tools to model the auditory sensitivities to small speech signal changes. The performance of an automatic speech recognition system strongly depends on the representation used for the front-end. If the extracted features do not include all relevant information, the performance of the classification stage is inherently suboptimal. The work described in Papers A, B and C is motivated by the fact that humans perform better at speech recognition than machines, particularly for noisy environments. The goal is to make use of knowledge of human perception in the selection and optimization of speech features for speech recognition. These papers show that maximizing the similarity of the Euclidean geometry of the features to the geometry of the perceptual domain is a powerful tool to select or optimize features. Experiments with a practical speech recognizer confirm the validity of the principle. An approach to improve mel frequency cepstrum coefficients (MFCCs) through offline optimization is also shown. The method has three advantages: i) it is computationally inexpensive, ii) it does not use the auditory model directly, thus avoiding its computational cost, and iii) importantly, it provides better recognition performance than traditional MFCCs for both clean and noisy conditions. The second task concerns automatic pronunciation error detection. The research, described in Papers D, E and F, is motivated by the observation that almost all native speakers perceive, relatively easily, the acoustic characteristics of their own language when it is produced by speakers of the language. Small variations within a phoneme category, sometimes different for various phonemes, do not change significantly the perception of the language’s own sounds. Several methods are introduced, based on similarity measures between the Euclidean space spanned by the acoustic representations of the speech signal and the Euclidean space spanned by an auditory model output, to identify the problematic phonemes for a given speaker. The methods are tested for groups of speakers from different languages and evaluated according to a theoretical linguistic study showing that they can capture many of the problematic phonemes that speakers from each language mispronounce. Finally, a listening test on the same dataset verifies the validity of these methods.



European Union FP6-034362 research project ACORNS
Computer-Animated language Teachers (CALATea)
APA, Harvard, Vancouver, ISO, and other styles
43

Palkki, Ryan D. "Chemical identification under a poisson model for Raman spectroscopy." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45935.

Full text
Abstract:
Raman spectroscopy provides a powerful means of chemical identification in a variety of fields, partly because of its non-contact nature and the speed at which measurements can be taken. The development of powerful, inexpensive lasers and sensitive charge-coupled device (CCD) detectors has led to widespread use of commercial and scientific Raman systems. However, relatively little work has been done developing physics-based probabilistic models for Raman measurement systems and crafting inference algorithms within the framework of statistical estimation and detection theory. The objective of this thesis is to develop algorithms and performance bounds for the identification of chemicals from their Raman spectra. First, a Poisson measurement model based on the physics of a dispersive Raman device is presented. The problem is then expressed as one of deterministic parameter estimation, and several methods are analyzed for computing the maximum-likelihood (ML) estimates of the mixing coefficients under our data model. The performance of these algorithms is compared against the Cramér-Rao lower bound (CRLB). Next, the Raman detection problem is formulated as one of multiple hypothesis detection (MHD), and an approximation to the optimal decision rule is presented. The resulting approximations are related to the minimum description length (MDL) approach to inference. In our simulations, this method is seen to outperform two common general detection approaches, the spectral unmixing approach and the generalized likelihood ratio test (GLRT). The MHD framework is applied naturally to both the detection of individual target chemicals and to the detection of chemicals from a given class. The common, yet vexing, scenario is then considered in which chemicals are present that are not in the known reference library. A novel variation of nonnegative matrix factorization (NMF) is developed to address this problem. Our simulations indicate that this algorithm gives better estimation performance than the standard two-stage NMF approach and the fully supervised approach when there are chemicals present that are not in the library. Finally, estimation algorithms are developed that take into account errors that may be present in the reference library. In particular, an algorithm is presented for ML estimation under a Poisson errors-in-variables (EIV) model. It is shown that this same basic approach can also be applied to the nonnegative total least squares (NNTLS) problem. Most of the techniques developed in this thesis are applicable to other problems in which an object is to be identified by comparing some measurement of it to a library of known constituent signatures.
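ML estimation of nonnegative mixing coefficients under a Poisson model admits a classic multiplicative EM (Richardson-Lucy-type) update. Whether the thesis uses this particular algorithm is not stated here, so the sketch below is one standard option, with synthetic spectra assumed for the demo.

```python
import numpy as np

def poisson_ml_mixing(S, y, n_iter=500):
    """EM updates for a >= 0 maximizing the Poisson likelihood of y ~ Poisson(S a).
    S: (n_channels, n_chemicals) nonnegative reference spectra, y: measured counts."""
    a = np.full(S.shape[1], y.sum() / S.sum())     # positive initial guess
    col_sums = S.sum(axis=0)
    for _ in range(n_iter):
        ratio = y / np.clip(S @ a, 1e-12, None)    # elementwise data/model ratio
        a *= (S.T @ ratio) / col_sums              # multiplicative EM step
    return a

# Toy check: two synthetic "spectra" and counts drawn from a known mixture.
rng = np.random.default_rng(0)
S = rng.uniform(0.0, 1.0, size=(100, 2))
y = rng.poisson(S @ np.array([50.0, 10.0]))
print(poisson_ml_mixing(S, y))   # estimates should be near [50, 10]
```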
APA, Harvard, Vancouver, ISO, and other styles
44

Zou, Weiyao. "OPTIMIZATION OF ZONAL WAVEFRONT ESTIMATION AND CURVATURE MEASUREMENTS." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4134.

Full text
Abstract:
Optical testing in adverse environments, ophthalmology and applications where characterization by curvature is leveraged all have a common goal: accurately estimate wavefront shape. This dissertation investigates wavefront sensing techniques as applied to optical testing based on gradient and curvature measurements. Wavefront sensing involves the ability to accurately estimate shape over any aperture geometry, which requires establishing a sampling grid and estimation scheme, quantifying estimation errors caused by measurement noise propagation, and designing an instrument with sufficient accuracy and sensitivity for the application. Starting with gradient-based wavefront sensing, a zonal least-squares wavefront estimation algorithm for any irregular pupil shape and size is presented, for which the normal matrix equation sets share a pre-defined matrix. A Gerchberg–Saxton iterative method is employed to reduce the deviation errors in the estimated wavefront caused by the pre-defined matrix across the discontinuous boundary. The results show that the RMS deviation error of the estimated wavefront from the original wavefront can be between λ/130 and λ/150 (for λ = 632.8 nm) after about twelve iterations, and less than λ/100 after as few as four iterations. The presented approach to handling irregular pupil shapes applies equally well to wavefront estimation from curvature data. A defining characteristic for a wavefront estimation algorithm is its error propagation behavior. The error propagation coefficient can be formulated as a function of the eigenvalues of the wavefront estimation-related matrices, and such functions are established for each of the basic estimation geometries (i.e. Fried, Hudgin and Southwell) with a serial numbering scheme, where a square sampling grid array is sequentially indexed row by row. The results show that with the wavefront piston-value fixed, the odd-number grid sizes yield lower error propagation than the even-number grid sizes for all geometries. The Fried geometry either allows sub-sized wavefront estimations within the testing domain or yields a two-rank deficient estimation matrix over the full aperture; but the latter usually suffers from high error propagation and the waffle mode problem. Hudgin geometry offers an error propagator between those of the Southwell and the Fried geometries. For both wavefront gradient-based and wavefront difference-based estimations, the Southwell geometry is shown to offer the lowest error propagation with the minimum-norm least-squares solution. Noll's theoretical result, which was extensively used as a reference in the previous literature for error propagation estimates, corresponds to the Southwell geometry with an odd-number grid size. For curvature-based wavefront sensing, a concept for a differential Shack-Hartmann (DSH) curvature sensor is proposed. This curvature sensor is derived from the basic Shack-Hartmann sensor with the collimated beam split into three output channels, along each of which a lenslet array is located. Three Hartmann grid arrays are generated by three lenslet arrays. Two of the lenslet arrays shear in two perpendicular directions relative to the third one. By quantitatively comparing the Shack-Hartmann grid coordinates of the three channels, the differentials of the wavefront slope at each Shack-Hartmann grid point can be obtained, so the Laplacian curvatures and twist terms will be available.
The acquisition of the twist terms using a Hartmann-based sensor allows us to uniquely determine the principal curvatures and directions more accurately than prior methods. Measurement of local curvatures as opposed to slopes is unique because curvature is intrinsic to the wavefront under test, and it is an absolute as opposed to a relative measurement. A zonal least-squares-based wavefront estimation algorithm was developed to estimate the wavefront shape from the Laplacian curvature data, and validated. An implementation of the DSH curvature sensor is proposed and an experimental system for this implementation was initiated. The DSH curvature sensor shares the important features of both the Shack-Hartmann slope sensor and Roddier's curvature sensor. It is a two-dimensional parallel curvature sensor. Because it is a curvature sensor, it provides absolute measurements which are thus insensitive to vibrations, tip/tilts, and whole body movements. Because it is a two-dimensional sensor, it does not suffer from other sources of errors, such as scanning noise. Combined with sufficient sampling and a zonal wavefront estimation algorithm, both low and mid frequencies of the wavefront may be recovered. Notice that the DSH curvature sensor operates at the pupil of the system under test, therefore the difficulty associated with operation close to the caustic zone is avoided. Finally, the DSH-curvature-sensor-based wavefront estimation does not suffer from the 2π-ambiguity problem, so potentially both small and large aberrations may be measured.
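A generic zonal least-squares reconstruction from slope data, in the Southwell spirit discussed above, can be sketched as follows; this is a simplified square-aperture illustration, not the dissertation's irregular-pupil or curvature-based algorithms, and the toy surface is an assumption.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def southwell_reconstruct(gx, gy, h=1.0):
    """Zonal least-squares wavefront reconstruction on an n x n grid
    (Southwell-style): each equation ties a pair of neighbouring phase
    values to the average of their measured slopes."""
    n = gx.shape[0]
    A = lil_matrix((2 * n * (n - 1), n * n))
    b = np.empty(2 * n * (n - 1))
    r = 0
    for i in range(n):
        for j in range(n - 1):                       # x-direction pairs
            A[r, i * n + j + 1], A[r, i * n + j] = 1.0, -1.0
            b[r] = 0.5 * h * (gx[i, j] + gx[i, j + 1]); r += 1
    for i in range(n - 1):
        for j in range(n):                           # y-direction pairs
            A[r, (i + 1) * n + j], A[r, i * n + j] = 1.0, -1.0
            b[r] = 0.5 * h * (gy[i, j] + gy[i + 1, j]); r += 1
    w = lsqr(A.tocsr(), b)[0].reshape(n, n)
    return w - w.mean()                              # remove the free piston mode

# Toy check: slopes of a smooth quadratic surface, reconstructed up to piston.
n = 16
xx, yy = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float))
w_true = 0.05 * (xx - n / 2) ** 2 + 0.02 * xx * yy
gx, gy = np.gradient(w_true, axis=1), np.gradient(w_true, axis=0)
w_hat = southwell_reconstruct(gx, gy)
print(np.abs(w_hat - (w_true - w_true.mean())).max())   # small residual expected
```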
Ph.D.
Optics and Photonics
Optics and Photonics
Optics PhD
APA, Harvard, Vancouver, ISO, and other styles
45

Hong, Je Hyeong. "Widening the basin of convergence for the bundle adjustment type of problems in computer vision." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275067.

Full text
Abstract:
Bundle adjustment is the process of simultaneously optimizing camera poses and 3D structure given image point tracks. In structure-from-motion, it is typically used as the final refinement step due to the nonlinearity of the problem, meaning that it requires sufficiently good initialization. Contrary to this belief, recent literature showed that useful solutions can be obtained even from arbitrary initialization for fixed-rank matrix factorization problems, including bundle adjustment with affine cameras. This property of a wide convergence basin of high-quality optima is desirable for any nonlinear optimization algorithm, since obtaining good initial values can often be non-trivial. The aim of this thesis is to find the key factor behind the success of these recent matrix factorization algorithms and explore the potential applicability of the findings to bundle adjustment, which is closely related to matrix factorization. The thesis begins by unifying a handful of matrix factorization algorithms and comparing similarities and differences between them. The theoretical analysis shows that the set of successful algorithms actually stems from the same root of the optimization method called variable projection (VarPro). The investigation then extends to address why VarPro outperforms the joint optimization technique, which is widely used in computer vision. This algorithmic comparison of these methods yields a larger unification, leading to a conclusion that VarPro benefits from an unequal trust region assumption between two matrix factors. The thesis then explores ways to incorporate VarPro into bundle adjustment problems using projective and perspective cameras. Unfortunately, the added nonlinearity causes a substantial decrease in the convergence basin of VarPro, and therefore a bootstrapping strategy is proposed to bypass this issue. Experimental results show that it is possible to yield feasible metric reconstructions and pose estimations from arbitrary initialization given relatively clean point tracks, taking one step towards initialization-free structure-from-motion.
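The variable projection (VarPro) idea named in this abstract is simple to state: for a fixed factor U, the other factor V has a closed-form least-squares solution, so the cost becomes a function of U alone. A minimal unweighted sketch follows (the thesis studies weighted/missing-data variants and second-order solvers; the toy data and optimizer here are assumptions).

```python
import numpy as np
from scipy.optimize import minimize

def varpro_cost(u_flat, M, r):
    """VarPro cost for min_{U,V} ||M - U V^T||_F^2: V is eliminated in closed
    form via linear least squares, leaving a reduced problem in U alone."""
    m, n = M.shape
    U = u_flat.reshape(m, r)
    Vt, *_ = np.linalg.lstsq(U, M, rcond=None)   # optimal V^T given U
    return 0.5 * np.linalg.norm(M - U @ Vt) ** 2

# Toy usage: recover a rank-2 matrix starting from a random initialization.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
res = minimize(varpro_cost, rng.standard_normal(20 * 2), args=(M, 2),
               method="L-BFGS-B")
print(res.fun)   # typically near zero for exact rank-2 data
```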
APA, Harvard, Vancouver, ISO, and other styles
46

MARAKBI, ZAKARIA. "Mean-Variance Portfolio Optimization : Challenging the role of traditional covariance estimation." Thesis, KTH, Industriell Marknadsföring och Entreprenörskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-199185.

Full text
Abstract:
Ever since its introduction in 1952, the Mean-Variance (MV) portfolio selection theory has remained a centerpiece within the realm of efficient asset allocation. However, in scientific circles, the theory has stirred controversy. A strand of criticism has emerged that points to the phenomenon that Mean-Variance Optimization suffers from the severe drawback of estimation errors contained in the expected return vector and the covariance matrix, resulting in portfolios that may significantly deviate from the true optimal portfolio. While a substantial amount of effort has been devoted to estimating the expected return vector in this context, much less is written about the covariance matrix input. In recent times, however, research that points to the importance of the covariance matrix in MV optimization has emerged. As a result, there has been a growing interest in whether MV optimization can be enhanced by improving the estimate of the covariance matrix. Hence, this thesis was set forth with the purpose of investigating whether financial practitioners and institutions can allocate portfolios consisting of assets in a more efficient manner by changing the covariance matrix input in mean-variance optimization. In the quest of achieving this purpose, an out-of-sample analysis of MV optimized portfolios was performed, where the performance of five prominent covariance matrix estimators was compared, holding all other things equal in the MV optimization. The optimization was performed under realistic investment constraints, taking incurred transaction costs into account, and for an investment asset universe ranging from equity to bonds. The empirical findings in this study suggest one dominant estimator: the covariance matrix estimator implied by the Gerber Statistic (GS). Specifically, by using this covariance matrix estimator in lieu of the traditional sample covariance matrix, the MV optimization rendered more efficient portfolios in terms of higher Sharpe ratios, higher risk-adjusted returns and lower maximum drawdowns. The outperformance was most pronounced during recessionary times. This suggests that an investor who employs traditional MVO in quantitative asset allocation can improve their asset-picking abilities by changing to the, in theory, more robust GS covariance matrix estimator in times of volatile financial markets.
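For readers unfamiliar with the Gerber Statistic, the sketch below implements one published variant as we understand it (threshold co-exceedance counting rescaled by volatilities); the exact normalization used in the thesis may differ, so treat all details as assumptions.

```python
import numpy as np

def gerber_covariance(R, c=0.5):
    """Comovement matrix from the Gerber statistic: count joint threshold
    co-exceedances (concordant minus discordant) instead of products of
    deviations, then rescale by sample volatilities. R: (T, n) returns."""
    T, n = R.shape
    s = R.std(axis=0, ddof=1)
    up = (R >= c * s).astype(float)      # upper exceedances per asset
    dn = (R <= -c * s).astype(float)     # lower exceedances per asset
    conc = up.T @ up + dn.T @ dn         # same-direction co-exceedances
    disc = up.T @ dn + dn.T @ up         # opposite-direction co-exceedances
    neutral = 1.0 - up - dn              # valid since up/dn are exclusive for c > 0
    both_neutral = neutral.T @ neutral
    G = (conc - disc) / np.maximum(T - both_neutral, 1.0)
    return np.outer(s, s) * G            # diagonal reduces to the sample variances
```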
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, I.-Chen. "Improved Methods and Selecting Classification Types for Time-Dependent Covariates in the Marginal Analysis of Longitudinal Data." UKnowledge, 2018. https://uknowledge.uky.edu/epb_etds/19.

Full text
Abstract:
Generalized estimating equations (GEE) are popularly utilized for the marginal analysis of longitudinal data. In order to obtain consistent regression parameter estimates, these estimating equations must be unbiased. However, when certain types of time-dependent covariates are present, these equations can be biased unless an independence working correlation structure is employed. Moreover, in this case regression parameter estimation can be very inefficient because not all valid moment conditions are incorporated within the corresponding estimating equations. Therefore, approaches using the generalized method of moments or quadratic inference functions have been proposed for utilizing all valid moment conditions. However, we have found that such methods will not always provide valid inference and can also be improved upon in terms of finite-sample regression parameter estimation. Therefore, we propose a modified GEE approach and a selection method that will both ensure the validity of inference and improve regression parameter estimation. In addition, these modified approaches assume the data analyst knows the type of time-dependent covariate, although this likely is not the case in practice. Whereas hypothesis testing has been used to determine covariate type, we propose a novel strategy to select a working covariate type in order to avoid the potentially high type II error rates of these hypothesis testing procedures. Parameter estimates resulting from our proposed method are consistent and have overall improved mean squared error relative to hypothesis testing approaches. Finally, for some real-world examples the use of mean regression models may be sensitive to skewness and outliers in the data. Therefore, we extend our approaches from their use with marginal quantile regression to modeling the conditional quantiles of the response variable. Existing and proposed methods are compared in simulation studies and application examples.
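As a minimal illustration of the baseline this dissertation builds on (GEE with an independence working structure, the safe default when covariates are time-dependent), here is a statsmodels sketch on simulated data; all data-generating choices are assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Simulated longitudinal toy data: 200 subjects, 4 repeated measurements each.
rng = np.random.default_rng(1)
n_subj, n_time = 200, 4
groups = np.repeat(np.arange(n_subj), n_time)
x = rng.standard_normal(n_subj * n_time)      # a (possibly time-dependent) covariate
y = 0.5 * x + rng.standard_normal(n_subj * n_time)

# Independence working correlation keeps the estimating equations unbiased.
model = sm.GEE(y, sm.add_constant(x), groups=groups,
               family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Independence())
result = model.fit()
print(result.params)   # intercept near 0, slope near 0.5
```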
APA, Harvard, Vancouver, ISO, and other styles
48

Coulibaly, Massita. "L'autosuffisance alimentaire et la politique rizicole en Côte d'Ivoire." Clermont-Ferrand 1, 1996. http://www.theses.fr/1996CLF10179.

Full text
Abstract:
Cette étude a pour objet d’analyser le bien-fondé de la recherche de l’autosuffisance alimentaire entreprise en Côte d’Ivoire autour du développement de la filière riz, et d’examiner les capacités du pays à atteindre cet objectif après la dévaluation du FCFA. L’intérêt de ce travail pour le riz se justifie par la croissance de la part du riz dans la consommation des ivoiriens et le déficit constant de cette filière en dépit des sommes importantes engagées dans le développement de la production locale. Deux grands axes d’analyse sont évoqués. Le premier axe est l’analyse des causes de l’échec des politiques mises en place dans la filière avant la dévaluation. Cet échec s’est traduit par la croissance de la part des importations dans la consommation des ivoiriens. Par l’analyse des fondements des politiques alimentaires, nous vous proposons dans le premier chapitre d’examiner la cohérence des politiques mises en place dans le secteur. Cette analyse nous permet d’émettre l’hypothèse que la sécurité alimentaire, plus que l’autosuffisance a été le fondement des politiques alimentaires. Il y a donc eu une contradiction entre les objectifs d’autosuffisance affichés et les mesures de développement de la filière. Un examen de l’évolution du marché du riz par rapport aux différentes politiques, dans le chapitre 2, confirme bien cette contradiction et présente les réformes entreprises pour la production en vue de réaliser l’autosuffisance dans la filière. Le second axe porte sur l’efficacité des nouvelles mesures entreprises après la dévaluation. Dans cette optique, nous avons calculé les indicateurs de performance de la production de riz, à l’aide de la Matrice d’Analyse des Politiques, dans le chapitre 3. Ces indicateurs, calculés avant et après la dévaluation mettent en évidence le regain de compétitivité des unités de production locale. Cette analyse a aussi permis de souligner l’importance de l’approche par systèmes de culture et apporte des informations nécessaires à l’identification des modes de production à promouvoir. Nous avons également estimé la réponse des paysans aux différentes incitations contenues dans ces mesures dont la principale est le relèvement des prix aux producteurs. Cette analyse économétrique, a été menée à l’aide d’un modèle dynamique d’offre agricole intégrant un Mécanisme de Correction d’Erreurs. Elle révèle que l’offre du riz dépend positivement des prix relatifs des différentes cultures (riz, coton et maïs), elle dépend négativement du prix des facteurs de production (en l’occurrence la main d’œuvre) et du crédit agricole. La réaction des riziculteurs est donc en partie dictée par la rentabilité de la culture du riz et des conditions d’accès aux intrants agricoles. Ce résultat souligne de nouveau la nécessité de mettre en place un système de financement des activités agricoles après la dissolution de la BNDA
Our goal in this study is to analyze the economic foundations of the rice self-sufficiency objective (which ranks high on the government's agenda) and to assess the attainability of this goal after the devaluation of the CFA franc. Two main themes are addressed. The first focuses on the array of policies implemented before the devaluation. The causes of the poor results obtained (growth in rice imports) seem to stem from inconsistencies between the declared objectives and the policies implemented. Chapter one shows that food security, rather than self-sufficiency, was the guiding force behind the reforms. This result is confirmed when we analyze, in chapter 2, the evolution of the rice market; this analysis also presents the reforms undertaken. In the second part, we analyze the effectiveness and usefulness of the policies implemented after the devaluation. For that purpose, performance indicators for rice production are calculated using the Policy Analysis Matrix (PAM). These indicators, computed for the pre-devaluation as well as the post-devaluation period, show increasing competitiveness of the local production units. We finally estimate the supply response behavior of farmers using a dynamic supply model with an Error Correction Mechanism (ECM). This econometric analysis shows that the supply of rice is positively correlated with relative prices (of rice, cotton and maize) and negatively with the price of inputs (especially labor) and with credit. The latter result calls for an adequate rural financing system after the bankruptcy of the BNDA.
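To make the estimation strategy concrete, here is a small, hedged Python sketch (synthetic data; not the thesis's actual model, variables, or estimates) of the two-step Engle-Granger form of an error-correction model for crop supply, assuming statsmodels is available:

# Sketch: Engle-Granger two-step ECM on synthetic supply/price series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 60
price = np.cumsum(rng.normal(size=T))                  # log relative producer price
supply = 0.8 * price + rng.normal(scale=0.5, size=T)   # log crop supply

# Step 1: long-run (cointegrating) relation.
long_run = sm.OLS(supply, sm.add_constant(price)).fit()
ecm_term = long_run.resid[:-1]                         # lagged disequilibrium

# Step 2: short-run dynamics with the error-correction term.
d_supply, d_price = np.diff(supply), np.diff(price)
X = sm.add_constant(np.column_stack([d_price, ecm_term]))
short_run = sm.OLS(d_supply, X).fit()
print(short_run.params)   # the error-correction coefficient should be negative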
APA, Harvard, Vancouver, ISO, and other styles
49

Theveny, Philippe. "Numerical Quality and High Performance In Interval Linear Algebra on Multi-Core Processors." Thesis, Lyon, École normale supérieure, 2014. http://www.theses.fr/2014ENSL0941/document.

Full text
Abstract:
The aim of this work is to compare algorithms for the multiplication of matrices with interval coefficients, together with their implementations. The first axis is the measurement of numerical accuracy. Previous error analyses are limited to establishing a bound on the overestimation of the radius of the result, neglecting the errors due to floating-point computation. After examining the different possibilities for quantifying the approximation error between two intervals, the rounding error is incorporated into the global error. Using random data sets, the experimental dispersion of the global error sheds light on the relative importance of the different errors (method and rounding) as a function of several factors: the value and homogeneity of the relative accuracies of the inputs, the matrix dimensions, and the working precision. This approach leads to a new algorithm that is less expensive and just as accurate in certain well-identified cases. The second axis is the exploitation of the parallelism of the operations. Previous implementations reduce to products of floating-point matrices. To circumvent the limitations of such an approach with respect to the validity of the result and to scalability, I propose a block implementation realized with OpenMP threads that execute computational kernels using vector instructions. The analysis of execution times on a machine with four octo-core processors shows that the computational costs are of the same order of magnitude for interval and floating-point matrices of the same dimension, and that the block implementation scales better than an implementation based on several calls to BLAS routines.
This work aims at determining suitable scopes for several algorithms of interval matrix multiplication. First, we quantify the numerical quality. Former error analyses of interval matrix products establish bounds on the radius overestimation while neglecting the roundoff error. We discuss here several possible measures for interval approximations. We then bound the roundoff error and compare this bound experimentally with the global error distribution on several random data sets. This approach highlights the relative importance of the roundoff and arithmetic errors depending on the value and homogeneity of the relative accuracies of the inputs, on the matrix dimension, and on the working precision. It also leads to a new algorithm that is cheaper yet as accurate as previous ones under well-identified conditions. Second, we exploit the parallelism of linear algebra. Previous implementations use calls to BLAS routines on numerical matrices. We show that this may lead to incorrect interval results and also restricts the scalability of the performance as the core count increases. To overcome these problems, we implement a blocking version with OpenMP threads executing block kernels with vector instructions. The timings on a machine with four octo-core processors show that this implementation is more scalable than the BLAS-based one and that the costs of numerical and interval matrix products are comparable.
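As a point of reference for the algorithms compared in this thesis, here is a hedged Python sketch of the classical midpoint-radius interval matrix product (Rump's formula). Note that it deliberately ignores floating-point rounding error, the very term whose contribution the thesis quantifies, so a rigorous implementation would additionally inflate the radius by a rounding bound:

# Sketch: midpoint-radius interval matrix product; rounding error ignored.
import numpy as np

def interval_matmul(mid_a, rad_a, mid_b, rad_b):
    """Enclose A*B for interval matrices given as (midpoint, radius) pairs."""
    mid_c = mid_a @ mid_b
    # Radius enclosure: |mid(A)| rad(B) + rad(A) (|mid(B)| + rad(B)).
    rad_c = np.abs(mid_a) @ rad_b + rad_a @ (np.abs(mid_b) + rad_b)
    return mid_c, rad_c

# Tiny usage example with 1% relative radii.
mid = np.array([[1.0, 2.0], [3.0, 4.0]])
rad = 0.01 * np.abs(mid)
print(interval_matmul(mid, rad, mid, rad))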
APA, Harvard, Vancouver, ISO, and other styles
50

Moalla, Borhane. "Approximants de Padé, polynômes orthogonaux (cas matriciel)." Rouen, 1995. http://www.theses.fr/1995ROUES052.

Full text
Abstract:
This work is devoted to Padé approximants. We begin with an improvement of the computation of the coefficients of orthogonal polynomials with respect to an arbitrary linear functional, using the CESTAC method of J. Vignes. We extend the notion of two-point Padé approximants from formal series to series of functions. We also extend the method of C. Brezinski for estimating the error of one-point Padé approximants in the normal case to the non-normal case and to two-point Padé approximants. We study the stability and convergence of Gauss quadrature formulas for a polynomial weight function of degree at most 2. Finally, rectangular matrix Padé approximants whose generating polynomials have square matrix coefficients are defined, and recurrence relations satisfied by these polynomials are established. We obtain a matrix QD algorithm, generalize the Shohat-Favard theorem, and extend Kronrod's procedure for estimating the error of these approximants.
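As a small illustration of the basic object studied here (standard SciPy tooling, not the thesis's own algorithms), the following Python snippet computes a [2/2] Padé approximant of exp(x) from its Taylor coefficients and compares it with the exact value:

# Sketch: a [2/2] Pade approximant of exp(x) via SciPy.
import numpy as np
from scipy.interpolate import pade

taylor = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]   # exp(x) through x**4
p, q = pade(taylor, 2)            # numerator p and denominator q (np.poly1d)
x = 0.5
print(p(x) / q(x), np.exp(x))     # both close to 1.6487...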
APA, Harvard, Vancouver, ISO, and other styles
