Dissertations / Theses on the topic 'Backward error'

Consult the dissertations and theses listed below for your research on the topic 'Backward error.'

1

Cottrell, David. "Symplectic integration of simple collisions: a backward error analysis." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81322.

Full text
Abstract:
Molecular Dynamics simulations often involve the numerical integration of pairwise particle interactions with a constant step-size method. Of primary concern in these simulations is the introduction of error in velocity statistics. We consider the simple example of the symplectic Euler method applied to two-particle collisions in one dimension governed by a linear restoring force and use backward error analysis to predict these errors. For nearly all choices of system and method parameters, the post-collision energy is not conserved and depends upon the initial conditions of the particles and the step size of the method. The analysis of individual collisions is extended to predict energy growth in systems of particles in one dimension.
APA, Harvard, Vancouver, ISO, and other styles
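To make the setting of this abstract concrete, the sketch below integrates a single head-on collision of two particles in one dimension, coupled by a linear restoring force while they overlap, with the symplectic Euler method, and reports the post-collision energy error for several step sizes. It is a minimal illustration rather than code from the thesis; the force constant, interaction range and initial data are arbitrary choices.

```python
import numpy as np

def collide(h, k=100.0, d=1.0, m=1.0, t_end=6.0):
    """Symplectic Euler integration of one head-on collision of two particles
    in 1D, interacting through a linear restoring force while they overlap."""
    q1, q2 = -2.0, 2.0          # initial positions (illustrative)
    p1, p2 = 1.0, -1.0          # initial momenta: the particles approach each other

    def overlap(q1, q2):
        return max(d - (q2 - q1), 0.0)

    def energy(q1, q2, p1, p2):
        return (p1**2 + p2**2) / (2.0 * m) + 0.5 * k * overlap(q1, q2)**2

    e0 = energy(q1, q2, p1, p2)
    for _ in range(int(round(t_end / h))):
        s = overlap(q1, q2)
        f1, f2 = -k * s, k * s                        # linear restoring force during overlap
        p1, p2 = p1 + h * f1, p2 + h * f2             # symplectic Euler: momenta first ...
        q1, q2 = q1 + h * p1 / m, q2 + h * p2 / m     # ... then positions, with the new momenta
    return energy(q1, q2, p1, p2) - e0                # post-collision energy error

for h in (0.05, 0.02, 0.01, 0.005):
    print(f"h = {h:5.3f}   post-collision energy error = {collide(h):+.3e}")
```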
2

Zivcovich, Franco. "Backward error accurate methods for computing the matrix exponential and its action." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/250078.

Full text
Abstract:
The theory of partial differential equations constitutes today one of the most important topics of scientific understanding. A standard approach for solving a time-dependent partial differential equation consists in discretizing the spatial variables by finite differences or finite elements. This results in a huge system of (stiff) ordinary differential equations that has to be integrated in time. Exponential integrators constitute an interesting class of numerical methods for the time integration of stiff systems of differential equations. Their efficient implementation heavily relies on the fast computation of the action of certain matrix functions; among those, the matrix exponential is the most prominent one. In this manuscript, we go through the steps that led to the development of backward error accurate routines for computing the action of the matrix exponential.
APA, Harvard, Vancouver, ISO, and other styles
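As a rough illustration of the truncated-Taylor-plus-scaling idea behind such routines, the sketch below approximates exp(A) b by splitting the exponential into substeps and truncating each Taylor series once its terms fall below a tolerance. The routines developed in the thesis choose the number of substeps and the series degree from backward error bounds; here the substep count is a crude guess based on the 1-norm of A, and all parameters are illustrative.

```python
import numpy as np
from scipy.linalg import expm   # used only to check the sketch against a reference

def expmv_taylor(A, b, tol=1e-12, s=None):
    """Approximate exp(A) @ b with s substeps of a truncated Taylor series."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    if s is None:
        s = max(1, int(np.ceil(np.linalg.norm(A, 1))))   # crude substep count
    f = b.copy()
    for _ in range(s):
        term = f.copy()
        total = f.copy()
        for k in range(1, 60):                  # Taylor series of exp(A/s) acting on f
            term = (A @ term) / (s * k)
            total += term
            if np.linalg.norm(term, np.inf) <= tol * np.linalg.norm(total, np.inf):
                break
        f = total
    return f

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) / 5.0
b = rng.standard_normal(50)
print(np.linalg.norm(expmv_taylor(A, b) - expm(A) @ b))   # small residual vs. the reference
```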
3

Moan, Per Christian. "On backward error analysis and Nekhoroshev stability in the numerical analysis of conservative systems of ODEs." Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620431.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tantardini, F. "QUASI-OPTIMALITY IN THE BACKWARD EULER-GALERKIN METHOD FOR LINEAR PARABOLIC PROBLEMS." Doctoral thesis, Università degli Studi di Milano, 2014. http://hdl.handle.net/2434/229462.

Full text
Abstract:
We analyse the backward Euler-Galerkin method for linear parabolic problems, looking for quasi-optimality results in the sense of Céa's Lemma. We cast the problem into the framework given by the inf-sup theory, and we analyse the spatial discretization, the discretization in time and the topic of varying the spatial discretization separately. Concerning the spatial discretization, we prove that the H1-stability of the L2-projection is also a necessary condition for quasi-optimality, both in the H1(H-1)∩L2(H1)-norm and in the L2(H1)-norm. Concerning the discretization in time, we prove that the error in a norm that mimics the H1(H-1)∩L2(H1)-norm is equivalent to the sum of the best errors with piecewise constants for the exact solution and its time derivative, if the partition is locally quasi-uniform. Turning to the topic of varying the spatial discretization, we provide a bound for the error that includes the best error and an additional term, which vanishes if there are no modifications of the spatial discretization and which is consistent with the example of non-convergence in Dupont '82. We combine these elements in an analysis of the backward Euler-Galerkin method and derive error estimates in the case where the spatial discretization is based on finite elements.
APA, Harvard, Vancouver, ISO, and other styles
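For readers unfamiliar with the method under analysis, the sketch below applies the backward Euler-Galerkin method to the one-dimensional heat equation with piecewise-linear finite elements and compares the result with the exact solution. It only illustrates the scheme itself; the mesh size, time step and initial data are arbitrary, and none of the quasi-optimality machinery of the thesis appears here.

```python
import numpy as np

def backward_euler_heat(n=50, dt=1e-3, steps=200):
    """Backward Euler-Galerkin sketch for u_t = u_xx on (0,1), u(0)=u(1)=0,
    with P1 finite elements on a uniform mesh and initial data sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)                 # interior nodes
    main = np.full(n, 4.0 * h / 6.0)               # P1 mass matrix (tridiagonal)
    off = np.full(n - 1, h / 6.0)
    M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    K = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h        # P1 stiffness matrix
    u = np.sin(np.pi * x)                          # initial data (nodal interpolant)
    S = M + dt * K                                 # backward Euler system matrix
    for _ in range(steps):
        u = np.linalg.solve(S, M @ u)              # (M + dt K) u^{n+1} = M u^n
    exact = np.exp(-np.pi**2 * dt * steps) * np.sin(np.pi * x)
    return np.max(np.abs(u - exact))               # nodal error against the exact solution

print(backward_euler_heat())
```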
5

Volz, Claudius. "Concealment of Video Transmission Packet Losses Based on Advanced Motion Prediction." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1771.

Full text
Abstract:

Recent algorithms for video coding achieve a high-quality transmission at moderate bit rates. On the other hand, those coders are very sensitive to transmission errors. Many research projects focus on methods to conceal such errors in the decoded video sequence.

Motion compensated prediction is commonly used in video coding to achieve a high compression ratio. This thesis proposes an algorithm which uses the motion compensated prediction of a given video coder to predict a sequence of several complete frames, based on the last correctly decoded images, during a transmission interruption. The proposed algorithm is evaluated on a video coder which uses a dense motion field for motion compensation.

A drawback of predicting lost fields is the perceived discontinuity when the decoder switches back from the prediction to a normal mode of operation. Various approaches to reduce this discontinuity are investigated.

APA, Harvard, Vancouver, ISO, and other styles
6

Beuzeville, Theo. "Analyse inverse des erreurs des réseaux de neurones artificiels avec applications aux calculs en virgule flottante et aux attaques adverses." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP054.

Full text
Abstract:
The use of artificial intelligence, whose implementations are often based on artificial neural networks, is now becoming widespread across a wide variety of tasks. These deep learning models yield much better results than many of the specialized algorithms previously used and are therefore being deployed on a large scale. In this context of very rapid development, issues emerge related to the storage of these models, which are sometimes very deep and therefore comprise up to billions of parameters, as well as issues related to their computational performance, both in terms of accuracy and of time and energy costs. For all these reasons, the use of reduced precision is increasingly being considered. On the other hand, it has been noted that neural networks suffer from a lack of interpretability, given that they are often very deep models trained on vast amounts of data. Consequently, they are highly sensitive to small perturbations in the data they process. Adversarial attacks are an example: perturbations, often imperceptible to the human eye, constructed to deceive a neural network and make it fail on a so-called adversarial example. The aim of this thesis is therefore to provide tools to better understand, explain, and predict the sensitivity of artificial neural networks to various types of perturbations. To this end, we first extended to artificial neural networks some well-known concepts from numerical linear algebra, such as the condition number and the backward error. We established explicit formulas for these quantities and devised ways to compute them when no closed formula could be obtained. These quantities make it possible to better understand the impact of perturbations on a mathematical function or system, depending on which variables are perturbed. We then used this backward error analysis to show how to extend the principle of adversarial attacks to the case where not only the data processed by the networks but also their own parameters are perturbed. This provides a new perspective on the robustness of neural networks and makes it possible, for example, to better control the quantization of parameters so as to reduce the arithmetic precision used and hence ease their storage. We then refined this approach, obtained through backward error analysis, to develop attacks on network inputs comparable to the state of the art. Finally, we extended round-off error analysis, which for neural networks had so far been approached from a practical standpoint or verified by software, by providing a theoretical analysis based on existing work in numerical linear algebra. This analysis yields bounds on the forward and backward errors when floating-point arithmetic is used. These bounds both help ensure the proper functioning of trained neural networks and lead to recommendations on architectures and training methods to improve the robustness of neural networks.
APA, Harvard, Vancouver, ISO, and other styles
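The backward error notion that the thesis extends to neural networks has a classical prototype in numerical linear algebra: the smallest relative perturbation of the data that makes an approximate solution exact. The sketch below evaluates the normwise formula for a linear system (the Rigal-Gaches expression); the network case treated in the thesis perturbs weights and inputs rather than A and b.

```python
import numpy as np

def normwise_backward_error(A, x_hat, b):
    """Normwise backward error of x_hat as a solution of A x = b (Rigal-Gaches):
    the size of the smallest relative perturbation of (A, b) that makes x_hat exact."""
    r = b - A @ x_hat
    return np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x_hat)
                                + np.linalg.norm(b))

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
x_hat = np.linalg.solve(A, b)                 # computed in double precision
print(normwise_backward_error(A, x_hat, b))   # roughly unit roundoff for a stable solve
```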
7

Relton, Samuel. "Algorithms for matrix functions and their Fréchet derivatives and condition numbers." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/algorithms-for-matrix-functions-and-their-frechet-derivatives-and-condition-numbers(f20e8144-1aa0-45fb-9411-ddc0dc7c2c31).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Al-Mohy, Awad. "Algorithms for the matrix exponential and its Fréchet derivative." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/algorithms-for-the-matrix-exponential-and-its-frechet-derivative(4de9bdbd-6d79-4e43-814a-197668694b8e).html.

Full text
Abstract:
New algorithms for the matrix exponential and its Fréchet derivative are presented. First, we derive a new scaling and squaring algorithm (denoted $\mathrm{expm_{new}}$) for computing $e^A$, where $A$ is any square matrix, that mitigates the overscaling problem. The algorithm is built on the algorithm of Higham [SIAM J. Matrix Anal. Appl., 26(4):1179-1193, 2005] but improves on it by two key features. The first, specific to triangular matrices, is to compute the diagonal elements in the squaring phase as exponentials instead of powering them. The second is to base the backward error analysis that underlies the algorithm on members of the sequence $\{\|A^k\|^{1/k}\}$ instead of $\|A\|$. The terms $\|A^k\|^{1/k}$ are estimated without computing powers of $A$ by using a matrix 1-norm estimator. Second, a new algorithm is developed for computing the action of the matrix exponential on a matrix, $e^{tA}B$, where $A$ is an $n \times n$ matrix and $B$ is $n \times n_0$ with $n_0 \ll n$. The algorithm works for any $A$, its computational cost is dominated by the formation of products of $A$ with $n \times n_0$ matrices, and the only input parameter is a backward error tolerance. The algorithm can return a single matrix $e^{tA}B$ or a sequence $e^{t_kA}B$ on an equally spaced grid of points $t_k$. It uses the scaling part of the scaling and squaring method together with a truncated Taylor series approximation to the exponential. It determines the amount of scaling and the Taylor degree using the strategy of $\mathrm{expm_{new}}$. Preprocessing steps are used to reduce the cost of the algorithm. An important application of the algorithm is to exponential integrators for ordinary differential equations. It is shown that the sums of the form $\sum_{k=0}^p\varphi_k(A)u_k$ that arise in exponential integrators, where the $\varphi_k$ are related to the exponential function, can be expressed in terms of a single exponential of a matrix of dimension $n+p$ built by augmenting $A$ with additional rows and columns. Third, a general framework for simultaneously computing a matrix function, $f(A)$, and its Fréchet derivative in the direction $E$, $L_f(A,E)$, is established for a wide range of matrix functions. In particular, we extend the algorithm of Higham and $\mathrm{expm_{new}}$ to two algorithms that intertwine the evaluation of both $e^A$ and $L(A,E)$ at a cost about three times that for computing $e^A$ alone. These two extended algorithms are then adapted to algorithms that simultaneously calculate $e^A$ together with an estimate of its condition number. Finally, we show that $L_f(A,E)$, where $f$ is a real-valued matrix function and $A$ and $E$ are real matrices, can be approximated by $\Im f(A+ihE)/h$ for some suitably small $h$. This approximation generalizes the complex step approximation known in the scalar case, and is proved to be of second order in $h$ for analytic functions $f$ and also for the matrix sign function. It is shown that it does not suffer the inherent cancellation that limits the accuracy of finite difference approximations in floating point arithmetic. However, cancellation does nevertheless vitiate the approximation when the underlying method for evaluating $f$ employs complex arithmetic. The complex step approximation is attractive when specialized methods for evaluating the Fréchet derivative are not available.
APA, Harvard, Vancouver, ISO, and other styles
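The complex step approximation described at the end of the abstract is easy to demonstrate. The sketch below applies it to the matrix exponential, using SciPy's expm as the underlying evaluator, and compares it with a forward difference; the test matrices and step sizes are illustrative, and this is not the implementation from the thesis.

```python
import numpy as np
from scipy.linalg import expm

def frechet_exp_complex_step(A, E, h=1e-20):
    """Complex-step approximation L_exp(A,E) ~ Im(expm(A + i*h*E)) / h.
    Valid for real A, E; free of subtractive cancellation, so h can be tiny."""
    return expm(A + 1j * h * E).imag / h

def frechet_exp_forward_diff(A, E, h=1e-8):
    """Forward-difference approximation, limited by cancellation in floating point."""
    return (expm(A + h * E) - expm(A)) / h

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8))
E = rng.standard_normal((8, 8))
L_cs = frechet_exp_complex_step(A, E)
L_fd = frechet_exp_forward_diff(A, E)
# The two agree to several digits; the complex-step result is the more accurate one.
print(np.linalg.norm(L_cs - L_fd) / np.linalg.norm(L_cs))
```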
9

Kuo, Hui-Ying. "Comparison of temporal processing and motion perception in emmetropes and myopes." Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/31905/1/Hui-Ying_Kuo_Thesis.pdf.

Full text
Abstract:
While spatial determinants of emmetropization have been examined extensively in animal models and spatial processing of human myopes has also been studied, there have been few studies investigating temporal aspects of emmetropization and temporal processing in human myopia. The influence of temporal light modulation on eye growth and refractive compensation has been observed in animal models and there is evidence of temporal visual processing deficits in individuals with high myopia or other pathologies. Given this, the aims of this work were to examine the relationships between myopia (i.e. degree of myopia and progression status) and temporal visual performance and to consider any temporal processing deficits in terms of the parallel retinocortical pathways. Three psychophysical studies investigating temporal processing performance were conducted in young adult myopes and non-myopes: (1) backward visual masking, (2) dot motion perception and (3) phantom contour. For each experiment there were approximately 30 young emmetropes, 30 low myopes (myopia less than 5 D) and 30 high myopes (5 to 12 D). In the backward visual masking experiment, myopes were also classified according to their progression status (30 stable myopes and 30 progressing myopes). The first study was based on the observation that the visibility of a target is reduced by a second target, termed the mask, presented quickly after the first target. Myopes were more affected by the mask when the task was biased towards the magnocellular pathway; myopes had a 25% mean reduction in performance compared with emmetropes. However, there was no difference in the effect of the mask when the task was biased towards the parvocellular system. For all test conditions, there was no significant correlation between backward visual masking task performance and either the degree of myopia or myopia progression status. The dot motion perception study measured detection thresholds for the minimum displacement of moving dots, the maximum displacement of moving dots and degree of motion coherence required to correctly determine the direction of motion. The visual processing of these tasks is dominated by the magnocellular pathway. Compared with emmetropes, high myopes had reduced ability to detect the minimum displacement of moving dots for stimuli presented at the fovea (20% higher mean threshold) and possibly at the inferior nasal retina. The minimum displacement threshold was significantly and positively correlated to myopia magnitude and axial length, and significantly and negatively correlated with retinal thickness for the inferior nasal retina. The performance of emmetropes and myopes for all the other dot motion perception tasks were similar. In the phantom contour study, the highest temporal frequency of the flickering phantom pattern at which the contour was visible was determined. Myopes had significantly lower flicker detection limits (21.8 ± 7.1 Hz) than emmetropes (25.6 ± 8.8 Hz) for tasks biased towards the magnocellular pathway for both high (99%) and low (5%) contrast stimuli. There was no difference in flicker limits for a phantom contour task biased towards the parvocellular pathway. For all phantom contour tasks, there was no significant correlation between flicker detection thresholds and magnitude of myopia. Of the psychophysical temporal tasks studied here those primarily involving processing by the magnocellular pathway revealed differences in performance of the refractive error groups. 
While there are a number of interpretations for this data, this suggests that there may be a temporal processing deficit in some myopes that is selective for the magnocellular system. The minimum displacement dot motion perception task appears the most sensitive test, of those studied, for investigating changes in visual temporal processing in myopia. Data from the visual masking and phantom contour tasks suggest that the alterations to temporal processing occur at an early stage of myopia development. In addition, the link between increased minimum displacement threshold and decreasing retinal thickness suggests that there is a retinal component to the observed modifications in temporal processing.
APA, Harvard, Vancouver, ISO, and other styles
10

Aydin, Ayhan. "Geometric Integrators For Coupled Nonlinear Schrodinger Equation." PhD thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605773/index.pdf.

Full text
Abstract:
Multisymplectic integrators like Preissman and six-point schemes and a semi-explicit symplectic method are applied to the coupled nonlinear Schrödinger equations (CNLSE). Energy, momentum and additional conserved quantities are preserved by the multisymplectic integrators, which is shown using modified equations. The multisymplectic schemes are backward stable and non-dissipative. A semi-explicit method which is symplectic in the space variable and based on linear-nonlinear, even-odd splitting in time is derived. These methods are applied to the CNLSE with plane wave and soliton solutions for various combinations of the parameters of the equation. The numerical results confirm the excellent long time behavior of the conserved quantities and preservation of the shape of the soliton solutions in space and time.
APA, Harvard, Vancouver, ISO, and other styles
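As a point of reference for the linear-nonlinear splitting mentioned in the abstract, the sketch below applies a standard Fourier split-step (Strang) scheme to the scalar focusing nonlinear Schrödinger equation and checks the result against an exact standing soliton. It is not one of the multisymplectic or semi-explicit schemes analysed in the thesis, and the grid and step sizes are arbitrary.

```python
import numpy as np

def split_step_nls(dt=0.01, steps=500, n=256, L=40.0):
    """Strang (linear-nonlinear) splitting for i u_t + u_xx + 2|u|^2 u = 0
    with periodic boundary conditions and soliton initial data sech(x)."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)      # Fourier wavenumbers
    u = 1.0 / np.cosh(x)                              # soliton initial data
    lin = np.exp(-1j * k**2 * dt)                     # exact linear sub-flow over dt
    for _ in range(steps):
        u = u * np.exp(2j * np.abs(u)**2 * (dt / 2))  # half nonlinear step (exact)
        u = np.fft.ifft(lin * np.fft.fft(u))          # full linear step (exact in Fourier)
        u = u * np.exp(2j * np.abs(u)**2 * (dt / 2))  # half nonlinear step (exact)
    exact = np.exp(1j * dt * steps) / np.cosh(x)      # standing soliton solution
    return np.max(np.abs(u - exact))

print(split_step_nls())   # small error: the soliton shape and phase are well preserved
```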
11

Worrall, S. T. "Backwards compatible adaptive error resilience techniques for MPEG-4 over mobile networks." Thesis, University of Surrey, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365149.

Full text
Abstract:
Advances in wireless technology will soon provide sufficient capacity for the transmission of compressed video to and from mobile terminals. However, high compression ratios and the use of Variable Length Coding in standards such as H.263 and MPEG-4 make the encoded bitstreams particularly sensitive to errors. This thesis investigates methods for the error robust transmission of MPEG-4 coded video over mobile networks. It examines current and future mobile networks, and discusses the quality of service that they are expected to offer with respect to mobile multimedia. Also, the MPEG-4 systems and visual layer standards are briefly described. Real-time MPEG-4 encoder and decoder software has been developed and exploited in the work described here. The MPEG-4 software can send MPEG-4 data over TCP or over RTP. All of the standard MPEG-4 error resilience options are implemented in the software. The effectiveness of these options is demonstrated through the results of simulated transmission over a GPRS channel. MPEG-4 is separated into two different streams via exploitation of the data partitioning option. The two streams may then be transmitted over a mobile network using different bearer channels. The most sensitive data stream is sent using a bearer channel with a low bit error rate compared to the less sensitive data stream. This technique is shown to produce quality improvements. A technique for the insertion of user-defined data is outlined. Insertion of user-defined data is achieved while retaining backwards compatibility with existing standard MPEG-4 decoders. CRC codes are inserted using this scheme, to facilitate more accurate detection of errors. This error detection aids error concealment and results in a gain in decoded video quality after simulated transmission over a GPRS channel. Motion adaptive encoding is employed to increase the error robustness of the encoded bitstream. Video packet size and Intra block refresh rates are altered with first partition size, which is used as a guide to the amount of motion within a scene. Transmission of video using RTP is considered. In particular, a mathematical analysis is performed for two different packetisation schemes. One scheme encapsulates one video frame within one RTP packet, while the other scheme encapsulates a single video packet within a single RTP packet.
APA, Harvard, Vancouver, ISO, and other styles
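The CRC-based error detection mentioned in the abstract can be illustrated in a few lines: the sketch below appends a CRC-32 checksum to a payload and verifies it on receipt. It shows the principle only, not the backwards-compatible user-data insertion mechanism developed in the thesis.

```python
import zlib

payload = b"MPEG-4 video packet payload"                     # stand-in for packet data
packet = payload + zlib.crc32(payload).to_bytes(4, "big")     # append a 4-byte CRC-32

data, crc = packet[:-4], int.from_bytes(packet[-4:], "big")   # ... after transmission ...
print("error detected" if zlib.crc32(data) != crc else "packet OK")
```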
12

Freeman, Michael. "Hardware support of recovery blocks." Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341599.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Segura Ugalde, Esteban. "Computation of invariant pairs and matrix solvents." Thesis, Limoges, 2015. http://www.theses.fr/2015LIMO0045/document.

Full text
Abstract:
In this thesis, we study some symbolic-numeric aspects of the invariant pair problem for matrix polynomials. Invariant pairs extend the notion of eigenvalue-eigenvector pairs, providing a counterpart of invariant subspaces for the nonlinear case. They have applications in the numerical computation of several eigenvalues of a matrix polynomial; they also present an interest in the context of differential systems. Here, a contour integral formulation is applied to compute condition numbers and backward errors for invariant pairs. We then adapt the Sakurai-Sugiura moment method to the computation of invariant pairs, including some classes of problems that have multiple eigenvalues, and we analyze the behavior of the scalar and block versions of the method in the presence of different multiplicity patterns. Results obtained via direct approaches may need to be refined numerically using an iterative method: here we study and compare two variants of Newton's method applied to the invariant pair problem. The matrix solvent problem is closely related to invariant pairs. Therefore, we specialize our results on invariant pairs to the case of matrix solvents, thus obtaining formulations for the condition number and backward errors, and a moment-based computational approach. Furthermore, we investigate the relation between the matrix solvent problem and the triangularization of matrix polynomials.
APA, Harvard, Vancouver, ISO, and other styles
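To make the notion of a backward error for a matrix solvent concrete, the sketch below evaluates a residual-based normwise backward error of an approximate solvent of a quadratic matrix polynomial. The scaling in the denominator is one common convention rather than the contour-integral formulation derived in the thesis, and the test problem is constructed so that an exact solvent is known.

```python
import numpy as np

def solvent_backward_error(coeffs, S):
    """Residual-based normwise backward error of S as a solvent of
    P(X) = A0 + A1 X + A2 X^2 + ... (one common scaling choice)."""
    n = S.shape[0]
    residual = np.zeros_like(S)
    scale = 0.0
    Spow = np.eye(n)
    for Ai in coeffs:                       # accumulate P(S) and the normalisation
        residual = residual + Ai @ Spow
        scale += np.linalg.norm(Ai, 'fro') * np.linalg.norm(Spow, 2)
        Spow = Spow @ S
    return np.linalg.norm(residual, 'fro') / scale

rng = np.random.default_rng(3)
n = 6
S = rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n))
A1 = rng.standard_normal((n, n))
A0 = -(A1 @ S + A2 @ S @ S)                 # by construction S is an exact solvent
S_approx = S + 1e-8 * rng.standard_normal((n, n))
print(solvent_backward_error([A0, A1, A2], S))         # essentially zero (rounding error)
print(solvent_backward_error([A0, A1, A2], S_approx))  # small: reflects the 1e-8 perturbation
```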
14

Argyridou, Eleni. "Estimating errors in quantities of interest in the case of hyperelastic membrane deformation." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16205.

Full text
Abstract:
There are many mathematical and engineering methods, problems and experiments which make use of the finite element method. For any given use of the finite element method we get an approximate solution and we usually wish to have some indication of the accuracy in the approximation. In the case when the calculation is done to estimate a quantity of interest the indication of the accuracy is concerned with estimating the difference between the unknown exact value and the finite element approximation. With a means of estimating the error, this can sometimes be used to determine how to improve the accuracy by repeating the computation with a finer mesh. A large part of this thesis is concerned with a set-up of this type with the physical problem described in a weak form and with the error in the estimate of the quantity of interest given in terms of a function which solves a related dual problem. We consider this in the case of modelling the large deformation of thin incompressible isotropic hyperelastic sheets under pressure loading. We assume throughout that the thin sheet can be modelled as a membrane, which gives us a two dimensional description of a three dimensional deformation and this simplifies further to a one space dimensional description in the axisymmetric case when we use cylindrical polar coordinates. In the general case we consider the deformation under quasi-static conditions and in the axisymmetric case we consider both quasi-static conditions and dynamic conditions, which involves the full equations of motion, which gives three different problems. In all the three problems we describe how to get the finite element solution, we describe associated dual problems, we describe how to solve these dual problems and we consider using the dual solutions in error estimation. There is hence a common framework. The details however vary considerably and much of the thesis is in describing each case.
APA, Harvard, Vancouver, ISO, and other styles
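The duality argument behind error estimation in a quantity of interest has a simple linear-algebra prototype: if Au = b and the quantity of interest is J(u) = c^T u, then the solution z of the dual problem A^T z = c turns the residual of an approximation into the exact error in J. The sketch below shows this identity in the linear case; for the nonlinear hyperelastic membrane problems of the thesis the dual solution yields an estimate rather than an identity.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned "primal" problem A u = b
b = rng.standard_normal(n)
c = rng.standard_normal(n)                         # quantity of interest J(u) = c.T @ u

u = np.linalg.solve(A, b)                          # exact (reference) solution
u_h = u + 1e-6 * rng.standard_normal(n)            # stand-in for a Galerkin approximation

z = np.linalg.solve(A.T, c)                        # dual (adjoint) problem: A^T z = c
r = b - A @ u_h                                    # residual of the approximation
error_estimate = z @ r                             # duality identity: J(u) - J(u_h) = z.T r
print(error_estimate, c @ u - c @ u_h)             # the two numbers agree (linear case)
```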
15

Horsin, Romain. "Comportement en temps long d'équations de type Vlasov : études mathématiques et numériques." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S062/document.

Full text
Abstract:
This thesis concerns the long time behavior of solutions of certain Vlasov equations, mainly the Vlasov-HMF model. We are in particular interested in the celebrated phenomenon of Landau damping, proved mathematically in various frameworks for several Vlasov equations, such as the Vlasov-Poisson equation or the Vlasov-HMF model, and exhibiting certain analogies with the inviscid damping phenomenon for the 2D Euler equation. The results described in the document are the following. The first one is a Landau damping theorem for numerical solutions of the Vlasov-HMF model, constructed by means of time discretizations by splitting methods. We moreover prove the convergence of the schemes. The second result is a Landau damping theorem for solutions of the Vlasov-HMF model linearized around inhomogeneous stationary states. We also provide a large number of numerical simulations, which are designed to study the nonlinear case numerically and which seem to show new phenomena. The last result is the convergence of a scheme that discretizes the 2D Euler equation in time by means of a symplectic Crouch-Grossman integrator.
APA, Harvard, Vancouver, ISO, and other styles
16

Kopec, Marie. "Quelques contributions à l'analyse numérique d'équations stochastiques." Electronic Thesis or Diss., Rennes, École normale supérieure, 2014. http://www.theses.fr/2014ENSR0002.

Full text
Abstract:
This work presents some results about the behavior, in finite time and in long time, of numerical methods for stochastic equations. In a first part, we consider overdamped Langevin stochastic differential equations (SDEs) and Langevin SDEs. We show a weak backward error analysis result for their numerical approximations defined by implicit methods. In particular, we prove that the generator associated with the numerical solution coincides with the solution of a modified Kolmogorov equation up to high order terms with respect to the stepsize. This implies that every invariant measure of the numerical scheme is close to a modified invariant measure obtained by asymptotic expansion. Moreover, we prove that, up to negligible terms, the dynamics associated with the implicit scheme considered is exponentially mixing. In a second part, we study the long-time behavior of fully discretized semilinear SPDEs with additive space-time white noise, which admit a unique invariant probability measure μ. We focus on a discretization in time by a scheme of Euler type and on a finite element discretization in space, and we show that the averages of regular enough test functions with respect to the (possibly non-unique) invariant laws of the approximations are close to the corresponding quantity for μ. More precisely, we analyze the rate of convergence with respect to the different discretization parameters. Finally, we are concerned with semilinear SPDEs with additive space-time white noise whose nonlinear term is a polynomial function. We analyze the rate of weak convergence of a time discretization by an implicit splitting method.
APA, Harvard, Vancouver, ISO, and other styles
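A small experiment makes the first part of the abstract tangible: for the overdamped Langevin equation with a quadratic potential, the drift-implicit Euler scheme has a closed-form update, and its long-run variance differs from the exact invariant variance by an O(h) bias, precisely the kind of discrepancy that a weak backward error analysis (a modified invariant measure) quantifies. The potential, step size and sample size below are illustrative.

```python
import numpy as np

def implicit_langevin_variance(h=0.2, beta=1.0, steps=400_000, seed=5):
    """Drift-implicit Euler for dX = -V'(X) dt + sqrt(2/beta) dW with V(x) = x^2/2.
    The implicit update has the closed form X_{n+1} = (X_n + sqrt(2h/beta)*xi)/(1+h).
    Returns the empirical long-run variance, to be compared with the exact
    invariant variance 1/beta; the O(h) gap is what a modified invariant
    measure obtained by backward error analysis describes."""
    rng = np.random.default_rng(seed)
    x, acc, count = 0.0, 0.0, 0
    burn_in = steps // 10
    noise_scale = np.sqrt(2.0 * h / beta)
    for n in range(steps):
        x = (x + noise_scale * rng.standard_normal()) / (1.0 + h)
        if n >= burn_in:
            acc += x * x
            count += 1
    return acc / count

print(implicit_langevin_variance(), "vs exact invariant variance 1.0")
```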
17

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affects the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios shows that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
APA, Harvard, Vancouver, ISO, and other styles
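The core idea, replacing a causal filter by a non-causal smoother to obtain reference trajectories, can be demonstrated on a toy constant-velocity target. The sketch below runs a Kalman filter forward and a Rauch-Tung-Striebel smoother backward over the same simulated measurements and compares the position RMSE; the models and noise levels are illustrative and do not reflect the sensor setup of the thesis.

```python
import numpy as np

def kalman_vs_rts(dt=0.1, steps=200, q=0.5, r=1.0, seed=6):
    """Causal Kalman filter vs. non-causal RTS smoother on a 1D constant-velocity target."""
    rng = np.random.default_rng(seed)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])
    R = np.array([[r]])

    # Simulate the true trajectory and noisy position measurements.
    x_true = np.zeros((steps, 2))
    x = np.array([0.0, 1.0])
    Lq = np.linalg.cholesky(Q)
    z = np.zeros(steps)
    for k in range(steps):
        x = F @ x + Lq @ rng.standard_normal(2)
        x_true[k] = x
        z[k] = x[0] + np.sqrt(r) * rng.standard_normal()

    # Causal forward pass: Kalman filter.
    xf = np.zeros((steps, 2)); Pf = np.zeros((steps, 2, 2))
    xp = np.zeros((steps, 2)); Pp = np.zeros((steps, 2, 2))
    x_est, P = np.array([0.0, 0.0]), np.eye(2) * 10.0
    for k in range(steps):
        x_pred = F @ x_est
        P_pred = F @ P @ F.T + Q
        xp[k], Pp[k] = x_pred, P_pred
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_est = x_pred + (K @ (z[k] - H @ x_pred)).ravel()
        P = (np.eye(2) - K @ H) @ P_pred
        xf[k], Pf[k] = x_est, P

    # Non-causal backward pass: RTS smoother, which also uses future measurements.
    xs, Ps = xf.copy(), Pf.copy()
    for k in range(steps - 2, -1, -1):
        G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k + 1]) @ G.T

    filt_rmse = np.sqrt(np.mean((xf[:, 0] - x_true[:, 0]) ** 2))
    smooth_rmse = np.sqrt(np.mean((xs[:, 0] - x_true[:, 0]) ** 2))
    print(f"position RMSE, causal filter:       {filt_rmse:.3f}")
    print(f"position RMSE, non-causal smoother: {smooth_rmse:.3f}")   # smaller, as expected

kalman_vs_rts()
```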
18

Wang, Ping-chin, and 王昞清. "Enhanced Backward Error Concealment for H.264/AVC Videos on Error-Prone Networks." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/05006064494876470532.

Full text
Abstract:
Master's
National University of Tainan
Department of Computer Science and Information Engineering
100
Transmitting compressed video data is quite common over wireless or wired networks. In error-prone networks, packet loss can cause frames to be decoded incorrectly. With the new-generation standard H.264/AVC, such losses cause error propagation and make the quality of the decoded video degrade drastically. To address this, error concealment is applied at the decoder. However, traditional error concealment cannot give satisfactory results in some cases, such as whole-frame loss. That case is addressed by backward error concealment, which conceals the corrupted frame by utilizing the succeeding frame that is correctly received. Nevertheless, backward error concealment efficiently conceals most of the corrupted pixels except the unreferenced pixels. In this thesis, we propose an enhanced backward error concealment method. Based on the continuity of moving objects, the proposed method provides estimated motion vectors to conceal the unreferenced pixels. Experimental results show that the proposed method achieves better performance in terms of distortion.
APA, Harvard, Vancouver, ISO, and other styles
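A toy version of the backward error concealment setting described above: the lost frame is rebuilt by copying blocks of the next, correctly decoded frame back to the positions indicated by its motion vectors, and pixels that no block refers to are reported as holes, which is exactly the unreferenced-pixel problem the thesis addresses. The function name, block size and usage example are assumptions for illustration only.

```python
import numpy as np

def backward_conceal(next_frame, motion_vectors, block=16):
    """Toy backward error concealment: rebuild a lost frame from the next
    correctly decoded frame by following its block motion vectors back.

    motion_vectors[i, j] = (dy, dx) says block (i, j) of next_frame was predicted
    from the position shifted by (dy, dx) in the lost frame. Copying each block
    back to that position fills most pixels; pixels referenced by no block remain
    marked as holes: the 'unreferenced pixels' the thesis estimates motion for.
    """
    H, W = next_frame.shape
    lost = np.zeros((H, W), dtype=next_frame.dtype)
    covered = np.zeros((H, W), dtype=bool)
    for i in range(0, H, block):
        for j in range(0, W, block):
            dy, dx = motion_vectors[i // block, j // block]
            y0, x0 = i + dy, j + dx                    # block position in the lost frame
            if 0 <= y0 and y0 + block <= H and 0 <= x0 and x0 + block <= W:
                lost[y0:y0 + block, x0:x0 + block] = next_frame[i:i + block, j:j + block]
                covered[y0:y0 + block, x0:x0 + block] = True
    return lost, ~covered                              # concealed frame and hole mask

frame = np.tile(np.arange(64, dtype=np.uint8), (64, 1))   # synthetic 64x64 "next" frame
mvs = np.zeros((4, 4, 2), dtype=int)                       # zero motion: plain frame copy
concealed, holes = backward_conceal(frame, mvs)
print(holes.any())                                         # False: every pixel was referenced
```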
19

Tsai, Ming-Kuang, and 蔡茗光. "Synchronous Backward Error Tracking Algorithm in H.264 Video Coding." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/16606590106382645211.

Full text
Abstract:
Master's
National Central University
Graduate Institute of Communication Engineering
92
The most recent H.264 video coding standard utilizes complex predictions in both the temporal and spatial domains to achieve better performance than other standards. Such predictions, however, may cause serious error propagation effects in the presence of transmission errors. The objective of this work is therefore to develop a robust error resilient algorithm, named the Synchronous Backward Error Tracking (SBET) algorithm, to completely terminate error propagation. If the state of the encoder can be synchronized to that of the decoder, the error propagation effects can be entirely terminated. We therefore assume that a feedback channel is available and that the encoder can be aware of the decoder's error concealment by external means. Pixel-based Precise Backward Error Tracking (PBET) is utilized to track the error locations and propagate the concealment error of the erroneous frame to the corresponding areas, so as to reconstruct the state of the decoder in the encoder. Compared with the full re-encoding method, the proposed method only involves memory access and simple addition and multiplication operations for the error-contaminated pixels. Simulation results show that the rate-distortion performance of the proposed algorithm is always better than that of the conventional algorithms; SBET outperforms PBET by up to 1.21 dB under a 3% slice error rate for the QCIF Foreman sequence. In addition, without using forced INTRA refreshing, bursts in bit rate are avoided. In the future, if a better error concealment technique is utilized, better performance of SBET can also be expected.
APA, Harvard, Vancouver, ISO, and other styles