Dissertations / Theses on the topic 'Arithmétique en précision finie'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 40 dissertations / theses for your research on the topic 'Arithmétique en précision finie.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of each publication as a PDF and read its abstract online, whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Braconnier, Thierry. "Sur le calcul des valeurs propres en précision finie." Nancy 1, 1994. http://www.theses.fr/1994NAN10023.
Nativel, Fabrice. "Fiabilité numérique et précision finie : une méthode automatique de correction linéaire de l'erreur d'arrondi." La Réunion, 1998. http://elgebar.univ-reunion.fr/login?url=http://thesesenligne.univ.run/98_13_Nativel.pdf.
Gallois-Wong, Diane. "Formalisation en Coq des algorithmes de filtre numérique calculés en précision finie." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG016.
Digital filters have numerous applications, from telecommunications to aerospace. To be used in practice, a filter needs to be implemented using finite precision (floating- or fixed-point arithmetic). The resulting rounding errors may become especially problematic in embedded systems: tight time, space, and energy constraints mean that we often need to cut into the precision of computations in order to improve their efficiency. Moreover, digital filter programs are strongly iterative: rounding errors may propagate and accumulate through many successive iterations. As some of the application domains are critical, I study rounding errors in digital filter algorithms using formal methods to provide stronger guarantees. More specifically, I use Coq, a proof assistant that ensures the correctness of this numerical behavior analysis. I aim at providing certified error bounds on the difference between the outputs of an implemented filter (computed using finite precision) and of the original model filter (theoretically defined with exact operations). Another goal is to guarantee that no catastrophic behavior (such as unexpected overflows) will occur. Using Coq, I define linear time-invariant (LTI) digital filters in the time domain. I formalize a universal form called SIF: any LTI filter algorithm may be expressed as a SIF while retaining its numerical behavior. I then prove the error-filters theorem and the Worst-Case Peak Gain theorem. These two theorems allow us to analyze the numerical behavior of the filter described by a given SIF. This analysis also involves the sum-of-products algorithm used during the computation of the filter. Therefore, I formalize several sum-of-products algorithms that offer various trade-offs between output precision and computation speed, including a new algorithm whose output is correctly rounded to nearest. I also formalize modular overflows, and prove that one of the previous sum-of-products algorithms remains correct even when such overflows are taken into account.
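To make the accumulation effect concrete, here is a small illustration of my own (not taken from the thesis): the same first-order IIR filter run in binary32 and binary64 on identical input, with the gap between the two showing how rounding errors propagate through the recursion.

```c
#include <stdio.h>

/* The same first-order IIR filter, y[n] = a*y[n-1] + b*x[n], run in
   binary32 and binary64 on a constant input x[n] = 1. The gap between
   the two results shows rounding errors propagating through the
   recursion. */
int main(void) {
    const double a = 0.999, b = 0.001;
    float  yf = 0.0f;
    double yd = 0.0;
    for (int n = 0; n < 1000000; n++) {
        yf = (float)a * yf + (float)b;   /* binary32 arithmetic */
        yd = a * yd + b;                 /* binary64 arithmetic */
    }
    printf("binary32: %.9g\n", yf);
    printf("binary64: %.17g\n", yd);
    printf("gap:      %.3g\n", (double)yf - yd);
    return 0;
}
```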
Hilaire, Thibault. "Analyse et synthèse de l'implémentation de lois de contrôle-commande en précision finie : étude dans le cadre des applications automobiles sur calculateur embarqué." Nantes, 2006. http://www.theses.fr/2006NANT2055.
This thesis, carried out in an industrial context with PSA Peugeot-Citroën and IRCCyN, deals with the numerical aspects of the implementation of filters or controllers on embedded processors. Implementing a controller or a filter in a finite-word-length context may degrade the global performance, due to parametric errors and numerical noise. We focus here on the effect of the quantization of the embedded coefficients. There exist infinitely many equivalent realizations of a given controller, and these realizations are no longer equivalent in finite precision: classical state-space realizations, delta-realizations, direct forms, observer-state feedback, cascade or parallel decompositions, etc. After exhibiting these possibilities, this PhD thesis proposes a unifying framework, the implicit specialized state-space, that encompasses the usual realizations (and many others still unexplored). This specialized form, although macroscopic, is directly connected to the in-line computations to be performed and the coefficients involved. Various analysis tools, applied to our formalism, may be used to determine how the realization will be altered by the FWL process (with floating-point and fixed-point considerations). Then, according to these tools, optimal realizations with the best FWL robustness can be found via an optimization problem.
Ménard, Daniel. "Méthodologie de compilation d'algorithmes de traitement du signal pour les processeurs en virgule fixe sous contrainte de précision." Phd thesis, Université Rennes 1, 2002. http://tel.archives-ouvertes.fr/tel-00609159.
Gratton, Serge. "Outils théoriques d'analyse du calcul à précision finie." Toulouse, INPT, 1998. http://www.theses.fr/1998INPT015H.
Full textLouvet, Nicolas. "Algorithmes compensés en arithmétique flottante : précision, validation, performances." Perpignan, 2007. http://www.theses.fr/2007PERP0842.
Rounding errors may totally corrupt the result of a floating-point computation. How can we improve and validate the accuracy of a floating-point computation without large computing-time overheads? We consider two case studies: polynomial evaluation and triangular linear system solving. In both cases we use compensation of the rounding errors to improve the accuracy of the computed result. The contributions of this work are divided into three levels. 1) Improving the accuracy: we propose a compensated Horner scheme that computes a polynomial evaluation with the same accuracy as the classic Horner algorithm performed in twice the working precision. Generalizing this algorithm, we present another compensated version of the Horner scheme simulating K times the working precision (K > 1). We also show how to compensate the rounding errors generated by the substitution algorithm for triangular system solving. 2) Validating the computed result: we show how to validate the quality of the compensated polynomial evaluation. We propose a method to compute an a posteriori error bound together with the compensated result. This error bound is computed using only basic floating-point operations, which ensures the portability and efficiency of the method. 3) Performance of compensated algorithms: our computing-time measurements show the interest of compensated algorithms compared to other software solutions that provide the same output accuracy. We also explain the good practical performance of compensated algorithms through a detailed study of the instruction-level parallelism they contain.
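For concreteness, here is a C sketch of the compensated Horner scheme, following the published Graillat-Langlois-Louvet formulation; the fma-based two_prod and the test polynomial are my choices, not necessarily those of the thesis.

```c
#include <math.h>
#include <stdio.h>

/* Error-free transformations: a+b = s+e and a*b = p+e, exactly. */
static double two_sum(double a, double b, double *e) {
    double s = a + b;
    double z = s - a;
    *e = (a - (s - z)) + (b - z);
    return s;
}

static double two_prod(double a, double b, double *e) {
    double p = a * b;
    *e = fma(a, b, -p);          /* exact multiplication error */
    return p;
}

/* Classic Horner evaluation of p[0] + p[1]*x + ... + p[n]*x^n. */
static double horner(const double *p, int n, double x) {
    double r = p[n];
    for (int i = n - 1; i >= 0; i--) r = r * x + p[i];
    return r;
}

/* Compensated Horner: accumulates the exact rounding error of every
   product and sum in a second variable, then adds the correction. */
static double comp_horner(const double *p, int n, double x) {
    double r = p[n], c = 0.0;
    for (int i = n - 1; i >= 0; i--) {
        double ep, es;
        double ph = two_prod(r, x, &ep);
        r = two_sum(ph, p[i], &es);
        c = c * x + (ep + es);   /* Horner recursion on the errors */
    }
    return r + c;
}

int main(void) {
    /* (x - 1)^3 expanded: ill-conditioned near x = 1. */
    const double p[] = {-1.0, 3.0, -3.0, 1.0};
    double x = 1.0 + 1e-5;       /* exact result is about 1e-15 */
    printf("classic:     %.17g\n", horner(p, 3, x));
    printf("compensated: %.17g\n", comp_horner(p, 3, x));
    return 0;
}
```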
Gazeau, Ivan. "Programmation sûre en précision finie : Contrôler les erreurs et les fuites d'informations." Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00913469.
Vaccon, Tristan. "Précision p-adique." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S032/document.
p-adic numbers form a field of arithmetic analogous to the real numbers. The advent of arithmetic geometry during the last few decades has yielded many algorithms using these numbers. Such numbers can only be handled with finite precision. We design a method, which we call differential precision, to study the behaviour of precision in a p-adic context; it reduces the study to a first-order problem. We also study the question of which Gröbner bases can be computed over a p-adic number field.
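For readers unfamiliar with the setting, here are the standard notions involved, plus a schematic rendering (my paraphrase, not the thesis's statement) of the first-order reduction:

```latex
% Standard definitions: the p-adic valuation and absolute value,
\[
  v_p\!\left(p^{k}\,\tfrac{a}{b}\right) = k \quad (p \nmid a,\; p \nmid b),
  \qquad |x|_p = p^{-v_p(x)},
\]
% and a number "known at precision O(p^N)" is a coset x + p^N Z_p.
% Schematically, the differential method propagates such balls to first
% order through a map f (under suitable hypotheses on f'):
\[
  f\big(x + O(p^{N})\big) \;=\; f(x) + f'(x)\cdot O(p^{N}).
\]
```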
Ioualalen, Arnault. "Transformation de programmes synchrones pour l’optimisation de la précision numérique." Perpignan, 2012. http://www.theses.fr/2012PERP1108.
The certification of programs embedded in critical systems is still a challenge for both industry and the research community. The numerical accuracy of programs using floating-point arithmetic is one aspect of this issue which has been addressed by many techniques and tools. Nowadays we can statically infer a sound over-approximation of the rounding errors introduced by all the possible executions of a program. However, these techniques do not indicate how to correct, or even how to reduce, these errors. This work presents a new automatic technique to transform a synchronous program in order to reduce the rounding errors arising during its execution. We introduce a new intermediate representation of programs, called APEG, which is an under-approximation of the set of all the programs that are mathematically equivalent to the original one. This representation allows us to synthesize, in polynomial time, a program with better numerical accuracy, while being mathematically equivalent to the original one. In addition, we present many experimental results obtained with the tool we have developed, Sardana, which implements all of our contributions.
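A two-line example (mine, independent of the Sardana tool) shows why mathematically equivalent expressions are not numerically equivalent, which is the premise behind representations like APEG:

```c
#include <stdio.h>

/* Two parenthesizations of the same exact sum: associativity fails
   in binary64 because 1.0 is absorbed by 1e16. */
int main(void) {
    double big = 1e16;
    printf("(1 + big) - big = %g\n", (1.0 + big) - big); /* prints 0 */
    printf("1 + (big - big) = %g\n", 1.0 + (big - big)); /* prints 1 */
    return 0;
}
```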
Fiallos Aguilar, Mario. "Conception et simulation d'une machine massivement parallèle en grande précision." Lyon 1, 1994. http://www.theses.fr/1994LYO10135.
Gazeau, Ivan. "Safe Programming in Finite Precision: Controlling Errors and Information Leaks." Palaiseau, Ecole polytechnique, 2013. http://pastel.archives-ouvertes.fr/docs/00/91/34/69/PDF/main.pdf.
In this thesis, we analyze the problem of the finite representation of real numbers and we control the deviation due to this approximation. We focus on two particularly complex problems. First, we study how finite precision interacts with differentially private protocols. We present a methodology to study the perturbations of the probability distribution induced by the finite representation. We then show that a direct implementation of differential-privacy protocols is not safe, while, with the addition of some safeguards, differential privacy is preserved under finite precision, up to some quantified inherent leakage. Next, we propose a method to analyze programs that cannot be handled by a compositional analysis because of their "erratic" control flow. This method, based on rewrite-system techniques, allows us to use the proof of correctness of the program in the exact semantics to prove that the program is still safe in the finite representation.
Fousse, Laurent. "Intégration numérique avec erreur bornée en précision arbitraire." Phd thesis, Université Henri Poincaré - Nancy I, 2006. http://tel.archives-ouvertes.fr/tel-00477243.
Full textThévenoux, Laurent. "Synthèse de code avec compromis entre performance et précision en arithmétique flottante IEEE 754." Perpignan, 2014. http://www.theses.fr/2014PERP1176.
Numerical accuracy and execution time of programs using floating-point arithmetic are major challenges in many computer science applications. Improving these criteria is the subject of much research. However, improving accuracy tends to degrade performance, and conversely: techniques that improve numerical accuracy, such as expansions or compensation, increase the number of operations a program has to execute, and the more operations are added, the more performance decreases. This thesis presents a method of accuracy improvement that takes this negative effect on performance into account. We automate the error-free transformations of elementary floating-point operations because they offer a high potential for parallelism. Moreover, we propose transformation strategies that improve programs only partially, in order to control the impact on execution time more precisely. Trade-offs between accuracy and performance are then ensured by code synthesis. We also present several experimental results obtained with tools implementing all the contributions of this work.
Mezzarobba, Marc. "Autour de l'évaluation numérique des fonctions D-finies." Phd thesis, Ecole Polytechnique X, 2011. http://pastel.archives-ouvertes.fr/pastel-00663017.
Full textDefour, David. "Fonctions élémentaires : algorithmes et implémentations efficaces pour l'arrondi correct en double précision." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2003. http://tel.archives-ouvertes.fr/tel-00006022.
The objective of this dissertation is to exploit the bounds associated with each function in order to guarantee correctly rounded elementary functions in double precision for the four rounding modes. To this end, we implemented the evaluations in two phases: one that is fast and correct most of the time, based on the properties of IEEE double-precision arithmetic, and one that is correct all the time, built from multiple-precision operators. For this second phase, we developed a library of multiple-precision operators optimized for the precisions dictated by these bounds and for the characteristics of processors in 2003.
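Schematically, this two-phase (Ziv-style) strategy rests on the following standard fact (my rendering, not the thesis's exact statement):

```latex
% Let y approximate f(x) with |y - f(x)| <= eps, and let B be the set of
% rounding breakpoints (for round-to-nearest: midpoints of consecutive
% doubles). If no breakpoint lies within eps of f(x), then rounding the
% approximation already gives the correctly rounded result:
\[
  |y - f(x)| \le \varepsilon
  \;\;\wedge\;\;
  \min_{b \in \mathcal{B}} |f(x) - b| > \varepsilon
  \;\;\Longrightarrow\;\;
  \circ(y) = \circ\big(f(x)\big).
\]
% The "hardest-to-round" cases of f dictate how small eps (that is, how
% much working precision) the slow, always-correct phase needs.
```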
Hadri, Salah Eddine. "Contribution à la synthèse de structures optimales pour la réalisation des filtres et de régulateurs en précision finie." Vandoeuvre-les-Nancy, INPL, 1996. http://www.theses.fr/1996INPL129N.
Full textDe, Dinechin Florent. "Matériel et logiciel pour l'évaluation de fonctions numériques :précision, performance et validation." Habilitation à diriger des recherches, Université Claude Bernard - Lyon I, 2007. http://tel.archives-ouvertes.fr/tel-00270151.
Full textChakhari, Aymen. "Évaluation analytique de la précision des systèmes en virgule fixe pour des applications de communication numérique." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S059/document.
Traditionally, accuracy evaluation is performed through two different approaches. The first is to simulate the fixed-point implementation in order to assess its performance; such simulation-based approaches require large computing capacity and lead to prohibitive evaluation times. To avoid this problem, the work done in this thesis focuses on accuracy evaluation through analytical models, which describe the behavior of the system through analytical expressions of a defined precision metric. Several analytical models have been proposed to evaluate the fixed-point accuracy of linear time-invariant (LTI) systems and of non-LTI non-recursive and recursive linear systems. The objective of this thesis is to propose analytical models to evaluate the accuracy of digital communication systems and digital signal processing algorithms made up of non-smooth and non-linear operators, in terms of noise. In a first step, analytical models for evaluating the accuracy of decision operators and of their iterations and cascades are provided. In a second step, the data word-length is optimized for fixed-point hardware implementations of the decision feedback equalizer (DFE), based on the proposed analytical models, and of iterative decoding algorithms such as turbo decoding and LDPC (Low-Density Parity-Check) decoding under a particular quantization law. The first aspect of this work concerns analytical models for evaluating the accuracy of non-smooth decision operators and of cascades of decision operators; the characterization of quantization-error propagation through a cascade of decision operators is the basis of the proposed models. These models are then applied to evaluate the accuracy of the SSFE (Selective Spanning with Fast Enumeration) sphere-decoding algorithm used in MIMO (Multiple-Input Multiple-Output) transmission systems. Next, the accuracy evaluation of iterative structures of decision operators is addressed: the characterization of the quantization errors caused by fixed-point arithmetic leads to analytical models for digital signal processing applications containing iterative decision structures. A second approach, based on estimating an upper bound of the decision-error probability in the convergence mode, is proposed in order to reduce the evaluation time. These models are applied to the fixed-point specification of the decision feedback equalizer. Resource and power-consumption estimates on an FPGA are then obtained with the Xilinx tools in order to choose data widths that achieve a good accuracy/cost compromise. The last part of this work concerns the fixed-point modeling of iterative decoding algorithms; a model of the turbo-decoding and LDPC-decoding algorithms is given. This approach integrates the particular structure of these algorithms, which implies that the quantities computed in the decoder are quantified following an iterative approach. Furthermore, the fixed-point representation used differs from the conventional one based on the numbers of bits of the integer and fractional parts: the proposed approach is based on the dynamic range and the total number of bits. Moreover, choosing the dynamic range gives the fixed-point models more flexibility, since they are no longer limited to powers of two.
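For context, analytical noise models of the kind referred to above typically build on the standard uniform quantization-noise model (a textbook formula, not one quoted from the thesis):

```latex
% A fixed-point format with f fractional bits quantizes with step
% q = 2^{-f}. Modeling the rounding error e as uniformly distributed
% on [-q/2, q/2] (Widrow's model) gives zero mean and variance:
\[
  \sigma_e^2 \;=\; \frac{1}{q}\int_{-q/2}^{\,q/2} e^2 \, \mathrm{d}e
            \;=\; \frac{q^2}{12} \;=\; \frac{2^{-2f}}{12},
\]
% so each additional fractional bit lowers the noise power by a factor
% of 4 (about 6 dB). Analytical accuracy models propagate such noise
% sources through the system instead of simulating them.
```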
Daumas, Marc. "Contributions à l'Arithmétique des Ordinateurs : Vers une Maîtrise de la Précision." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 1996. http://tel.archives-ouvertes.fr/tel-00147426.
Hilaire, Thibault. "Analyse et synthèse de l'implémentation de lois de contrôle-commande en précision finie- Étude dans le cadre des applications automobiles sur calculateur embarquée -." Phd thesis, Université de Nantes, 2006. http://tel.archives-ouvertes.fr/tel-00086926.
This work addressed the implementation of control laws (from control theory or signal processing) under finite-precision constraints.
The implementation process degrades the law in many ways, and we are particularly interested in the quantization of the coefficients involved in the computations.
For a given law (filter or controller), there exist infinitely many possible numerical realizations which, although mathematically equivalent, are no longer equivalent in finite precision: state-space forms, delta realizations, direct forms, observer/state-feedback structures, cascade or parallel decompositions, and so on.
After presenting these different possibilities, this dissertation proposes a mathematical formalism, the specialized implicit form, which describes an enlarged set of implementations in a unified manner. Although macroscopic, it expresses precisely the computations to be performed and the parameters actually involved. Various measures, applied to this formalism, are then proposed to evaluate the impact of quantization (in fixed point and floating point) and to analyze the induced degradation.
Via an optimization problem, the realization that is most robust to the degradations induced by the finite-precision implementation process is then found.
Atay, Rachid. "Contribution à l'étude du comportement et à l'implémentation des algorithmes adaptatifs rapides stabilisés en précision finie sur processeur spécialisé en traitement du signal." Bordeaux 1, 1992. http://www.theses.fr/1992BOR10515.
Roch, Jean-Louis. "Calcul formel et parallélisme : l'architecture du système PAC et son arithmétique rationnelle." Phd thesis, Grenoble INPG, 1989. http://tel.archives-ouvertes.fr/tel-00334457.
Nguyen, Hai-Nam. "Optimisation de la précision de calcul pour la réduction d'énergie des systèmes embarqués." Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00705141.
Chotin-Avot, Roselyne. "Architectures matérielles pour l'arithmétique stochastique discrète." Paris 6, 2003. http://hal.upmc.fr/tel-01267458.
Mou, Zhi-Jian. "Filtrage RIF rapide : algorithmes et architectures." Paris 11, 1989. http://www.theses.fr/1989PA112406.
Tisserand, Arnaud. "Étude et conception d'opérateurs arithmétiques." Habilitation à diriger des recherches, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00502465.
Bocco, Andrea. "A variable precision hardware acceleration for scientific computing." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI065.
Most floating-point (FP) hardware units support the formats and operations specified in the IEEE 754 standard. These formats have fixed bit-lengths: they are defined on 16, 32, 64, and 128 bits. However, some applications, such as linear system solvers and computational geometry, benefit from different formats which can express FP numbers of different sizes and with different trade-offs between the exponent and mantissa fields. The class of variable-precision (VP) formats meets these requirements. This research proposes a VP FP computing system based on three computation layers. The external layer supports legacy IEEE formats for input and output variables. The internal layer uses variable-length internal registers for inner-loop multiply-adds. Finally, an intermediate layer supports loads and stores of intermediate results to cache memory without losing precision, with a dynamically adjustable VP format. The VP unit exploits the UNUM type I FP format and proposes solutions to some of its pitfalls, such as the variable latency of the internal operations and the variable memory footprint of the intermediate variables. Unlike IEEE 754, in UNUM type I the size of a number is stored within its representation. The unit implements a fully pipelined architecture, and it supports up to 512 bits of precision, internally and in memory, for both interval and scalar computing. The user can configure the storage format and the internal computing precision at 8-bit and 64-bit granularity. This system is integrated as a RISC-V coprocessor. It has been prototyped on an FPGA (Field-Programmable Gate Array) platform and also synthesized for a 28nm FDSOI process technology. The respective working frequencies of the FPGA and ASIC implementations are 50MHz and 600MHz. Synthesis results show an estimated chip area of 1.5mm2 and an estimated power consumption of 95mW. Experiments emulated in the FPGA environment show that the latency and the computation accuracy of this system scale linearly with the memory format length set by the user. In cases where legacy IEEE 754 formats do not converge, this architecture can achieve up to 130 decimal digits of precision, increasing the chances of obtaining output data with an accuracy similar to that of the input data. This high accuracy opens the possibility of using direct methods, which are more sensitive to computational error, instead of iterative methods, which always converge but whose latency is about ten times higher. Compared to low-precision FP formats, the use of high-precision VP formats in iterative methods drastically reduces the number of iterations required to converge, reducing the application latency by up to 50%. Compared with the MPFR software library, the proposed unit achieves speedups between 3.5x and 18x, with comparable accuracy.
Chevillard, Sylvain. "Évaluation efficace de fonctions numériques - Outils et exemples." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2009. http://tel.archives-ouvertes.fr/tel-00460776.
Chohra, Chemseddine. "Towards reproducible, accurately rounded and efficient BLAS." Thesis, Perpignan, 2017. http://www.theses.fr/2017PERP0065.
Numerical reproducibility failures arise in parallel computation because floating-point summation is non-associative: massively parallel systems dynamically modify the order of floating-point operations, so numerical results may change from one run to another. We propose to ensure reproducibility by extending, as far as possible, the IEEE 754 correct-rounding property to larger computing sequences. We introduce RARE-BLAS, a reproducible and accurate BLAS library that benefits from recent accurate and efficient summation algorithms. Solutions for level 1 (asum, dot and nrm2) and level 2 (gemv and trsv) routines are designed. Implementations relying on parallel programming APIs (OpenMP, MPI) and SIMD extensions are proposed. Their efficiency is studied in comparison with an optimized library (Intel MKL) and with other existing reproducible algorithms.
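The abstract does not name the summation algorithms used; as a minimal, classical example of the compensated-summation family they belong to, here is Kahan's algorithm in C:

```c
#include <stdio.h>

/* Kahan compensated summation: the rounding error of each addition is
   recovered and carried into the next iteration. */
static double kahan_sum(const double *x, int n) {
    double s = 0.0, c = 0.0;      /* running sum and compensation */
    for (int i = 0; i < n; i++) {
        double y = x[i] - c;      /* corrected term */
        double t = s + y;         /* low-order digits of y are lost... */
        c = (t - s) - y;          /* ...and recovered here (negated) */
        s = t;
    }
    return s;
}

int main(void) {
    double x[] = {1e16, 1.0, 1.0, -1e16};
    printf("kahan: %.17g\n", kahan_sum(x, 4));   /* prints 2 */
    double naive = 0.0;
    for (int i = 0; i < 4; i++) naive += x[i];
    printf("naive: %.17g\n", naive);             /* prints 0 */
    return 0;
}
```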
Damouche, Nasrine. "Improving the Numerical Accuracy of Floating-Point Programs with Automatic Code Transformation Methods." Thesis, Perpignan, 2016. http://www.theses.fr/2016PERP0032/document.
Critical software based on floating-point arithmetic requires a rigorous verification and validation process to improve our confidence in its reliability and safety. Unfortunately, the available techniques for this task often provide overestimates of the round-off errors; the Ariane 5 and Patriot missile failures are well-known examples of the resulting disasters. In recent years, several techniques have been proposed for transforming arithmetic expressions in order to improve their numerical accuracy and, in this work, we go one step further by automatically transforming larger pieces of code containing assignments, control structures, and functions. We define a set of transformation rules allowing the generation, under certain conditions and in polynomial time, of larger expressions by performing limited formal computations, possibly among several iterations of a loop. These larger expressions are better suited to improving, by re-parsing, the numerical accuracy of the program results. We use static analysis techniques based on abstract interpretation to over-approximate the round-off errors in programs and during the transformation of expressions. A tool has been implemented, and experimental results are presented concerning classical numerical algorithms and algorithms for embedded systems.
Nguyen, Hong Diep. "Efficient algorithms for verified scientific computing : Numerical linear algebra using interval arithmetic." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00680352.
Popescu, Valentina. "Towards fast and certified multiple-precision librairies." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEN036/document.
Many numerical problems require very accurate computations. Examples can be found in the field of dynamical systems, like the long-term stability of the solar system or the long-term iteration of the Lorenz attractor, one of the first models used for meteorological predictions. We are also interested in ill-posed semi-definite positive optimization problems that appear in quantum chemistry or quantum information. In order to tackle these problems using computers, every basic arithmetic operation (addition, multiplication, division, square root) requires more precision than is offered by common processors (binary32 and binary64). There exist multiple-precision libraries that allow the manipulation of very high precision numbers, but their generality (they are able to handle numbers with millions of digits) is quite a heavy alternative when high performance is needed. The major objective of this thesis was to design and develop a new arithmetic library that offers sufficient precision, is fast, and is also certified. We offer accuracy up to a few tens of digits (a few hundred bits) both on common CPU processors and on highly parallel architectures such as graphics cards (GPUs). We guarantee the results obtained by providing the algorithms with correctness and error-bound proofs.
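As a hedged sketch of the underlying technique (floating-point expansions), here is the textbook "sloppy" double-double addition built on error-free transformations; this is my transcription of a standard algorithm, not the library's code:

```c
#include <stdio.h>

typedef struct { double hi, lo; } dd;  /* value = hi + lo, lo small */

static double two_sum(double a, double b, double *e) {
    double s = a + b, z = s - a;
    *e = (a - (s - z)) + (b - z);
    return s;
}

static double fast_two_sum(double a, double b, double *e) {
    /* valid when |a| >= |b| (true after the renormalization below) */
    double s = a + b;
    *e = b - (s - a);
    return s;
}

/* "Sloppy" double-double addition: cheap, with a weaker error bound
   under heavy cancellation than the accurate variant. */
static dd dd_add(dd a, dd b) {
    double e, s = two_sum(a.hi, b.hi, &e);
    e += a.lo + b.lo;
    dd r;
    r.hi = fast_two_sum(s, e, &r.lo);
    return r;
}

int main(void) {
    dd a = {1.0, 1e-20}, b = {1e-16, 0.0};
    dd c = dd_add(a, b);   /* keeps bits a plain double would drop */
    printf("hi = %.17g, lo = %.17g\n", c.hi, c.lo);
    return 0;
}
```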
Rieu, Raphaël. "Development and verification of arbitrary-precision integer arithmetic libraries." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG023.
Arbitrary-precision integer arithmetic algorithms are used in contexts where both their performance and their correctness are critical, such as cryptographic software or computer algebra systems. GMP is a very widely used arbitrary-precision integer arithmetic library. It features state-of-the-art algorithms that are intricate enough that their formal verification is both justified and difficult. This thesis tackles the formal verification of the functional correctness of a large fragment of GMP using the Why3 deductive verification platform. In order to make this verification possible, I have made several additions to Why3 that enable the verification of C programs. Why3 features a functional programming and specification language called WhyML. I have developed models of the memory management and datatypes of the C language, allowing me to reimplement GMP's algorithms in WhyML and formally verify them. I have also extended Why3's extraction mechanism so that WhyML programs can be compiled to idiomatic C code, where only OCaml used to be supported. The compilation of my WhyML algorithms results in a verified C library called WhyMP. It implements many state-of-the-art algorithms from GMP, with almost all of the optimization tricks preserved. WhyMP is compatible with GMP and performance-competitive with the assembly-free version. It goes far beyond existing verified arbitrary-precision arithmetic libraries, and is arguably the most ambitious existing Why3 development in terms of size and proof effort. In an attempt to increase the degree of automation of my proofs, I have also added to Why3 a framework for proofs by reflection. It enables Why3 users to easily write dedicated decision procedures that are formally verified programs and make full use of WhyML's imperative features. Using this new framework, I was able to replace hundreds of handwritten proof annotations in my GMP verification by automated proofs.
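For flavor, here is a minimal C sketch of the kind of limb-level loop such a verification targets, written in the spirit of GMP's mpn_add_n but taken neither from GMP nor from WhyMP:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* r = a + b over n 64-bit limbs (least-significant limb first); returns
   the outgoing carry. Functional correctness means the (64n+1)-bit
   value (carry, r) equals the exact mathematical sum of a and b. */
static uint64_t limb_add(uint64_t *r, const uint64_t *a,
                         const uint64_t *b, size_t n) {
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t t = a[i] + carry;
        carry = (t < carry);        /* wrapped iff t < old carry */
        r[i] = t + b[i];
        carry += (r[i] < b[i]);     /* at most one carry fires per limb */
    }
    return carry;
}

int main(void) {
    uint64_t a[2] = {UINT64_MAX, 1}, b[2] = {1, 0}, r[2];
    uint64_t c = limb_add(r, a, b, 2);
    assert(r[0] == 0 && r[1] == 2 && c == 0);  /* (2^64-1)+1 carries up */
    printf("ok\n");
    return 0;
}
```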
Magaud, Nicolas. "Changements de Représentation des Données dans le Calcul des Constructions." Phd thesis, Université de Nice Sophia-Antipolis, 2003. http://tel.archives-ouvertes.fr/tel-00005903.
Full textpreuves formelles en théorie des types. Nous traitons cette question
lors de l'étude
de la correction du programme de calcul de la racine carrée de GMP.
A partir d'une description formelle, nous construisons
un programme impératif avec l'outil Correctness. Cette description
prend en compte tous les détails de l'implantation, y compris
l'arithmétique de pointeurs utilisée et la gestion de la mémoire.
Nous étudions aussi comment réutiliser des preuves formelles lorsque
l'on change la représentation concrète des données.
Nous proposons un outil qui permet d'abstraire
les propriétés calculatoires associées à un type inductif dans
les termes de preuve.
Nous proposons également des outils pour simuler ces propriétés
dans un type isomorphe. Nous pouvons ainsi passer, systématiquement,
d'une représentation des données à une autre dans un développement
formel.
Jeangoudoux, Clothilde. "Génération automatique de tests logiciels dans le contexte de la certification aéronautique." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS148.
This work is done in the context of the validation and verification of numerical software for aircraft certification. In this thesis we develop an automatic generator of reliable numerical tests, in accordance with the development rules mandated by the certification process. The tests, composed of stimulations associated with an expected behavior, are generated from a specification of the functional behavior of the software. Validation of the software by testing means that, given the stimulations as inputs of the software, we compare the obtained (binary) result with the expected behavior identified from the (decimal) functional specification. This work uses constraint programming (numerical constraints) and a combinatorial method of continuous-domain resolution (intervals) to construct a paving of the feasible set by inner boxes (containing only solutions) and outer boxes encompassing the boundary of the feasible region. The tests are then developed using mutation testing on constraints, which evaluates the quality of the current test campaign and adds new tests if needed. Conversions between binary and decimal formats are inevitable and introduce computational errors which can impact the reliability of the test results. We strengthen our solution through the development and use of reliable arithmetics (multiple-precision decimal interval arithmetic and binary/decimal mixed-radix arithmetic).
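The binary/decimal conversion issue mentioned above can be seen in two lines (a generic illustration, not an example from the thesis):

```c
#include <stdio.h>

/* The decimal constant 0.1 has no finite binary64 representation, so a
   decimal specification and a binary implementation can never agree
   exactly; test oracles must account for this conversion error. */
int main(void) {
    printf("%.17g\n", 0.1);          /* 0.10000000000000001 */
    printf("%.17g\n", 0.1 + 0.2);    /* 0.30000000000000004 */
    return 0;
}
```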
Noury, Ludovic. "Contribution à la conception de processeurs d'analyse de signaux à large bande dans le domaine temps-fréquence : l'architecture F-TFR." Paris 6, 2008. http://www.theses.fr/2008PA066206.
Oudjida, Abdelkrim Kamel. "Binary Arithmetic for Finite-Word-Length Linear Controllers : MEMS Applications." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2001/document.
This thesis addresses the problem of the optimal hardware realization of finite-word-length (FWL) linear controllers dedicated to MEMS applications. The biggest challenge is to ensure satisfactory control performance with minimal hardware. To this end, two distinct but complementary optimizations can be undertaken: in control theory and in binary arithmetic; only the latter is involved in this work. Because MEMS applications are targeted, the binary arithmetic must be fast enough to cope with the rapid dynamics of MEMS; power-efficient for embedded control; highly scalable for an easy adjustment of the control performance; and easily predictable, to provide a precise idea of the required logic resources before implementation. The exploration of a number of binary arithmetics showed that radix-2^r is the best candidate fitting the aforementioned requirements. It has been fully exploited to design efficient multiplier cores, which are the real engine of linear systems. The radix-2^r arithmetic was applied to the hardware integration of two FWL structures: a linear time-variant PID controller and a linear time-invariant LQG controller with a Kalman filter. Both controllers showed a clear superiority over their existing counterparts, or in comparison to their initial forms.
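Radix-2^r recoding generalizes the classic radix-4 (r = 2) modified Booth recoding; the following C sketch of the latter (my illustration of the principle, not the thesis's hardware) recodes a multiplier into digits in {-2,...,2} and rebuilds the product by shift-and-add:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Radix-4 Booth recoding: 16 digits d[k] in {-2,...,2} such that
   x = sum_k d[k] * 4^k for any 32-bit two's-complement x. Each digit
   comes from the overlapping bit triple (x_{2k+1}, x_{2k}, x_{2k-1}). */
static void booth4(int32_t x, int d[16]) {
    for (int k = 0; k < 16; k++) {
        int i = 2 * k;
        int bm1 = (i == 0) ? 0 : (int)((uint32_t)x >> (i - 1)) & 1;
        int b0  = (int)((uint32_t)x >> i) & 1;
        int bp1 = (int)((uint32_t)x >> (i + 1)) & 1;
        d[k] = -2 * bp1 + b0 + bm1;
    }
}

/* Shift-and-add multiplier: only y, 2y and their negations are needed,
   halving the number of partial products versus radix-2. */
static int64_t booth_mul(int32_t x, int32_t y) {
    int d[16];
    booth4(x, d);
    int64_t acc = 0;
    for (int k = 0; k < 16; k++)
        acc += (int64_t)d[k] * (int64_t)y * ((int64_t)1 << (2 * k));
    return acc;
}

int main(void) {
    int32_t t[] = {0, 1, -1, 2, 123456789, -987654321};
    for (int i = 0; i < 6; i++)
        for (int j = 0; j < 6; j++)
            assert(booth_mul(t[i], t[j]) == (int64_t)t[i] * t[j]);
    printf("radix-4 Booth recoding verified\n");
    return 0;
}
```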
Deest, Gaël. "Implementation trade-offs for FPGA accelerators." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S102/document.
Hardware acceleration is the use of custom hardware architectures to perform some computations faster or more efficiently than on general-purpose hardware. Accelerators have traditionally been used mostly in resource-constrained environments, such as embedded systems, where resource efficiency was paramount. Over the last fifteen years, with the end of empirical scaling laws, they have also made their way into datacenters and high-performance computing environments. FPGAs constitute a convenient implementation platform for such accelerators, allowing subtle, application-specific trade-offs between all performance metrics (throughput/latency, area, energy, accuracy, etc.). However, identifying good trade-offs is a challenging task, as the design space is usually extremely large. This thesis proposes design methodologies to address this problem. First, we focus on performance-accuracy trade-offs in the context of floating-point to fixed-point conversion. Using fixed-point arithmetic instead of floating-point is an effective way to reduce hardware resource usage, but it comes at a price in numerical accuracy. The validity of a fixed-point implementation can be assessed using either numerical simulations or analytical models derived from the algorithm. Compared to simulation-based methods, analytical approaches enable more exhaustive design-space exploration and can thus increase the quality of the final architecture; however, they are currently applicable only to limited classes of algorithms. In the first part of this thesis, we extend such techniques to multi-dimensional linear filters, such as image processing kernels. Our technique is implemented as a source-level analysis using techniques from the polyhedral compilation toolset, and validated against simulations with real-world input. In the second part of this thesis, we focus on iterative stencil computations, a naturally arising pattern found in many scientific and embedded applications. Because of this diversity, there is no single best architecture for stencils: each algorithm has unique computational features (update formula, dependences) and each application has different performance constraints and requirements. To address this problem, we propose a family of hardware accelerators for stencils, featuring carefully chosen design knobs, along with simple performance models to drive the exploration. Our architecture is implemented as an HLS-optimized code generation flow, and performance is measured through actual execution on the board. We show that these models can be used to identify the most interesting design points for each use case.
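As a minimal example of the arithmetic that floating-point to fixed-point conversion produces, here is a generic Q1.15 multiply in C (illustrative only, not the thesis's generated code):

```c
#include <stdint.h>
#include <stdio.h>

typedef int16_t q15;   /* Q1.15 format: value = raw / 2^15, range [-1, 1) */

/* Fixed-point multiply: exact full-width product, then round to nearest
   back to 15 fractional bits. Saturation is omitted for brevity
   (e.g., -1 * -1 would wrap). */
static q15 q15_mul(q15 a, q15 b) {
    int32_t p = (int32_t)a * b;            /* exact Q2.30 product */
    return (q15)((p + (1 << 14)) >> 15);   /* rounding shift to Q1.15 */
}

int main(void) {
    q15 half  = 1 << 14;                   /* 0.5 */
    q15 third = (q15)((1 << 15) / 3);      /* ~0.333, already quantized */
    printf("0.5 * 0.333... = %f\n", q15_mul(half, third) / 32768.0);
    return 0;
}
```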
Shao, Peihui. "Reliable Solid Modelling Using Subdivision Surfaces." Thèse, 2013. http://hdl.handle.net/1866/9990.
Subdivision surfaces are a promising alternative method for geometric modelling, and have some important advantages over the classical representation of trimmed NURBS, especially in modelling piecewise smooth surfaces. In this thesis, we consider the problem of geometric operations on subdivision surfaces under the strict requirement of correct topological form, and, since this problem may be ill-conditioned, we propose an approach for managing the uncertainty that exists inherently in geometric computation. Once the requirement of topological correctness is taken into account in assessing the robustness of geometric operations on solid models, it becomes clear that the problem may be ill-conditioned in the presence of the uncertainty that is ubiquitous in the data. Starting from this point, we propose an interactive approach to managing the uncertainty of geometric operations, in the context of computation using standard IEEE arithmetic and modelling using a subdivision-surface representation. An algorithm for the planar-cut problem is then presented, whose goal is to satisfy the topological requirement mentioned above.