Dissertations / Theses on the topic 'Polynomial calculus'
Mikša, Mladen. "On Complexity Measures in Polynomial Calculus." Doctoral thesis, KTH, Teoretisk datalogi, TCS, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-197278.
Understanding the Hardness of Theorem Proving
Curzi, Gianluca. "Non-laziness in implicit computational complexity and probabilistic λ-calculus." Thesis, Université de Paris (2019-....), 2020. http://www.theses.fr/2020UNIP7010.
This thesis explores the benefits of non-laziness in both Implicit Computational Complexity and probabilistic computation. More specifically, it can be divided into two main parts. In the first one, we investigate the mechanisms of linear erasure and duplication, which lead us to the type assignment systems LEM (Linearly Exponential Multiplicative Type Assignment) and LAM (Linearly Additive Multiplicative Type Assignment). The former is able to express weaker versions of the exponential rules of Linear Logic, while the latter has weaker additive rules, called linear additives. These systems enjoy, respectively, a cubic cut-elimination and a linear normalization result. Since linear additives do not require a lazy evaluation to avoid the exponential blow-up in normalization (unlike the standard additives), they can be employed to obtain an implicit characterization of the functions computable in probabilistic polynomial time that does not depend on the choice of the reduction strategy. This result is achieved in STA⊕, a system that extends STA (Soft Type Assignment) with a randomized formulation of linear additives; this system is also able to capture the complexity classes PP and BPP. The second part of the thesis focuses on the probabilistic λ-calculus endowed with an operational semantics based on head reduction, i.e. a non-lazy call-by-name evaluation policy. We prove that probabilistic applicative bisimilarity is fully abstract with respect to context equivalence. This result witnesses the discriminating power of non-laziness, which allows us to recover a perfect match between the two equivalences that was missing in the lazy setting. Moreover, we show that probabilistic applicative similarity is sound but not complete for the context preorder.
Araaya, Tsehaye. "The Symmetric Meixner-Pollaczek polynomials." Doctoral thesis, Uppsala University, Department of Mathematics, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3501.
The Symmetric Meixner-Pollaczek polynomials are considered. We denote these polynomials in this thesis by pn(λ)(x) instead of the standard notation pn(λ)(x/2, π/2), where λ > 0. The limiting case of these sequences of polynomials, pn(0)(x) = limλ→0 pn(λ)(x), is obtained, and is shown to be an orthogonal sequence in the strip S = {z ∈ ℂ : −1 ≤ Im(z) ≤ 1}.
From the point of view of Umbral Calculus, this sequence has a special property that makes it unique in the Symmetric Meixner-Pollaczek class of polynomials: it is of convolution type. A convolution type sequence of polynomials has a unique associated operator called a delta operator. Such an operator is found for pn(0) (x), and its integral representation is developed. A convolution type sequence of polynomials may have associated Sheffer sequences of polynomials. The set of associated Sheffer sequences of the sequence pn(0)(x) is obtained, and is found
to be ℙ = {{pn(λ)(x)}∞n=0 : λ ∈ ℝ}. The major properties of these sequences of polynomials are studied.
The polynomials {pn(λ)(x)}∞n=0, λ ≤ 0, are not orthogonal polynomials on the real line with respect to any positive real measure, as they fail to satisfy the three-term recurrence condition of Favard's theorem. For every λ ≤ 0, an associated nonstandard inner product is defined with respect to which pn(λ)(x) is orthogonal.
Finally, the connection and linearization problems for the Symmetric Meixner-Pollaczek polynomials are solved. In solving the connection problem the convolution property of the polynomials is exploited, which in turn helps to solve the general linearization problem.
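The three-term recurrence invoked via Favard's condition also makes these polynomials easy to generate numerically. The sketch below assumes the standard symmetric (φ = π/2) Meixner–Pollaczek recurrence (k+1)p_{k+1}(x) = x p_k(x) − (k + 2λ − 1)p_{k−1}(x), consistent with the normalization above but not stated in this abstract:

```python
from fractions import Fraction

def symmetric_mp(n, lam, x):
    """Evaluate p_0(x), ..., p_n(x) via the three-term recurrence
    (k+1) p_{k+1}(x) = x p_k(x) - (k + 2*lam - 1) p_{k-1}(x),
    a standard form for the symmetric (phi = pi/2) case."""
    p_prev, p_cur = Fraction(0), Fraction(1)  # p_{-1} = 0, p_0 = 1
    values = [p_cur]
    for k in range(n):
        p_next = (Fraction(x) * p_cur - (k + 2 * lam - 1) * p_prev) / (k + 1)
        p_prev, p_cur = p_cur, p_next
        values.append(p_cur)
    return values
```

With λ = 1 this gives p_1(x) = x and p_2(x) = (x² − 2λ)/2, matching a direct expansion.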
Heinz, Sebastian. "Preservation of quasiconvexity and quasimonotonicity in polynomial approximation of variational problems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2008. http://dx.doi.org/10.18452/15808.
In this thesis, we are concerned with three classes of non-linear problems that appear naturally in various fields of science, engineering and economics. In order to cover many different applications, we study problems in the calculus of variations (Chapter 3), partial differential equations (Chapter 4) as well as non-linear programming problems (Chapter 5). As an example of possible applications, we consider models of non-linear elasticity theory. The aim of this thesis is to approximate a given non-linear problem by polynomial problems. In order to achieve the desired polynomial approximation of problems, a large part of this thesis is dedicated to the polynomial approximation of non-linear functions. The Weierstraß approximation theorem forms the starting point. Based on this well-known theorem, we prove theorems that eventually lead to our main result: a given non-linear function can be approximated by polynomials so that essential properties of the function are preserved. This result is new for three properties that are important in the context of the considered non-linear problems: quasiconvexity in the sense of the calculus of variations, quasimonotonicity in the context of partial differential equations, and quasiconvexity in the sense of non-linear programming (Theorems 3.16, 4.10 and 5.5). Finally, we show the following: every non-linear problem that belongs to one of the three considered classes can be approximated by polynomial problems (Theorems 3.26, 4.16 and 5.8). The underlying convergence guarantees both the approximation in the parameter space and the approximation in the solution space. In this context, we use the concepts of Gamma-convergence (epi-convergence) and of G-convergence.
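A classical constructive reading of the Weierstraß theorem invoked here is Bernstein approximation on [0, 1]; a small illustrative sketch (not from the thesis, which works with finer structural properties):

```python
from math import comb

def bernstein(f, n, x):
    """Degree-n Bernstein approximation of f on [0, 1]:
    B_n(f; x) = sum_k f(k/n) * C(n, k) * x^k * (1-x)^(n-k).
    B_n(f; .) -> f uniformly on [0, 1] as n -> infinity."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))
```

For f(t) = t² the identity B_n(f; x) = x² + x(1 − x)/n makes the O(1/n) convergence visible.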
Souvay, Arnaud. "Une approche intrinsèque des foncteurs de Weil." Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0257/document.
We construct a functor from the category of manifolds over a general topological base field or ring K, of arbitrary characteristic, to the category of manifolds over A, where A is a so-called Weil algebra, i.e. a K-algebra of the form A = K + N, where N is a nilpotent ideal. The corresponding functor, denoted by T^A and called a Weil functor, can be interpreted as a functor of scalar extension from K to A. It is constructed by using Taylor polynomials, which we define in arbitrary characteristic. This result generalizes simultaneously results known for ordinary real manifolds and results for iterated tangent functors and for jet rings (A = K[X]/(X^{k+1})). We show that for any manifold M, T^A M is a polynomial bundle over M, and we investigate some algebraic aspects of the Weil functors, in particular those related to the action of the "Galois group" Aut_K(A). We study connections, which are an important tool for the analysis of fiber bundles, in two different contexts: connections on the Weil bundles T^A M, and connections on general bundles over M, following Ehresmann's approach. The curvature operators are induced by the action of the Galois group Aut_K(A) and they form an obstruction to the "integrability" of a K-smooth connection to an A-smooth one.
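The simplest jet ring above has a concrete computational face: for A = K[X]/(X²) (the dual numbers), T^A M is the tangent bundle, and arithmetic in A mechanizes first derivatives (forward-mode differentiation). A minimal sketch, not taken from the thesis:

```python
class Dual:
    """Arithmetic in the Weil algebra A = R[eps]/(eps^2) (dual numbers).
    An element a + b*eps carries a value and a first derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f over A at x + eps; the eps-coefficient is f'(x)."""
    return f(Dual(x, 1.0)).b
```

Evaluating a polynomial map over A instead of K computes its value and differential in one pass, which is exactly the scalar-extension reading of T^A.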
Schimanski, Stefan. "Polynomial Time Calculi." Diss., lmu, 2009. http://nbn-resolving.de/urn:nbn:de:bvb:19-99100.
Vinyals, Marc. "Space in Proof Complexity." Doctoral thesis, KTH, Teoretisk datalogi, TCS, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-206571.
Verron, Thibaut. "Régularisation du calcul de bases de Gröbner pour des systèmes avec poids et déterminantiels, et application en imagerie médicale." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066355/document.
Polynomial system solving is a problem with numerous applications, and Gröbner bases are an important tool in this context. Previous studies have shown that systems arising in applications usually exhibit more structure than arbitrary systems, and that these structures can be used to make computing Gröbner bases easier. In this thesis, we consider two examples of such structures. First, we study weighted homogeneous systems, which are homogeneous if we give each variable an arbitrary degree. This structure appears naturally in many applications, including a cryptographic problem (discrete logarithm). We show how existing algorithms, which are efficient for homogeneous systems, can be adapted to the weighted setting, and, generically, that their complexity bounds can be divided by a factor polynomial in the product of the weights. Then we consider a real roots classification problem for varieties defined by determinants. This problem has a direct application in control theory, for contrast optimization in magnetic resonance imaging. This specific system appears to be out of reach of existing algorithms. We show how these algorithms can benefit from the determinantal structure of the system, and, as an illustration, we answer the questions from the application to contrast optimization.
Monnot, Jérôme. "Familles d'instances critiques et approximation polynomiale." Paris 9, 1998. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1998PA090028.
Wailly, Olivier. "Placement optimal de capteurs sur un système à modèle polynomial." Corte, 2004. http://www.theses.fr/2005CORT3091.
This thesis presents a novel method for sensor placement on automated systems. The method applies only to polynomial systems and relies on symbolic computation software, in particular algorithms for Gröbner bases. After showing the interest of this method, algorithms and programs built around optimality criteria are presented; in particular, criteria such as cost and reliability are developed.
Moalla, Borhane. "Approximants de Padé, polynômes orthogonaux (cas matriciel)." Rouen, 1995. http://www.theses.fr/1995ROUES052.
Full textSerrière, Fabrice. "Approximation polynomiale du recouvrement d'ensemble." Paris 9, 2005. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2005PA090018.
This thesis deals with the polynomial approximation of the NP-hard set covering problem. In the first part, we place the covering problem within the field of complexity theory before reporting the most significant results on its classical approximability and inapproximability. In the second part, we work on the differential approximation of the problem in the weighted and unweighted cases, first locating the limits of the problem and then proving lower and upper bounds on the differential ratio of certain algorithms, including the well-known greedy one. In the third part, we study probabilistic versions of unweighted set cover and prove that, for the strategy we propose, they are equivalent to the weighted problem. In the fourth and last part, we study the approximation of covering from an experimental angle, testing the algorithms studied in the document as well as new algorithms on instances from the literature and instances that we generated. The first appendix is dedicated to well-known analyses of algorithms for the set covering problem. The second proposes a general method of analysis adaptable to many combinatorial problems. The third gives the experimental results studied in the fourth part, and the last appendix addresses a question on enumeration under constraints.
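The greedy algorithm referred to above admits a compact sketch; the classical analysis bounds its standard ratio by the harmonic number H(n) ≤ 1 + ln n (this code is illustrative, not the thesis's):

```python
def greedy_set_cover(universe, subsets):
    """Greedy algorithm for (unweighted) set cover: repeatedly pick the
    subset covering the most still-uncovered elements. The classical
    analysis gives an H(n) <= 1 + ln(n) approximation ratio."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)),
                   key=lambda i: len(uncovered & subsets[i]))
        if not uncovered & subsets[best]:
            raise ValueError("instance is infeasible")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```

On the instance below the greedy choice {1, 2, 3} then {4, 5} happens to be optimal; the ln n lower bound is realized only on carefully layered instances.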
Pham, Van tuan. "Applications des foncteurs strictement polynomiaux." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCD016/document.
The work presented in this thesis deals with the study of representations and cohomology of strict polynomial functors. The category P* of strict polynomial functors was introduced by Friedlander and Suslin in their founding article [Eric M. Friedlander and Andrei Suslin. Cohomology of finite group schemes over a field. Invent. Math., 127(2): 209-270, 1997]. Variants of this category, namely strict polynomial functors in several variables, were introduced in [Vincent Franjou, Eric M. Friedlander, Alexander Scorichenko, and Andrei Suslin. General linear and functor cohomology over finite fields. Ann. of Math. (2), 150(2): 663-728, 1999], [Vincent Franjou and Eric Friedlander. Cohomology of bifunctors. Proc. Lond. Math. Soc. (3), 97(2): 514-544, 2008], and [Antoine Touzé. Cohomology of classical algebraic groups from the functorial viewpoint. Adv. Math., 225(1): 33-68, 2010]. The first part of this thesis (chapter 2) studies the effect of the Frobenius twist on the cohomology of strict polynomial functors. [...]
Lin, Zhicong. "Eulerian calculus arising from permutation statistics." Phd thesis, Université Claude Bernard - Lyon I, 2014. http://tel.archives-ouvertes.fr/tel-00996105.
Full textWatel, Dimitri. "Approximation de l'arborescence de Steiner." Thesis, Versailles-St Quentin en Yvelines, 2014. http://www.theses.fr/2014VERS0025/document.
The directed Steiner tree problem (DST) asks, given a directed weighted graph, a node r called the root, and a set of nodes X called terminals, for a minimum-cost directed tree rooted at r spanning X. DST is an NP-complete problem. We are interested in the search for polynomial approximations for DST. Unless P = NP, DST cannot be approximated within a constant ratio or a logarithmic ratio with respect to k, where k is the number of terminals. The smallest known approximation ratio is O(k^ε), where ε is a positive real. In the first part, we provide three new approximation algorithms: a practical greedy algorithm merging two of the main approximation techniques for DST, an algorithm for the case where the graph is layered and the distance between the terminals and the root is high, and an exponential approximation algorithm combining an approximation algorithm with an exact algorithm, parameterized by the number of terminals the exact algorithm must cover. In the last part we study DST under two branching constraints. With this technique, we are able to reduce the number of feasible solutions and possibly facilitate the search for an optimal solution of the constrained problem. We study how such a solution can be built in polynomial time and whether it is a good approximation of an optimal solution of the unconstrained problem.
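As context for these ratios, the simplest polynomial baseline for DST (not one of the three algorithms above) takes the union of shortest root-to-terminal paths, a k-approximation in the worst case. A sketch with an assumed adjacency-list format:

```python
import heapq

def shortest_path_heuristic(graph, root, terminals):
    """Baseline DST heuristic: union of shortest root-to-terminal paths.
    graph: {u: [(v, cost), ...]}, directed, nonnegative costs.
    Returns the edge set of the resulting arborescence."""
    # Dijkstra from the root.
    dist, parent = {root: 0}, {}
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    # Union of the shortest paths to each terminal.
    edges = set()
    for t in terminals:
        while t != root:
            edges.add((parent[t], t))
            t = parent[t]
    return edges
```

Each terminal individually gets an optimal path, but sharing between paths is ignored, which is exactly what the greedy and layered algorithms of the thesis aim to exploit better.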
Dreossi, Tommaso. "Calcul d'atteignabilité et synthèse de paramètres pour systèmes dynamiques polynomiaux." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM096.
Dynamical systems are important mathematical models used to describe the temporal evolution of systems. Often dynamical systems are equipped with parameters that allow the models to better capture the characteristics of the abstracted phenomena. An important question around dynamical systems is to formally determine whether a model (biased by its parameters) behaves well. In this thesis we deal with two main questions concerning discrete-time polynomial dynamical systems: 1) the reachability computation problem, i.e., given a set of initial conditions and a set of parameters, compute the set of states reachable by the system in a bounded time horizon; 2) the parameter synthesis problem, i.e., given a set of initial conditions, a set of parameters, and a specification, find the largest set of parameters such that all the behaviors of the system starting from the set of initial conditions satisfy the specification. The reachability computation problem for nonlinear dynamical systems is well known for being nontrivial. Difficulties arise in handling and representing sets generated by nonlinear transformations. In this thesis we adopt a common technique that consists in over-approximating the complex reachable sets with sets that are easy to manipulate. The challenge is to determine accurate over-approximations. We propose methods to finely over-approximate the images of sets using boxes, parallelotopes, and a new data structure called parallelotope bundles (collections of parallelotopes whose intersections symbolically represent polytopes). These approximation techniques are the basic steps of our reachability algorithm. The synthesis of parameters aims at determining the values of the parameters such that the system behaves as expected. This feature can be used, for instance, to tune a model so that it imitates the modeled phenomenon with a sufficient level of precision. The contributions of this thesis concerning the parameter synthesis problem are twofold. Firstly, we define a new semantics for Signal Temporal Logic (STL) that allows one to formalize a specification and reason on sets of parameters and flows of behaviors. Secondly, we define an algorithm to compute the synthesis semantics of a formula against a discrete-time dynamical system. The result of the algorithm constitutes a conservative solution of the parameter synthesis problem. The developed methods for both reachability computation and parameter synthesis exploit and improve Bernstein coefficient computation. The techniques defined in this thesis have been implemented in a tool called Sapo. The effectiveness of our methods is validated by the application of our tool to several polynomial dynamical systems.
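The Bernstein coefficient computation mentioned at the end is the workhorse behind such polynomial over-approximations: converting a polynomial to the Bernstein basis yields coefficients whose minimum and maximum enclose its range. A hedged one-variable sketch on [0, 1] (tools like Sapo work with multivariate versions over parallelotopes):

```python
from math import comb

def bernstein_enclosure(coeffs):
    """Range enclosure of p(x) = sum_i coeffs[i] * x^i over [0, 1]:
    convert to the Bernstein basis via b_j = sum_{i<=j} C(j,i)/C(n,i) * a_i;
    then min(b) <= p(x) <= max(b) for all x in [0, 1]."""
    n = len(coeffs) - 1
    b = [sum(comb(j, i) / comb(n, i) * coeffs[i] for i in range(j + 1))
         for j in range(n + 1)]
    return min(b), max(b)
```

For p(x) = x − x² the enclosure is [0, 0.5] while the true range is [0, 0.25]: sound but conservative, which is why accuracy of the over-approximation is the stated challenge.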
Escoffier, Bruno. "Approximation polynomiale de problèmes d’optimisation : aspects structurels et opérationnels." Paris 9, 2005. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2005PA090065.
This thesis deals with the study of NPO problems (optimization problems whose decision version is in NP), and more precisely with the theory of worst-case polynomial approximation. In this setting, we aim to determine how a feasible solution of a certain quality can be produced efficiently, quality being measured either by the standard ratio or by the differential ratio. Our work can be divided into two parts: the structure of approximation classes, which emerges from the introduction of approximation-preserving reductions inducing notions of completeness; and the polynomial approximation of some NPO problems, where we studied approximation properties of the weighted coloring problem, the probabilistic coloring problem and the quadratic set covering problem under the standard ratio, and satisfiability problems under the differential ratio.
Blanc, Monique. "Modèle polynomial contraint discret par couche pour le calcul thermique et thermoélastique des multicouches épais." Phd thesis, Paris, ENSAM, 2005. http://tel.archives-ouvertes.fr/tel-00012005.
Full textJoldes, Mioara Maria. "Approximations polynomiales rigoureuses et applications." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00657843.
Full textGreuet, Aurélien. "Optimisation polynomiale et variétés polaires : théorie, algorithmes et implantations." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2013. http://tel.archives-ouvertes.fr/tel-00922805.
Full textBender, Matias Rafael. "Algorithms for sparse polynomial systems : Gröbner bases and resultants." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS029.
Solving polynomial systems is one of the oldest and most important problems in computational mathematics and has many applications in several domains of science and engineering. It is an intrinsically hard problem with complexity at least single exponential in the number of variables. However, in most cases, the polynomial systems coming from applications have some kind of structure. In this thesis we focus on exploiting the structure related to the sparsity of the supports of the polynomials; that is, we exploit the fact that the polynomials only have a few monomials with non-zero coefficients. Our objective is to solve the systems faster than the worst-case estimates that assume that all the terms are present. We say that a sparse system is unmixed if all its polynomials have the same Newton polytope, and mixed otherwise. Most of the work on solving sparse systems concerns the unmixed case, with the exceptions of mixed sparse resultants and homotopy methods. In this thesis, we develop algorithms for mixed systems. We use two prominent tools in nonlinear algebra: sparse resultants and Gröbner bases. We work on each theory independently, but we also combine them to introduce new algorithms: we take advantage of the algebraic properties of the systems associated to a non-vanishing resultant to improve the complexity of computing their Gröbner bases; for example, we exploit the exactness of some strands of the associated Koszul complex to deduce an early stopping criterion for our Gröbner basis algorithms and to avoid every redundant computation (reductions to zero). In addition, we introduce quasi-optimal algorithms to decompose binary forms.
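As a point of comparison for the sparse resultants used here, the classical (dense) Sylvester resultant of two univariate polynomials is the determinant of a structured matrix built from their coefficients; it vanishes exactly when the polynomials share a common root. A small exact-arithmetic sketch, not an algorithm from the thesis:

```python
from fractions import Fraction

def sylvester_resultant(f, g):
    """Resultant of f and g (coefficient lists, highest degree first),
    computed as the determinant of the (deg f + deg g)-size Sylvester
    matrix, by Gaussian elimination over exact rationals."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):  # n shifted copies of f's coefficients
        rows.append([Fraction(0)] * i + [Fraction(c) for c in f]
                    + [Fraction(0)] * (size - m - 1 - i))
    for i in range(m):  # m shifted copies of g's coefficients
        rows.append([Fraction(0)] * i + [Fraction(c) for c in g]
                    + [Fraction(0)] * (size - n - 1 - i))
    det = Fraction(1)
    for col in range(size):
        pivot = next((r for r in range(col, size) if rows[r][col]), None)
        if pivot is None:
            return Fraction(0)  # singular matrix: common root exists
        if pivot != col:
            rows[col], rows[pivot] = rows[pivot], rows[col]
            det = -det
        det *= rows[col][col]
        for r in range(col + 1, size):
            factor = rows[r][col] / rows[col][col]
            for c in range(col, size):
                rows[r][c] -= factor * rows[col][c]
    return det
```

Sparse (toric) resultant theory refines this by letting the matrix size depend on the Newton polytopes rather than on the dense degrees.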
Colson, Pierre. "Cogèbres et calcul symbolique." Paris, EHESS, 2004. http://www.theses.fr/2004EHES0159.
The algebra of the translation operator on a space of polynomials is well known and is the key to objects such as Appell polynomials, shift-invariant operators and the Euler–Maclaurin summation formula. In the 1970s G.-C. Rota proposed a new approach by interpreting the translation operator as a comultiplication. In the present thesis, we generalize some of Rota's results to arbitrary coalgebras. Our approach is singled out by the fact that it may be transposed to any context where an appropriate monoidal category is available. For example, the formulation for locally convex spaces provides a generalization of the concept of mean-periodicity, which goes back to J. Delsarte in the 1930s.
Milio, Enea. "Calcul de polynômes modulaires en dimension 2." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0285/document.
Modular polynomials of elliptic curves are a fundamental tool used for the computation of isogeny graphs and class polynomials, and for point counting; they are thus fundamental for elliptic curve cryptography. A generalization of these polynomials to principally polarized abelian surfaces was introduced by Régis Dupont in 2006, who also described an algorithm to compute them, while theoretical results can be found in a 2009 article of Bröker and Lauter. But these polynomials being very large, they had been computed only in the minimal case p = 2. In this thesis, we continue the work of Dupont and Bröker–Lauter by defining modular polynomials with new invariants, based on theta constants, and giving theoretical results about them. Using these invariants, we have been able to compute the polynomials up to p = 7, but larger examples look intractable. Thus we define a new kind of modular polynomial in which we restrict to surfaces having real multiplication by the maximal order of a real quadratic field. We present many examples and theoretical results.
Rollet, Yannis. "Vers une maîtrise des incertitudes en calculs des structures composites." Phd thesis, Ecole Polytechnique X, 2007. http://tel.archives-ouvertes.fr/tel-00257371.
Larrieu, Robin. "Arithmétique rapide pour des corps finis." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX073/document.
The multiplication of polynomials is a fundamental operation in complexity theory. Indeed, for many arithmetic problems, the complexity of algorithms is expressed in terms of the complexity of polynomial multiplication: for example, the complexity of Euclidean division or of multi-point evaluation/interpolation. This shows that a better multiplication algorithm allows the corresponding operations to be performed faster. A 2016 result gave an improved asymptotic complexity for the multiplication of polynomials over finite fields. This article is the starting point of the thesis; the present work aims to study the implications of the new complexity bound, from a theoretical and practical point of view. The first part focuses on the multiplication of univariate polynomials. It presents two new algorithms that should make the computation faster in practice (rather than asymptotically speaking). While it is difficult in general to observe the predicted speed-up, some specific cases are particularly favorable: the second proposed algorithm, which is specific to finite fields, leads to a better implementation of multiplication in F2[X], about twice as fast as state-of-the-art software. The second part deals with the arithmetic of multivariate polynomials modulo an ideal, as considered typically for polynomial system solving. This work assumes a simplified situation, with only two variables and certain regularity assumptions. In this particular case, there are algorithms whose complexity is asymptotically optimal (up to logarithmic factors) with respect to input/output size. The implementation for this specific case is then significantly faster than general-purpose software, the speed-up growing as the input size increases.
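Multiplication in F2[X] is a favorable case partly because coefficient addition is XOR, so the schoolbook product becomes a carry-less multiply on bitmasks; a naive sketch (real implementations use CLMUL-style instructions and FFT-based algorithms, not this loop):

```python
def gf2_mul(a, b):
    """Multiply two polynomials over F_2, each encoded as an integer
    bitmask (bit i = coefficient of X^i). Coefficient addition is XOR,
    so the schoolbook product is a carry-less multiplication."""
    result = 0
    while b:
        if b & 1:          # current bit of b set: add shifted copy of a
            result ^= a
        a <<= 1            # multiply a by X
        b >>= 1
    return result
```

For instance (X + 1)² = X² + 1 over F_2, since the cross terms cancel.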
Bouvier, Cyril. "Algorithmes pour la factorisation d'entiers et le calcul de logarithme discret." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0053/document.
In this thesis, we study the problems of integer factorization and discrete logarithm computation in finite fields. First, we study the ECM algorithm for integer factorization and present a method to analyze the elliptic curves used in this algorithm by studying the Galois properties of division polynomials. Then, we present in detail the NFS algorithm for integer factorization and we study in particular the polynomial selection step for which we propose improvements of existing algorithms. Next, we present two algorithms for computing discrete logarithms in finite fields: NFS-DL and FFS. We also give some details of two computations of discrete logarithms carried out during this thesis, one with NFS-DL and the other with FFS. Finally, we study a common step of the NFS algorithm for integer factorization and the NFS-DL and FFS algorithms for discrete logarithm computations: the filtering step. We study this step thoroughly and present an improvement for which we study the impact using data from several computations of discrete logarithms and factorizations.
Raksanyi, Atilla. "Utilisation du calcul formel pour l'étude des systèmes d'équations polynomiales : applications en modélisation." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb37600594x.
Full textRaksanyi, Atilla. "Utilisation du calcul formel pour l'étude des systèmes d'équations polynomiales (applications en modélisation)." Paris 9, 1986. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1986PA090011.
Full textLassus, Saint-Genies Hugues de. "Elementary functions : towards automatically generated, efficient, and vectorizable implementations." Thesis, Perpignan, 2018. http://www.theses.fr/2018PERP0010/document.
Elementary mathematical functions are pervasive in many high-performance computing programs. However, although the mathematical libraries (libms) on which these programs rely generally provide several flavors of the same function, these are fixed at implementation time. Hence this monolithic characteristic of libms is an obstacle for the performance of programs relying on them, because they are designed to be versatile at the expense of specific optimizations. Moreover, the duplication of shared patterns in the source code makes maintaining such code bases more error-prone and difficult. A current challenge is to propose "meta-tools" targeting automated high-performance code generation for the evaluation of elementary functions. These tools must allow reuse of generic and efficient algorithms for different flavors of functions or hardware architectures. Then, it becomes possible to generate optimized tailored libms with factorized generative code, which eases their maintenance. First, we propose a novel algorithm that generates lookup tables free of rounding errors for trigonometric and hyperbolic functions. Then, we study the performance of vectorized polynomial evaluation schemes, a first step towards the generation of efficient vectorized elementary functions. Finally, we develop a meta-implementation of a vectorized logarithm, which factors code generation for different formats and architectures. Our contributions are shown to be competitive with free or commercial solutions, which is a strong incentive to push for developing this new paradigm.
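The trade-off behind vectorized polynomial evaluation schemes can be illustrated by contrasting Horner's rule (a deep dependency chain) with Estrin's scheme (logarithmic depth, more instruction-level parallelism). A scalar sketch of both, with function names of our own choosing:

```python
def horner(coeffs, x):
    """Horner evaluation of sum_i coeffs[i] * x^i (coeffs by increasing
    degree): one multiply-add per coefficient, fully sequential."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def estrin(coeffs, x):
    """Estrin's scheme: combine adjacent coefficient pairs, square x,
    repeat. Each level halves the chain length, and the pair
    combinations at one level are independent (vectorizable)."""
    coeffs = list(coeffs)
    while len(coeffs) > 1:
        if len(coeffs) % 2:
            coeffs.append(0.0)
        coeffs = [coeffs[i] + coeffs[i + 1] * x
                  for i in range(0, len(coeffs), 2)]
        x *= x
    return coeffs[0]
```

Both compute the same polynomial; the interesting differences (latency, rounding behavior, SIMD mapping) only show up on real hardware, which is what the thesis measures.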
Dessombz, Olivier. "Analyse dynamique de structures comportant des paramètres incertains." Ecully, Ecole centrale de Lyon, 2000. http://www.theses.fr/2000ECDL0036.
We are interested in the modelling of structures with uncertain parameters, focusing on the characteristics of the static and dynamic responses of such mechanical systems. We distinguish two cases in this study: first, random parameters with a known probability law, and second, variables of which only the bounds are known. In the first part, we investigate structures whose uncertain parameters are modelled as random variables. We are particularly interested in the dynamic responses, both the frequency response functions and the eigenmodes. An innovative method is carried out, which consists in a projection on orthogonal polynomials (polynomial chaos) that leads to the main stochastic characteristics of the responses. In the second part, we use interval arithmetic to solve static and dynamic problems. We first propose a formulation of the mathematical problems adapted to the finite element modelling of mechanical systems. We then introduce a new formulation of an iterative algorithm that leads to envelopes of responses for interval linear systems.
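The interval part of such a study rests on elementary interval arithmetic; a minimal sketch (directed rounding, required for true rigor, is omitted). It also exhibits the dependency problem, [1, 2] − [1, 2] = [−1, 1] rather than [0, 0], which is why naive interval evaluation of finite element systems overestimates and adapted formulations are needed:

```python
class Interval:
    """Closed interval [lo, hi] with the basic arithmetic operations
    (no outward rounding in this sketch)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        # Subtraction pairs opposite endpoints: [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        products = [self.lo * o.lo, self.lo * o.hi,
                    self.hi * o.lo, self.hi * o.hi]
        return Interval(min(products), max(products))
```

Iterating interval operations through a linear solver naively inflates these enclosures, which motivates dedicated iterative algorithms for interval linear systems.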
Melczer, Stephen. "Analytic Combinatorics in Several Variables : Effective Asymptotics and Lattice Path Enumeration." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEN013/document.
The field of analytic combinatorics, which studies the asymptotic behaviour of sequences through analytic properties of their generating functions, has led to the development of deep and powerful tools with applications across mathematics and the natural sciences. In addition to the now classical univariate theory, recent work in the study of analytic combinatorics in several variables (ACSV) has shown how to derive asymptotics for the coefficients of certain D-finite functions represented by diagonals of multivariate rational functions. This thesis examines the methods of ACSV from a computer algebra viewpoint, developing rigorous algorithms and giving the first complexity results in this area under conditions which are broadly satisfied. Furthermore, this thesis gives several new applications of ACSV to the enumeration of lattice walks restricted to certain regions. In addition to proving several open conjectures on the asymptotics of such walks, a detailed study of lattice walk models with weighted steps is undertaken.
Bréhard, Florent. "Certified numerics in function spaces : polynomial approximations meet computer algebra and formal proof." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN032/document.
Full text
Rigorous numerics aims at providing certified representations for solutions of various problems, notably in functional analysis, e.g., differential equations or optimal control. Indeed, specific domains like safety-critical engineering or computer-assisted proofs in mathematics have stronger reliability requirements than what can be achieved by resorting to standard numerical analysis algorithms. Our goal consists in developing efficient algorithms, which are also validated / certified in the sense that all numerical errors (method or rounding) are taken into account. Specifically, a central contribution is to combine polynomial approximations with a posteriori fixed-point validation techniques. A C code library for rigorous polynomial approximations (RPAs) is provided, together with a Coq formal proof development, offering the highest confidence at the implementation level. After providing basic operations on RPAs, we focus on a new validation algorithm for Chebyshev basis solutions of D-finite functions, i.e., solutions of linear ordinary differential equations (LODEs) with polynomial coefficients. We give an in-depth complexity analysis, as well as an extension to general LODEs, and even coupled systems of them. These symbolic-numeric methods are finally used in several related problems: a new lower bound on the Hilbert number for quartic systems; a validation of trajectories arising in the linearized spacecraft rendezvous problem; the design of evaluation error efficient polynomial approximations; and the support and density reconstruction of particular measures using algebraic techniques
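As an illustration of the rigorous-polynomial-approximation idea (not the thesis's certified C/Coq code), one can build a Chebyshev interpolant with NumPy and observe how small its error is; a genuine RPA would replace the sampled error below with a proved a posteriori bound.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Degree-10 Chebyshev interpolant of exp on [-1, 1].
coeffs = C.chebinterpolate(np.exp, 10)

# Sampled error only; an RPA framework certifies a rigorous bound instead.
xs = np.linspace(-1.0, 1.0, 2001)
max_err = float(np.max(np.abs(C.chebval(xs, coeffs) - np.exp(xs))))
```

For an entire function like exp, the Chebyshev coefficients decay super-geometrically, so even a modest degree gives an error near 1e-10 here.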
Svartz, Jules. "Résolution de systèmes polynomiaux structurés de dimension zéro." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066621/document.
Full text
Multivariate polynomial systems arise naturally in many scientific fields, and systems coming from applications often carry a specific algebraic structure. A classical method for solving polynomial systems is based on the computation of a Gröbner basis of the ideal associated to the system. This thesis presents new tools for solving such structured systems, where the structure is induced by the action of a particular group or by a monomial structure, including multihomogeneous and quasi-homogeneous systems. On the one hand, this thesis proposes new algorithms that use these algebraic structures to improve the efficiency of solving such systems (invariant under the action of a group, or having their support in a particular set of monomials). These techniques make it possible to solve a problem arising in physics for instances out of reach until now. On the other hand, these tools improve the complexity bounds for solving several families of structured polynomial systems (systems globally invariant under the action of an abelian group, or with their support in the same polytope). In particular, this extends known results on bilinear systems to general multihomogeneous systems
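For contrast with the structured methods above, here is what the generic, structure-blind Gröbner-basis approach looks like with SymPy on a toy zero-dimensional system; a lexicographic basis triangularizes the system, so the last basis element involves the last variable alone and the system can be solved by back-substitution.

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Toy zero-dimensional system: the unit circle intersected with the line x = y.
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

# The basis triangularizes the system: its last element depends on y only.
polys = list(G.exprs)
```

The thesis's algorithms exploit group actions or monomial supports to do far less work than this generic computation on large structured instances.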
Javan, Peykar Ariyan. "Explicit polynomial bounds for Arakelov invariants of Belyi curves." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112075/document.
Full text
We explicitly bound the Faltings height of a curve over the field of algebraic numbers in terms of the Belyi degree. Similar bounds are proven for three other Arakelov invariants: the discriminant, Faltings' delta invariant and the self-intersection of the dualizing sheaf. Our results allow us to explicitly bound these Arakelov invariants for modular curves, Hurwitz curves and Fermat curves. Moreover, as an application, we show that the Couveignes-Edixhoven-Bruin algorithm runs in polynomial time under the Riemann hypothesis for zeta-functions of number fields. This was known before only for certain congruence subgroups. Finally, we utilize our results to prove a conjecture of Edixhoven, de Jong and Schepers on the Faltings height of a branched cover of the projective line over the ring of integers
Guyon, Christophe. "Calcul symbolique pour la planification de trajectoire des systèmes dynamiques Nilpotents." Lille 1, 1995. http://www.theses.fr/1995LIL10146.
Full text
Ayad, Ali. "Complexité de résolution de systèmes algébriques paramétrés." Rennes 1, 2006. https://tel.archives-ouvertes.fr/tel-00127383.
Full textPerrinel, Matthieu. "Investigating the expressivity of linear logic subsystems characterizing polynomial time." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL1001/document.
Full text
Implicit computational complexity is the characterization of complexity classes by syntactic restrictions on computation models. Several subsystems of linear logic characterizing polynomial time have been defined: these systems are sound (terms normalize in polynomial time) and complete (it is possible to simulate a Turing machine during a polynomial number of steps). One of the long-term goals is to statically prove complexity bounds. This is why we are looking for the most expressive characterizations possible. Our main tool is context semantics: tokens travel across proof-nets (programs of linear logic) according to some rules. The paths defined by these tokens represent the reduction of the proof-net. Contrary to previous works, we do not directly define subsystems of linear logic. We first define relations -> on subterms of proof-nets such that B -> C means "the number of copies of B depends on the number of copies of C". The acyclicity of -> allows us to bound the number of copies of any subterm, which bounds the complexity of the term. Then, we define subsystems of linear logic guaranteeing the acyclicity of ->. We also study characterizations of elementary time and primitive recursive time. In order to adapt our linear logic subsystems to richer languages, we adapt the context semantics to interaction nets, used as a target language for small programming languages. We use this context semantics to define a denotational semantics on interaction nets
Pasqualini, Olivier. "Éléments finis stochastiques étendus pour le calcul en fatigue de joints soudés avec géométries aléatoires." Nantes, 2013. http://www.theses.fr/2013NANT2090.
Full text
Welded joints are essential components in the construction of various fixed or floating structures. These elements are so important that we need to fully understand the fatigue process in order to foresee the structural behaviour under cyclic loading. The fatigue lifetime of a welded joint depends on various parameters, such as the geometry of the structure. The stress concentration factor Kt is an efficient key parameter for modelling the fatigue lifetime. It has the advantage of linking the theoretical stress to the maximum value of the local stresses, and thus to the fatigue lifetime through S-N curves. In order to compute the Kt coefficient from real data and their uncertainties, measurements along welded joints were carried out using a non-destructive control device with a laser measurement process. A statistical analysis of these measurements was performed to model the geometrical parameters by random variables and to identify their probability distributions. Kt computations were performed using the eXtended Stochastic Finite Element Method; this method combines the Stochastic Finite Element Method, efficient for solving problems governed by random physical inputs, and XFEM, efficient for defining the domain geometry implicitly by using level-sets. In particular, we use a non-intrusive least-squares method to obtain, from a small number of random evaluations, a formulation of Kt on a polynomial chaos basis. From these results, an original semi-probabilistic model is suggested which introduces the geometrical parameters
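The non-intrusive least-squares step can be sketched in a few lines (illustrative only; the toy response below is an assumption, not the thesis's welded-joint model): expand a response of a standard Gaussian variable on the probabilists' Hermite polynomial chaos basis and recover the coefficients by regression on random samples.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)
xi = rng.standard_normal(2000)     # samples of the Gaussian input parameter
y = xi**2                          # toy response g(xi) = xi^2

# Design matrix of He_0, He_1, He_2 at the samples; least-squares regression
# recovers the polynomial chaos coefficients non-intrusively.
V = He.hermevander(xi, 2)
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
# Since He_2(x) = x^2 - 1, we have xi^2 = He_0(xi) + He_2(xi) exactly,
# so the recovered coefficients are [1, 0, 1].
```

In the thesis the regression targets are XSFEM evaluations of Kt at sampled geometries rather than a closed-form response.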
Leroy, Richard. "Certificats de positivité et minimisation polynomiale dans la base de Bernstein multivariée." Phd thesis, Université Rennes 1, 2008. http://tel.archives-ouvertes.fr/tel-00349444.
Full text
In this thesis, we propose to study these questions in the particular case where the study is carried out on a simplex of $\R^k$.
The essential tool in our work is the Bernstein basis, which is better suited to the situation than the traditional monomial basis. In particular, it enjoys positivity and enclosure properties that are essential to our study.
It first yields an algorithm deciding whether a polynomial $f$ is positive on a simplex $V$ and, if so, providing a representation of $f$ that makes this positivity trivial: this is called a certificate of positivity.
Moreover, it underlies a polynomial minimization algorithm on a simplex. Both algorithms are certified, and their complexity is analyzed in this thesis. They have also been implemented on computer
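A one-variable sketch of the positivity-certificate idea (the thesis works on a simplex in several variables, with certified arithmetic): writing p in the Bernstein basis of [0, 1], positivity of all Bernstein coefficients certifies positivity of p on [0, 1], since the basis functions are nonnegative and sum to 1.

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients on [0, 1] of p(x) = sum(a[i] * x**i),
    via b[j] = sum_{i <= j} C(j, i) / C(n, i) * a[i]."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

b = bernstein_coeffs([1.0, 1.0, 1.0])   # p(x) = 1 + x + x^2
certificate = all(c > 0 for c in b)     # all positive => p > 0 on [0, 1]
```

When some coefficient is negative while p is in fact positive, subdividing the domain (or elevating the degree) refines the enclosure; controlling the cost of that refinement on a multivariate simplex is the subject of the thesis.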
Nesme, Vincent. "Complexité en requêtes et symétries." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2007. http://tel.archives-ouvertes.fr/tel-00156762.
Full text
This thesis studies the query complexity of symmetric problems, in the settings of classical probabilistic computation and of quantum computation.
In the quantum case, we give an application of the so-called "polynomial" lower-bound method to computing the query complexity of abelian hidden subgroup problems, via the "symmetrization" technique.
In the probabilistic case, under a "transitive symmetry" hypothesis on the problems, we give a combinatorial formula for the exact query complexity of the best non-adaptive algorithm. Moreover, we show that under certain symmetry hypotheses this best non-adaptive algorithm is optimal even among more general probabilistic algorithms, which yields an exact expression of the query complexity for the corresponding class of problems
Diamoutani, Mamadou. "De quelques méthodes de calcul de valeurs propres de grandes matrices." Grenoble INPG, 1986. http://tel.archives-ouvertes.fr/tel-00321850.
Full text
Tran, Cuong. "Calcul formel dans la base des polynômes unitaires de Chebyshev." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066379/document.
Full text
We propose a set of simple and fast algorithms for evaluating and using trigonometric expressions of the form $F=\sum_{k}f_k\cos\frac{k\pi}{n}$, $f_k\in\mathbb{Z}$, where $d
Lagarde, Guillaume. "Contributions to arithmetic complexity and compression." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC192/document.
Full text
This thesis explores two territories of computer science: complexity and compression. More precisely, in the first part, we investigate the power of non-commutative arithmetic circuits, which compute multivariate non-commutative polynomials. For that, we introduce various models of computation that are restricted in the way they are allowed to compute monomials. These models generalize previous ones that have been widely studied, such as algebraic branching programs. The results are of three different types. First, we give strong lower bounds on the number of arithmetic operations needed to compute some polynomials, such as the determinant or the permanent. Second, we design a deterministic polynomial-time algorithm to solve the white-box polynomial identity testing problem. Third, we exhibit a link between automata theory and non-commutative arithmetic circuits that allows us to derive some old and new tight lower bounds for some classes of non-commutative circuits, using a measure based on the rank of a so-called Hankel matrix. The second part is concerned with the analysis of the data compression algorithm called Lempel-Ziv. Although this algorithm is widely used in practice, we know little about its stability. Our main result shows that an infinite word compressible by LZ'78 can become incompressible by adding a single bit in front of it, thus closing a question posed by Jack Lutz in the late 90s under the name "one-bit catastrophe". We also give tight bounds on the maximal possible variation between the compression ratio of a finite word and that of its perturbation when one bit is added in front of it
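For readers unfamiliar with the algorithm discussed above, LZ'78 parses a word greedily into phrases, each being the shortest prefix of the remaining input not yet in the dictionary; the compression ratio is governed by the number of phrases. A minimal sketch of the parsing step only (no encoding):

```python
def lz78_phrases(word):
    """Greedy LZ'78 parsing: each phrase is the shortest prefix of the
    remaining input that is not yet in the dictionary."""
    dictionary, phrases, current = set(), [], ""
    for ch in word:
        current += ch
        if current not in dictionary:
            dictionary.add(current)
            phrases.append(current)
            current = ""
    if current:                 # a final phrase may repeat an earlier one
        phrases.append(current)
    return phrases

phrases = lz78_phrases("aaaa")  # parses as ["a", "aa", "a"]
```

The one-bit catastrophe result says that prepending a single bit to an infinite word can change this phrase structure so drastically that a compressible word becomes incompressible.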
Blondel, Jean-Louis. "Étude et mise en oeuvre d'une méthode de décomposition additive pour le calcul des valeurs propres d'une matrice." Caen, 1988. http://www.theses.fr/1988CAEN2035.
Full text
Rérat, Michel. "Méthode invariante de jauge pour le calcul de propriétés magnétiques : applications à de petites molécules." Paris 6, 1987. http://www.theses.fr/1987PA066024.
Full text
Dridi, Marwa. "Sur les méthodes rapides de résolution de systèmes de Toeplitz bandes." Thesis, Littoral, 2016. http://www.theses.fr/2016DUNK0402/document.
Full text
This thesis aims to design new fast algorithms for numerical computation via Toeplitz matrices. First, we introduce a fast algorithm for computing the inverse of a triangular Toeplitz matrix with real and/or complex entries, based on polynomial interpolation techniques. This algorithm requires only two FFTs of size 2n and is clearly efficient compared to its predecessors. Numerical accuracy and an error analysis are also considered. Numerical examples are given to illustrate the effectiveness of our method. In addition, we introduce a fast algorithm for solving linear banded Toeplitz systems. This new approach is based on extending the given matrix with several rows on the top and several columns on the right, and assigning zeros and some nonzero constants in each of these rows and columns in such a way that the augmented matrix has a lower-triangular Toeplitz structure. The stability of the algorithm is discussed and its performance is shown by numerical experiments. This is essential to connect our algorithms to applications such as image restoration, a key area in applied mathematics
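The triangular-Toeplitz inversion above rests on the correspondence between lower-triangular Toeplitz matrices and truncated power series: the inverse matrix corresponds to the reciprocal series. A small sketch using Newton iteration (the thesis's actual algorithm uses polynomial interpolation and two FFTs of size 2n; the convolutions below would be FFT-based in a fast implementation):

```python
import numpy as np

def tri_toeplitz_inverse_column(t, n):
    """First column of the inverse of the lower-triangular Toeplitz matrix
    whose first column is t, i.e. the reciprocal of sum(t[k] * x**k) mod x^n,
    by Newton iteration g <- 2g - t*g^2 (mod x^m), doubling m each step."""
    g = np.array([1.0 / t[0]])
    while len(g) < n:
        m = min(2 * len(g), n)
        h = np.convolve(t[:m], np.convolve(g, g))[:m]
        g = (2.0 * np.pad(g, (0, m - len(g))) - h)[:m]
    return g

t = np.array([2.0, 1.0, 0.5, 0.25])
col = tri_toeplitz_inverse_column(t, 4)

# Cross-check against a dense linear solve of T x = e_0.
T = np.array([[t[i - j] if i >= j else 0.0 for j in range(4)] for i in range(4)])
ref = np.linalg.solve(T, np.eye(4)[:, 0])
```

Since the inverse of a lower-triangular Toeplitz matrix is again lower-triangular Toeplitz, this first column determines the whole inverse.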
Garreau, Pierre-Olivier. "Contribution à un environnement pour le calcul scientifique et la modélisation : strates et systèmes polynômiaux sur les corps finis." Phd thesis, Université Joseph Fourier (Grenoble), 1994. http://tel.archives-ouvertes.fr/tel-00344983.
Full text
Nguyen, Viet Anh. "Contributions to tensor models, Hurwitz numbers and Macdonald-Koornwinder polynomials." Thesis, Angers, 2017. http://www.theses.fr/2017ANGE0052/document.
Full text
In this thesis, I study three related subjects: tensor models, Hurwitz numbers and Macdonald-Koornwinder polynomials. Tensor models are generalizations of matrix models as an approach to quantum gravity in arbitrary dimensions (matrix models give a 2D version). I study a specific model called the quartic melonic tensor model. Its special feature is that it can be transformed into a multi-matrix model which is very interesting by itself. With the help of well-established tools, I am able to compute the first two leading orders of its 1/N expansion. Among many interpretations, Hurwitz numbers count the number of weighted ramified coverings of Riemann surfaces. They are connected to many subjects of contemporary mathematics such as matrix models, integrable equations and moduli spaces of complex curves. My main contribution is an explicit formula for one-part double Hurwitz numbers with completed 3-cycles. This explicit formula also allows me to prove many interesting properties of these numbers. The final subject of my study is Macdonald-Koornwinder polynomials, in particular their Littlewood identities. These polynomials form important bases of the algebra of symmetric polynomials. One of the most important problems in symmetric function theory is to decompose a symmetric polynomial into the Macdonald basis. The obtained decomposition (in particular, if the coefficients are explicit and reasonably compact) is called a Littlewood identity. In this thesis, I study many recent Littlewood identities of Rains and Warnaar. My own contributions include a proof of an extension of one of their identities and partial progress towards the generalization of another
Lecerf, Grégoire. "Une alternative aux méthodes de réécriture pour la résolution des systèmes algébriques." Palaiseau, École polytechnique, 2001. http://www.theses.fr/2001EPXX0018.
Full textLebreton, Romain. "Contributions à l'algorithmique détendue et à la résolution des systèmes polynomiaux." Phd thesis, Ecole Polytechnique X, 2012. http://tel.archives-ouvertes.fr/tel-00780618.
Full text