Theses on the topic "Données algébriques"
Below are the top 33 dissertations (master's and doctoral theses) for research on the topic "Données algébriques".
Kaplan, Stéphane. "Spécification algébrique de types de données à accès concurrent". Paris 11, 1987. http://www.theses.fr/1987PA112335.
Mokadem, Riad. "Signatures algébriques dans la gestion de structures de données distribuées et scalables". Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090014.
Recent years have seen the emergence of new architectures involving multiple computers; among the most popular concepts are the multicomputer, the network of workstations and, more recently, peer-to-peer and grid computing. This thesis covers the design, implementation and performance measurement of a prototype SDDS manager, called SDDS-2005. It manages key-based ordered files in the distributed RAM of Windows machines forming a grid or P2P network. Our scheme can back up the RAM of each storage node onto its local disk; the goal is to write only the data that has changed since the last backup. We also address record updates and non-key searches (scans). The common denominator of these contributions is the application of the properties of a new signature scheme that we call algebraic signatures. With these signatures, one needs to locate only the areas of a bucket that changed since the last backup. Our signature-based scheme for updating records at the SDDS client should prove advantageous in client-server database systems in general; it holds the promise of interesting possibilities for transactional concurrency control, beyond the mere avoidance of lost updates. Furthermore, a partly pre-computed algebraic signature of a string encodes each symbol by its cumulative signature. Such encoding protects SDDS data against incidental viewing by an unauthorized server administrator; the method appears attractive, implies no storage overhead, is completely transparent to the servers and takes place at the client. Our scheme also provides fast string search (matching) directly on the encoded data at the SDDS servers, offering an alternative to the well-known Karp-Rabin type schemes. Scans can explore the storage nodes in parallel, matching records by their entire non-key content or by substring, prefix, longest common prefix or longest common substring; the search complexity is almost O(1) for prefix search. These operations can also detect and localize silent corruption. All these features should be of interest to P2P and grid computing. We then propose a novel string search algorithm, called n-gram search, which appears to be among the fastest known: it costs only a small fraction of an exact record match, especially for larger search strings. The experiments prove the high efficiency of our implementation. Our backup scheme is substantially more efficient with algebraic signatures; the signature calculus itself is substantially faster, the gain being about 30%. Experiments also show that our cumulative pre-computation notably accelerates string searches compared to the partial one, at the expense of a higher encoding/decoding overhead. These schemes are new alternatives to the Karp-Rabin type schemes, and likely to be faster in many cases. The speed of string matching opens interesting perspectives for the popular join, group-by, rollup and cube database operations. This work has been the subject of five publications in international conferences [LMS03, LMS05a, LMS05b, ML06, l&al06]; for convenience, we have included the latest ones. The package, termed SDDS-2005, is available for non-commercial use at http://ceria.dauphine.fr/. It builds on earlier versions of the prototype, the cumulative effort of several people, and on the n-gram algorithm implementation. We also presented the SDDS-2005 prototype at the Microsoft Research Academic Days 2006.
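To make the idea above concrete, here is a minimal sketch of signature-based backup change detection. It is only an illustration: the thesis computes algebraic signatures in Galois fields GF(2^f), whereas this sketch uses a prime field, and the names, modulus and page size are assumptions of the example, not the SDDS-2005 design.

```python
# Illustrative sketch only: the thesis uses algebraic signatures in Galois
# fields GF(2^f); this toy version works in the prime field Z_P instead.
P = (1 << 61) - 1        # field modulus (a Mersenne prime) -- an assumption
ALPHA = 2 ** 32          # fixed field element whose powers weight the symbols

def signature(data: bytes) -> int:
    """Signature of a byte string: sum of data[i] * ALPHA**i modulo P."""
    sig, power = 0, 1
    for byte in data:
        sig = (sig + byte * power) % P
        power = (power * ALPHA) % P
    return sig

def changed_pages(bucket: bytes, old_sigs: list, page_size: int = 4096) -> list:
    """Indices of the pages whose signature differs from the last backup's,
    i.e., the only areas that need to be written to disk again."""
    pages = [bucket[i:i + page_size] for i in range(0, len(bucket), page_size)]
    return [k for k, pg in enumerate(pages)
            if k >= len(old_sigs) or signature(pg) != old_sigs[k]]
```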
Chlyah, Sarah. "Fondements algébriques pour l'optimisation de la programmation itérative avec des collections de données distribuées". Thesis, Université Grenoble Alpes, 2022. http://www.theses.fr/2022GRALM011.
The goal of my PhD is to study the optimization and the distribution of queries, especially recursive queries, handling large amounts of data. I start by reviewing different query languages as well as formal approaches to intermediate representations of these languages. Languages and formal approaches are reviewed in the light of a number of aspects, such as expressivity, distribution, automatic optimizations, manipulation of complex data, graph querying, and impedance mismatch, with a special focus on the ability to express recursion. I then propose extensions to formal approaches along two main lines of work: (1) algebras based on the relational model, for which I propose Dist-μ-RA, and (2) algebras based on generic collections of arbitrary types, for which I propose μ-monoids. Dist-μ-RA is a system that extends the μ-RA algebra to the distributed setting. Regarding the algebraic aspect, it integrates well with the relational algebra and inherits its advantages, including the fact that queries are optimized regardless of their initial shape and translation into the algebra. With respect to distribution, different strategies for evaluating recursive algebraic terms in a distributed setting have been studied. These strategies are implemented as plans with automated techniques for distributing data in order to reduce communication costs. Experimental results on both real and synthetic graphs show the effectiveness of the proposed approach compared to existing systems. μ-monoids is an extension of the monoid algebra with a fixpoint operator that models recursion. The extended μ-monoids algebra is suitable for modeling recursive computations with distributed data collections such as the ones found in Big Data frameworks. The major interest of the "μ" fixpoint operator is that, under prerequisites that are often met in practice, it can be considered as a monoid homomorphism and can thus be evaluated by parallel loops with one final merge, rather than by a global loop requiring network overhead after each iteration. Rewriting rules for optimizing fixpoint terms, such as pushing filters, are proposed. In particular, I propose a sufficient condition on the repeatedly evaluated term (φ), regardless of its shape, as well as a method using polymorphic types and a type system such as Scala's to check whether this condition holds. I also propose a rule for prefiltering a fixpoint before a join, and a third rule that allows aggregation functions to be pushed inside a fixpoint. Experiments with the Spark platform illustrate the performance gains brought by these systematic optimizations.
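As background for the fixpoint operator discussed above, the following is a minimal single-machine sketch of evaluating μX.(base ∪ φ(X)) semi-naively, i.e., feeding back only newly derived tuples; this re-feeding is sound precisely when φ distributes over union, which echoes the homomorphism prerequisite mentioned in the abstract. The transitive-closure φ is our own toy example, not code from the thesis, and no distribution or Spark machinery is shown.

```python
# Minimal single-machine sketch of the fixpoint semantics only:
# mu X. (base ∪ phi(X)), evaluated semi-naively.
def fixpoint(base: set, phi) -> set:
    result, delta = set(base), set(base)
    while delta:
        delta = phi(delta) - result   # keep only tuples never seen before
        result |= delta
    return result

# Toy phi: extend each known path by one edge (a step of transitive closure).
edges = {(1, 2), (2, 3), (3, 4)}
phi = lambda paths: {(x, w) for (x, y) in paths for (z, w) in edges if y == z}
print(sorted(fixpoint(edges, phi)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```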
Dumonceaux, Frédéric. "Approches algébriques pour la gestion et l’exploitation de partitions sur des jeux de données". Nantes, 2015. http://archive.bu.univ-nantes.fr/pollux/show.action?id=c655f585-5cf3-4554-bea2-8e488315a2b9.
The rise of data analysis methods in a growing number of contexts requires the design of new tools for managing and handling extracted data. Summarization processes are often formalized through set partitions, whose handling depends on the applicative context and on their inherent properties. First, we propose to model the management of aggregation query results over a data cube within the algebraic framework of the partition lattice, and we highlight the value of such an approach for minimizing both the space and the time required to generate those results. We then deal with the partition consensus issue, emphasizing the challenges arising from the lack of properties ruling partition combination. The idea put forward is to deepen the algebraic properties of the partition lattice in order to strengthen its understanding and to generate new consensus functions. Finally, we propose a model and an implementation of operators defined over generic partitions, and we carry out experiments demonstrating the benefit of their conceptual and operational use.
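For reference, the two lattice operations on set partitions underlying such an approach can be sketched in a few lines (under the usual refinement order: meet = coarsest common refinement, join = finest common coarsening). This is a generic illustration, not the operators implemented in the thesis.

```python
# Set partitions modelled as frozensets of frozenset blocks.
from itertools import product

def meet(p, q):
    """Coarsest common refinement: intersect every pair of blocks."""
    return frozenset(b & c for b, c in product(p, q) if b & c)

def join(p, q):
    """Finest common coarsening: merge blocks that share an element."""
    merged = []
    for b in [set(b) for b in p] + [set(b) for b in q]:
        for m in [m for m in merged if m & b]:
            merged.remove(m)
            b |= m
        merged.append(b)
    return frozenset(frozenset(b) for b in merged)

p = frozenset({frozenset({1, 2}), frozenset({3}), frozenset({4})})
q = frozenset({frozenset({1}), frozenset({2, 3}), frozenset({4})})
print(meet(p, q))   # {{1}, {2}, {3}, {4}}
print(join(p, q))   # {{1, 2, 3}, {4}}
```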
Weisbecker, Clement. "Amélioration des solveurs multifrontaux à l'aide de représentations algébriques rang-faible par blocs". Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2013. http://tel.archives-ouvertes.fr/tel-00934939.
Koehret, Bernard. "Conception d'un simulateur de procédés". Toulouse, INPT, 1987. http://www.theses.fr/1987INPT022G.
Bernot, Gilles. "Une sémantique algébrique pour une spécification differenciée des exceptions et des erreurs : application à l'implémentation et aux primitives de structuration des spécifications formelles". Paris 11, 1986. http://www.theses.fr/1986PA112262.
Faget, Zoé. "Un modèle pour la gestion des séquences temporelles synchronisées : Application aux données musicales symboliques". Phd thesis, Université Paris Dauphine - Paris IX, 2011. http://tel.archives-ouvertes.fr/tel-00676537.
Coulon, Fabien. "Minimisation d'automates non-déterministes, recherche d'expressions dans un texte et comparaison de génomes". Rouen, 2004. http://www.theses.fr/2004ROUES029.
The initial topic of this thesis is automata minimization. I prove a full-minimization technique that was stated without proof by Sengoku, together with heuristics based on state simulations that combine left and right languages; this work also provides a reduction technique for Büchi automata. On the other hand, I focus on managing the space complexity of determinization through an optimized partial determinization. The rest of the thesis is closer to practical applications. First, I focus on searching for secondary expressions in genomes, based on context-free grammars; I give an adaptation of Valiant's algorithm, and a CYK algorithm for approximate search of single hairpins. Finally, I investigate gene-team search across several genomes. An underlying problem is the search for common connected sets across several graphs; I describe our new algorithm, which is specific to interval graphs.
Pralet, Cédric. "Un cadre algébrique général pour représenter et résoudre des problèmes de décision séquentielle avec incertitudes, faisabilités et utilités". Toulouse, ENSAE, 2006. http://www.theses.fr/2006ESAE0013.
Paluba, Robert. "Geometry of complex character varieties". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS155/document.
The aim of this thesis is to study various examples of tame and wild character varieties of complex curves. In the first part, we study an example of a tame character variety of the four-holed sphere with simple poles and the exotic group G₂ as the structure group. We show that for a particular choice of conjugacy classes in G₂, the resulting affine symplectic variety of complex dimension two is isomorphic to the Fricke-Klein cubic surface, known from the classical case of the character variety for the group SL₂(C). Furthermore, we interpret the braid group orbits of size 7 in this affine surface as lines passing through triples of points in the Fano plane P²(F₂). In the second part, we establish multiple cases of the so-called "echo conjecture", corresponding to the Painlevé I, II and IV differential equations. We show that for the Riemann sphere with one singular point and suitably chosen behavior at the singularity, there are three infinite families of wild character varieties of complex dimension two. In these families, the rank of the structure group is not bounded and goes to infinity. The main result of this part shows that in each family all the members are affine cubic surfaces, isomorphic to the phase spaces of the aforementioned Painlevé equations. By computing the geometric invariant theory quotients, we provide explicit isomorphisms between the rings of functions of the arising affine varieties and relate the coefficients of the affine surfaces. The last part is dedicated to the study of a family of spaces generalizing the Painlevé I and II hierarchies for higher-rank linear groups, by means of quasi-Hamiltonian geometry. In particular, for each variety B_k in the hierarchy there is a group-valued moment map, and these turn out to be Euler's continuant polynomials. These in turn admit factorizations into products of shorter continuants, and we show that for a continuant of length k, the distinct factorizations into continuants of length one are counted by the Catalan number C_k. Moreover, each such factorization provides an embedding of the fusion product of k copies of GLn(C) onto a dense open subset of B_k, under which the quasi-Hamiltonian structures match up. Finally, using this result we derive the formula for the quasi-Hamiltonian two-form on the space B_k, which generalizes the formula known for the case of B₂.
Losekoot, Théo. "Automatic program verification by inference of relational models". Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS102.
This thesis is concerned with automatically proving properties about the input/output relation of functional programs operating over algebraic data types. Recent results show how to approximate the image of a functional program using a regular tree language. Though expressive, those techniques cannot prove properties relating the input and the output of a function, e.g., that the output of a function reversing a list has the same length as the input list. In this thesis, we build upon those results and define a procedure to compute or over-approximate such a relation, thereby allowing us to prove properties that require a more precise relational representation. Formally, the program verification problem reduces to the satisfiability of clauses over the theory of algebraic data types, which we solve by exhibiting a Herbrand model of the clauses. We propose two relational representations of these Herbrand models: convoluted tree automata and shallow Horn clauses; convoluted tree automata generalize tree automata and are in turn generalized by shallow Horn clauses. The Herbrand model inference problem arising from relational verification is undecidable, so we propose an incomplete but sound inference procedure. Experiments show that this procedure performs well in practice with respect to state-of-the-art tools, both for verifying properties and for finding counterexamples.
Germain, Christian. "Etude algébrique, combinatoire et algorithmique de certaines structures non associatives (magmas, arbres, parenthésages)". Dijon, 1996. http://www.theses.fr/1996DIJOS018.
Roy-Pomerleau, Xavier. "Inférence d'interactions d'ordre supérieur et de complexes simpliciaux à partir de données de présence/absence". Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/66994.
Despite the effectiveness of networks to represent complex systems, recent work has shown that their structure sometimes limits the explanatory power of theoretical models, since it only encodes dyadic interactions. If a more complex interaction exists in the system, it is automatically reduced to a group of pairwise, first-order interactions. We thus need structures that can take higher-order interactions into account. However, whether relationships are of higher order or not is rarely explicit in real data sets. This is the case for presence/absence data, which only indicate which species (of animals, plants or others) can be found (or not) on a site, without showing the interactions between them. The goal of this project is to develop an inference method to find higher-order interactions within presence/absence data. Two frameworks are examined. The first is based on comparing the topology of the data, obtained under a non-restrictive hypothesis, with the topology of a random ensemble. The second uses log-linear models and hypothesis testing to infer interactions one by one up to the desired order. From this framework, we have developed several inference methods that generate simplicial complexes (or hypergraphs), which can be studied with the regular tools of network science as well as with homology. In order to validate these methods, we have developed a generative model of presence/absence data in which the true interactions are known. Results have also been obtained on real data sets: for instance, from presence/absence data of nesting birds in Québec, we were able to infer co-occurrences of order two.
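As a toy stand-in for the hypothesis-testing framework described above (the thesis uses log-linear models; this sketch only tests a single pairwise co-occurrence, with a one-sided Fisher exact test computed from the hypergeometric distribution), one could write:

```python
# Toy stand-in, not the authors' procedure: p-value of observing at least the
# given overlap if the two species occupied sites independently at random.
from math import comb

def cooccurrence_pvalue(sites_a: set, sites_b: set, n_sites: int) -> float:
    a, b, k = len(sites_a), len(sites_b), len(sites_a & sites_b)
    return sum(comb(a, i) * comb(n_sites - a, b - i)
               for i in range(k, min(a, b) + 1)) / comb(n_sites, b)

# Species A seen on 6 sites, species B on 6 sites, 3 shared, 20 sites total.
print(cooccurrence_pvalue({0, 1, 2, 3, 4, 5}, {3, 4, 5, 6, 7, 8}, 20))
```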
Arsigny, Vincent. "Traitement de données dans les groupes de Lie : une approche algébrique. Application au recalage non-linéaire et à l'imagerie du tenseur de diffusion". Phd thesis, Ecole Polytechnique X, 2006. http://tel.archives-ouvertes.fr/tel-00121162.
Favardin, Chantal. "Détermination automatique de structures géométriques destinées à la reconstitution de courbes et de surfaces à partir de données ponctuelles". Toulouse 3, 1993. http://www.theses.fr/1993TOU30006.
Durvye, Clémence. "Algorithmes pour la décomposition primaire des idéaux polynomiaux de dimension nulle donnés en évaluation". Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2008. http://tel.archives-ouvertes.fr/tel-00275219.
In this thesis, we give a concise presentation of the latter algorithm (the Kronecker solver, described in the companion record below), together with a self-contained proof of its correctness. All our proofs are closely tied to the algorithms, and they yield classical results in algebraic geometry, such as a Bézout theorem. Beyond their pedagogical interest, these proofs make it possible to lift certain regularity hypotheses, and thus to extend the algorithm to the computation of multiplicities at no extra cost.
We then present a primary decomposition algorithm for zero-dimensional polynomial ideals, together with a precise complexity analysis: the complexity is polynomial in the number of variables, in the evaluation cost of the system, and in a Bézout number.
Durvye, Clémence. "Algorithmes pour la décomposition primaire des idéaux polynomiaux de dimension nulle donnés en évaluation". Phd thesis, Versailles-St Quentin en Yvelines, 2008. http://www.theses.fr/2008VERS0034.
Nowadays, polynomial system solvers are involved in sophisticated computations in algebraic geometry as well as in practical engineering. The most popular algorithms are based on Gröbner bases, resultants, Macaulay matrices, or triangular decompositions. In all these algorithms, multivariate polynomials are expanded in a monomial basis, and the computations mainly reduce to linear algebra. The major drawback of these techniques is the exponential explosion of the size of the eliminant polynomials. Alternatively, the Kronecker solver uses data structures that represent the input polynomials as functions computing their values at any given point. In this PhD thesis, we give a concise presentation of the Kronecker solver, with a self-contained proof of correctness. Our proofs closely follow the algorithms, and as consequences we obtain some classical results in algebraic geometry, such as a Bézout theorem. Beyond their pedagogical interest, these new proofs allow us to discard some regularity hypotheses, and so to enhance the solver in order to compute the multiplicities of the zeros without any extra cost. At last, we design a new algorithm for the primary decomposition of a zero-dimensional polynomial ideal. We also give a cost analysis of this algorithm, which is polynomial in the number of variables, in the evaluation cost of the input system, and in a Bézout number. Keywords: algorithm, polynomial solving, primary decomposition, complexity, effective algebraic geometry.
Debarbieux, Denis. "Modélisation et requêtes des documents semi-structurés : exploitation de la structure de graphe". Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2005. http://tel.archives-ouvertes.fr/tel-00619303.
Berkouk, Nicolas. "Persistence and Sheaves : from Theory to Applications". Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX032.
Topological data analysis is a recent field of research aiming at using techniques coming from algebraic topology to define descriptors of datasets. To be useful in practice, these descriptors must be computable and must come with a notion of metric, in order to express their stability properties with respect to the noise that always comes with real-world data. Persistence theory was elaborated in the early 2000s as a first theoretical setting in which to define such descriptors, the now famous barcodes. However well suited to computer implementation, persistence theory has certain limitations. In this manuscript, we establish explicit links between the theory of derived sheaves equipped with the convolution distance (after Kashiwara-Schapira) and persistence theory. We start by showing a derived isometry theorem for constructible sheaves over R: we express the convolution distance between two sheaves as a matching distance between their graded barcodes. This enables us to conclude, in this setting, that the convolution distance is closed, and that the collection of constructible sheaves over R equipped with the convolution distance is locally path-connected. Then, we observe that the collection of zig-zag/level-set persistence modules associated to a real-valued function carries extra structure, which we call Mayer-Vietoris systems, and we classify all Mayer-Vietoris systems under finiteness assumptions. This allows us to establish a functorial isometric correspondence between the derived category of constructible sheaves over R equipped with the convolution distance and the category of strongly pfd Mayer-Vietoris systems endowed with the interleaving distance. We deduce from this result a way to compute barcodes of sheaves with already existing software. Finally, we give a purely sheaf-theoretic definition of the notion of ephemeral persistence module. We prove that the observable category of persistence modules (the quotient of the category of persistence modules by the subcategory of ephemeral ones) is equivalent to the well-known category of γ-sheaves.
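For readers unfamiliar with barcodes, here is a self-contained toy computation of the 0-dimensional sublevel-set barcode of a function sampled on a line, using union-find and the elder rule. It illustrates the persistence-theory side only, not the sheaf-theoretic constructions of the thesis.

```python
# Sweep the samples by increasing value, merge adjacent components with
# union-find, and apply the elder rule (the younger component dies).
def barcode(values):
    parent, bars = {}, []
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in sorted(range(len(values)), key=lambda i: values[i]):
        parent[i] = i                      # a component is born at values[i]
        for j in (i - 1, i + 1):           # try to merge with both neighbors
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    young, old = (ri, rj) if values[ri] > values[rj] else (rj, ri)
                    if values[young] < values[i]:    # skip zero-length bars
                        bars.append((values[young], values[i]))
                    parent[young] = old
    bars.append((min(values), float("inf")))  # the oldest component never dies
    return bars

print(barcode([3.0, 1.0, 4.0, 0.0, 2.0]))     # [(1.0, 4.0), (0.0, inf)]
```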
Khiari, Souad. "Problèmes inverses de points sources dans les modèles de transport dispersif de contaminants : identifiabilité et observabilité". Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2301.
The questions addressed in this thesis are of inverse type: the reconstruction of point-wise sources, and the data completion problem, in parabolic models of contaminant transport. The mathematical modelling of water pollution involves two tracers: the dissolved oxygen (DO) and the biochemical oxygen demand (BOD), the quantity of oxygen necessary for the biodegradation of organic matter. During the biodegradation process, aerobic bacteria play a leading part: these micro-organisms decompose polluting organic matter by consuming the oxygen dissolved in the medium. To compensate for missing data, fields that are solutions of the problem are observed, directly or indirectly. The resulting inverse problems are ill-posed: their mathematical study raises serious complications and their numerical treatment is not easy. We prove a uniqueness result for fixed sources in the case of moving observations. In practice, direct measurements of the BOD are difficult to obtain, whereas data on the DO can be collected in real time at a moderate cost. The BOD is thus observed indirectly: thanks to the coupling in the Streeter and Phelps system, information passes from the DO to the BOD. For this problem, we establish a uniqueness result for the reconstruction of the source. We then examine the degree of instability of the equation to be solved, since the behaviour of numerical methods depends on this type of information.
Buchet, Mickaël. "Topological inference from measures". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112367/document.
Massive amounts of data are now available for study, and asking questions that are both relevant and possible to answer is a difficult task. One can also look for something other than the answer to a precise question: topological data analysis looks for structure in point cloud data, which can be informative by itself and can also provide directions for further questioning. A common challenge faced in this area is the choice of the right scale at which to process the data. One widely used tool in this domain is persistent homology. By processing the data at all scales, it does not rely on a particular choice of scale; moreover, its stability properties provide a natural way to go from discrete data to an underlying continuous structure. Finally, it can be combined with other tools, like the distance to a measure, which makes it possible to handle unbounded noise. The main caveat of this approach is its high complexity. In this thesis, we introduce topological data analysis and persistent homology, then show how to use approximation to reduce the computational complexity. We provide an approximation scheme for the distance to a measure and a sparsification method for weighted Vietoris-Rips complexes, in order to approximate persistence diagrams with practical complexity, and we detail the specific properties of these constructions. Persistent homology was previously shown to be of use for scalar field analysis; we provide a way to combine it with the distance to a measure in order to handle a wider class of noise, especially data with unbounded errors. Finally, we discuss the interesting opportunities opened by these results for studying data where parts are missing or erroneous.
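The empirical distance to a measure mentioned above has a simple closed form on a point cloud: the root mean squared distance from a query point to its k nearest sample points, where k is a fraction m of the sample size. A minimal sketch (the parameter names are ours):

```python
from math import dist, sqrt

def dtm(x, sample, m=0.1):
    """Empirical distance to a measure with mass parameter m."""
    k = max(1, int(m * len(sample)))
    nearest_sq = sorted(dist(x, p) ** 2 for p in sample)[:k]
    return sqrt(sum(nearest_sq) / k)

cloud = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]   # one outlier
print(dtm((0.0, 0.0), cloud, m=0.5))   # small value inside the dense part
print(dtm((5.0, 5.0), cloud, m=0.5))   # large value: the outlier is isolated
```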
Reilles, Antoine. "Réécriture et compilation de confiance". Thesis, Vandoeuvre-les-Nancy, INPL, 2006. http://www.theses.fr/2006INPL084N/document.
Most computer processes involve the notion of transformation; this is in particular the case of compilation processes. In this thesis, we are interested in providing tools and methods, based on rewriting, that increase the confidence we can place in those processes. We first develop a framework used to validate the compilation of matching constructs, building a formal proof of the validity of the compilation process, along with a witness of this proof, for each run of the compiler. Then, in order to allow one to write complex transformations safely, we propose a tool that generates an efficient data structure integrating algebraic invariants, as well as a strategy language that controls the application of transformations. These results can be seen as a first step towards generic and safe methods for the development of trustworthy transformations.
Carriere, Mathieu. "On Metric and Statistical Properties of Topological Descriptors for geometric Data". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS433/document.
In the context of supervised machine learning, finding alternate representations, or descriptors, for data is of primary interest since it can greatly enhance the performance of algorithms. Among them, topological descriptors focus on and encode the topological information contained in geometric data. One advantage of using these descriptors is that they enjoy many good and desirable properties due to their topological nature; for instance, they are invariant to continuous deformations of the data. However, their main drawback is that they often lack the structure and operations required by most machine learning algorithms, such as means or scalar products. In this thesis, we study the metric and statistical properties of the most common topological descriptors: persistence diagrams and Mappers. In particular, we show that the Mapper, which is empirically unstable, can be stabilized with an appropriate metric, which we later use to compute confidence regions and to tune its parameters automatically. Concerning persistence diagrams, we show that scalar products can be defined with kernel methods, by defining two kernels, or embeddings, into finite- and infinite-dimensional Hilbert spaces.
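One simple way to obtain a scalar product on persistence diagrams is to sum a Gaussian kernel over all pairs of diagram points. This is an illustrative construction only (practical kernels, unlike this one, also account for the diagonal of the diagram), not necessarily the kernels studied in the thesis:

```python
from math import exp

def diagram_kernel(d1, d2, sigma=1.0):
    """d1, d2: lists of (birth, death) points of two persistence diagrams."""
    return sum(exp(-((b1 - b2) ** 2 + (e1 - e2) ** 2) / (2 * sigma ** 2))
               for (b1, e1) in d1 for (b2, e2) in d2)

print(diagram_kernel([(0.0, 1.0), (0.2, 0.9)], [(0.0, 1.1)]))
```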
Reilles, Antoine. "Réécriture et compilation de confiance". Electronic Thesis or Diss., Vandoeuvre-les-Nancy, INPL, 2006. http://www.theses.fr/2006INPL084N.
Lagarde, Guillaume. "Contributions to arithmetic complexity and compression". Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC192/document.
This thesis explores two territories of computer science: complexity and compression. More precisely, in a first part we investigate the power of non-commutative arithmetic circuits, which compute multivariate non-commutative polynomials. For that, we introduce various models of computation that are restricted in the way they are allowed to compute monomials; these models generalize previous ones that have been widely studied, such as algebraic branching programs. The results are of three different types. First, we give strong lower bounds on the number of arithmetic operations needed to compute some polynomials, such as the determinant or the permanent. Second, we design a deterministic polynomial-time algorithm to solve the white-box polynomial identity problem. Third, we exhibit a link between automata theory and non-commutative arithmetic circuits that allows us to derive some old and new tight lower bounds for some classes of non-commutative circuits, using a measure based on the rank of a so-called Hankel matrix. A second part is concerned with the analysis of the data compression algorithm called Lempel-Ziv. Although this algorithm is widely used in practice, we know little about its stability. Our main result shows that an infinite word compressible by LZ'78 can become incompressible by adding a single bit in front of it, thus closing a question proposed by Jack Lutz in the late 90s under the name "one-bit catastrophe". We also give tight bounds on the maximal possible variation between the compression ratio of a finite word and that of its perturbation, when one bit is added in front of it.
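The LZ'78 parsing at the heart of the second part is easy to state: greedily extend the current phrase while it stays in the dictionary, then emit it and start afresh. A minimal sketch, with a toy illustration of how prepending one symbol shifts the whole parse (the thesis result itself concerns infinite words and compression ratios, which this toy does not reproduce):

```python
def lz78_phrases(word: str) -> list:
    dictionary, phrases, current = {""}, [], ""
    for symbol in word:
        if current + symbol in dictionary:
            current += symbol              # extend the current phrase
        else:
            phrases.append(current + symbol)
            dictionary.add(current + symbol)
            current = ""                   # start a new phrase
    if current:
        phrases.append(current)            # possibly incomplete last phrase
    return phrases

w = "0" * 20
print(lz78_phrases(w))        # ['0', '00', '000', '0000', '00000', '00000']
print(lz78_phrases("1" + w))  # the parse of the zeros is shifted by one bit
```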
Eytard, Jean-Bernard. "A tropical geometry and discrete convexity approach to bilevel programming : application to smart data pricing in mobile telecommunication networks". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX089/document.
Bilevel programming deals with nested optimization problems involving two players. A leader announces a decision to a follower, who responds by selecting a solution of an optimization problem whose data depend on this decision (the low-level problem). The optimal decision of the leader is the solution of another optimization problem whose data depend on the follower's response (the high-level problem). When the follower's response is not unique, one distinguishes between optimistic and pessimistic bilevel problems, in which the leader takes into account the best or worst possible response of the follower. Bilevel problems are often used to model pricing problems. We are interested in applications in which the leader is a seller who announces a price, and the follower models the behavior of a large number of customers who determine their consumptions depending on this price. Hence, the dimension of the low level is large. However, most bilevel problems are NP-hard, and in practice there is no general method to solve large-scale bilevel problems efficiently. In this thesis, we introduce a new approach to tackle bilevel programming. We assume that the low-level problem is a linear program, in continuous or discrete variables, whose cost function is determined by the leader. Then, the follower's responses correspond to the cells of a special polyhedral complex, associated to a tropical hypersurface. This is motivated by recent applications of tropical geometry to modeling the behavior of economic agents. We use the duality between this polyhedral complex and a regular subdivision of an associated Newton polytope to introduce a decomposition method, in which one solves a series of subproblems associated to the different cells of the complex. Using results about the combinatorics of subdivisions, we show that this leads to an algorithm that solves a wide class of bilevel problems in a time that is polynomial in the dimension of the low-level problem when the dimension of the high-level problem is fixed. Then, we identify special structures of bilevel problems for which this complexity bound can be improved. This is the case when the leader's cost function depends only on the follower's response: we show that the optimistic bilevel problem can then be solved in polynomial time. This applies in particular to high-dimensional instances in which the data satisfy certain discrete convexity properties. We also show that the solutions of such bilevel problems are limits of competitive equilibria. In the second part of this thesis, we apply this approach to a price incentive problem in mobile telecommunication networks. The aim, for Internet service providers, is to use pricing schemes to encourage the different users to shift their data consumption in time (and so, owing to their mobility, also in space), in order to reduce congestion peaks. This can be modeled as a large-scale bilevel problem. We show that a simplified case can be solved in polynomial time by applying the previous decomposition approach together with graph theory and discrete convexity results. We use these ideas to develop a heuristic method that applies to the general case. We implemented and validated this method on real data provided by Orange.
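A small illustration of the tropical-geometry viewpoint used above: each monomial of a min-plus tropical polynomial is an affine function, and the cell of the associated polyhedral complex containing a point is determined by which monomial attains the minimum there. The polynomial below is an arbitrary example, not one from the thesis:

```python
def tropical_cell(monomials, x):
    """monomials: list of (coefficient, exponent vector); min-plus value."""
    values = [c + sum(e * xi for e, xi in zip(exps, x))
              for c, exps in monomials]
    best = min(values)
    return best, values.index(best)   # value and index of the active monomial

poly = [(0.0, (2, 0)), (1.0, (1, 1)), (3.0, (0, 0))]  # min(2*x1, 1+x1+x2, 3)
print(tropical_cell(poly, (0.5, 0.2)))                # (1.0, 0)
```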
Charignon, Cyril. "Immeubles affines et groupes de Kac-Moody". Electronic Thesis or Diss., Nancy 1, 2010. http://www.theses.fr/2010NAN10138.
This work aims at generalizing Bruhat-Tits theory to Kac-Moody groups over local fields. We thus try to construct a geometric space on which such a group acts and which resembles the Bruhat-Tits building of a reductive group. The first part stays within Bruhat-Tits theory, as it presents a family of compactifications of an ordinary affine building. It is in the second part that we move to Kac-Moody theory, using the first part as a guide. The spaces obtained do not satisfy all the requirements of a building; they are called (bounded) hovels ("masures" in French).
Malakhovski, Ian. "Sur le pouvoir expressif des structures applicatives et monadiques indexées". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30118.
It is well known that very simple theoretical constructs such as the Either (the type-theoretic equivalent of the logical "or" operator), State (composable state transformers), Applicative (generalized function application), and Monad (generalized sequential program composition) structures (as they are named in Haskell) cover a huge chunk of what is usually needed to elegantly express most computational idioms used in conventional programs. However, it is conventionally argued that there are several classes of commonly used idioms that do not fit well within those structures, the most notable examples being transformations between trees (data types, usually argued to require either generalized pattern matching or a heavy metaprogramming infrastructure) and exception handling (usually argued to require special language and run-time support). This work aims to show that many of those idioms can, in fact, be expressed by reusing those well-known structures with minor (if any) modifications. In other words, the purpose of this work is to apply the KISS ("Keep It Stupid Simple") and/or Occam's razor principles to the algebraic structures used to solve common programming problems. Technically speaking, this work aims to show that natural generalizations of the Applicative and Monad type classes of Haskell, combined with the ability to take Cartesian products of them, produce a very simple common framework for expressing many practically useful things. Some instances of this framework are very convenient novel ways to express common programming ideas, while others are usually classified as effect systems. On that latter point, if one generalizes the presented instances into an approach to the design of effect systems in general, then the overall structure of such an approach can be thought of as an almost syntactic framework on top of which different effect systems adhering to the general structure of the "marriage" framework can be expressed. (This work does not go too far into the latter, since it is mainly motivated by examples that can be immediately applied to Haskell practice.) After the fact, however, these technical observations are completely unsurprising: Applicative and Monad are generalizations of functional and linear program composition respectively, so, naturally, Cartesian products of these two structures ought to cover a lot of what programs usually do.
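As a reminder of the structures named above, here is the Either idiom transliterated into Python (the thesis's setting is Haskell; this sketch only mirrors the interface, with bind short-circuiting on the first Left):

```python
class Either:
    def __init__(self, value, is_right):
        self.value, self.is_right = value, is_right

    @staticmethod
    def right(v):
        return Either(v, True)

    @staticmethod
    def left(err):
        return Either(err, False)

    def bind(self, f):
        """Sequential composition that short-circuits on the first Left."""
        return f(self.value) if self.is_right else self

def safe_div(a, b):
    return Either.right(a / b) if b else Either.left("division by zero")

result = Either.right(10).bind(lambda x: safe_div(x, 2)) \
                         .bind(lambda y: safe_div(y, 0))
print(result.value)   # "division by zero", propagated without exceptions
```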
Kachanovich, Siargey. "Maillage de variétés avec les triangulations de Coxeter". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4072.
This thesis addresses the manifold meshing problem in arbitrary dimension. Intuitively, given a manifold (such as the interior of a torus) embedded in a space like R^9, our goal is to build a mesh of this manifold (for example, a triangulation). We propose three principal contributions. The central one is the manifold tracing algorithm, which constructs a piecewise-linear approximation of a given compact smooth manifold of dimension m in the Euclidean space R^d, for any m and d. The proposed algorithm operates in an ambient triangulation T that is assumed to be an affine transformation of the Freudenthal-Kuhn triangulation of R^d. It is output-sensitive, and its time complexity per computed element in the output depends only polynomially on the ambient dimension d. It only requires the manifold to be accessed via an intersection oracle that answers whether a given (d − m)-dimensional simplex in R^d intersects the manifold or not. As such, this framework is general, as it covers many popular manifold representations, such as the level set of a multivariate function or manifolds given by a point cloud. Our second contribution is a data structure that represents the Freudenthal-Kuhn triangulation of R^d. At any moment during the execution, this data structure requires at most O(d^2) storage. With this data structure, we can access in a time-efficient way the simplex that contains a given point, as well as the faces and the cofaces of a given simplex. The simplices in the Freudenthal-Kuhn triangulation of R^d are encoded using a new representation that generalizes the representation of the d-dimensional simplices introduced by Freudenthal. Lastly, we provide a geometrical and combinatorial study of the Freudenthal-Kuhn triangulations and the closely related Coxeter triangulations. For Coxeter triangulations, we establish that the quality of the simplices in all d-dimensional Coxeter triangulations is O(1/sqrt(d)) of the quality of the d-dimensional regular simplex. We further investigate the Delaunay property for Coxeter triangulations. Finally, we consider an extension of the Delaunay property, namely protection, which is a measure of non-degeneracy of a Delaunay triangulation; in particular, one family of Coxeter triangulations achieves protection O(1/d^2). We conjecture that both bounds are optimal for triangulations in Euclidean space.
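The point-location primitive mentioned above admits a strikingly simple description in the (untransformed) Freudenthal-Kuhn triangulation: the simplex containing a point is read off from the integer parts of its coordinates and the decreasing order of their fractional parts, a classic fact due to Freudenthal and Kuhn. A minimal sketch:

```python
from math import floor

def containing_simplex(x):
    """Vertices of the Freudenthal-Kuhn simplex of R^d containing point x."""
    base = [floor(c) for c in x]
    order = sorted(range(len(x)), key=lambda i: x[i] - base[i], reverse=True)
    vertices, v = [tuple(base)], list(base)
    for i in order:        # one unit step per coordinate, largest fraction first
        v[i] += 1
        vertices.append(tuple(v))
    return vertices        # the d+1 vertices of the containing simplex

print(containing_simplex((0.3, 1.7, -0.2)))
# [(0, 1, -1), (0, 1, 0), (0, 2, 0), (1, 2, 0)]
```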
Charignon, Cyril. "Immeubles affines et groupes de Kac-Moody". Phd thesis, Nancy 1, 2010. http://tel.archives-ouvertes.fr/tel-00497961.
Dao, Ngoc Bich. "Réduction de dimension de sac de mots visuels grâce à l'analyse formelle de concepts". Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS010/document.
In several scientific fields, such as statistics, computer vision and machine learning, the reduction of redundant and/or irrelevant information in the data description (dimension reduction) is an important step. This process comprises two categories: feature extraction and feature selection, of which feature selection in unsupervised learning is still an open question. In this manuscript, we discuss feature selection on image datasets using Formal Concept Analysis (FCA), with a focus on lattice structure and lattice theory. The images in a dataset are described as sets of visual words by the bag of visual words model. Two algorithms are proposed in this thesis to select relevant features, and they can be used in both unsupervised and supervised learning. The first algorithm, RedAttsSansPerte, is based on lattice structure and lattice theory to ensure its ability to remove redundant features using the precedence graph. The formal definition of the precedence graph is given in this thesis; we also demonstrate its properties and the relationship between this graph and the AC-poset. Experimental results indicate that the RedAttsSansPerte algorithm reduces the size of the feature set while maintaining performance under evaluation by classification. Secondly, the RedAttsFloue algorithm, an extension of RedAttsSansPerte, is proposed; this extension uses the fuzzy precedence graph, whose formal definition and properties are demonstrated in this manuscript. The RedAttsFloue algorithm removes redundant and irrelevant features while retaining relevant information, according to the flexibility threshold of the fuzzy precedence graph; the quality of the retained information is evaluated by classification. The RedAttsFloue algorithm is suggested to be more robust than RedAttsSansPerte in terms of reduction.
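As a toy illustration of attribute redundancy in a formal context (the thesis's RedAttsSansPerte relies on a precedence graph; this sketch only handles the trivial case of attributes with identical extents), one could write:

```python
# Two attributes (visual words) with the same extent are indistinguishable
# in the concept lattice, so one representative per extent suffices.
def remove_duplicate_attributes(context):
    """context: dict mapping attribute -> frozenset of objects (its extent)."""
    kept, seen = {}, set()
    for attribute, extent in sorted(context.items()):
        if extent not in seen:
            seen.add(extent)
            kept[attribute] = extent
    return kept

ctx = {"w1": frozenset({1, 2, 3}), "w2": frozenset({1, 2, 3}),
       "w3": frozenset({2, 3})}
print(remove_duplicate_attributes(ctx))   # "w2" dropped: same extent as "w1"
```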
Ogasawara, Eduardo. "Une Approche Algébrique pour les Workflows Scientifiques Orientés-Données". Phd thesis, 2011. http://tel.archives-ouvertes.fr/tel-00653661.