Dissertations / Theses on the topic 'Non-arithmetic'

Consult the top 23 dissertations / theses for your research on the topic 'Non-arithmetic.'


1

Ziyang, Wang. "Non-binary Distributed Arithmetic Coding." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32318.

Full text
Abstract:
Distributed source coding (DSC) is a fundamental concept in information theory. It refers to the distributed compression of correlated but geographically separated sources. With the development of wireless sensor networks, DSC has attracted great research interest in recent years [26]. Although many channel-code-based DSC schemes have been developed (e.g., those based on turbo codes [11] and LDPC codes [20]), this thesis focuses on the arithmetic-coding-based approach, namely Distributed Arithmetic Coding (DAC), owing to its simplicity in encoding [8]. To date, most of the proposed DAC approaches deal with binary sources and cannot handle non-binary cases; little research has been done to extend DAC to non-binary sources. This work aims at developing efficient DAC techniques for the compression of non-binary sources. The key idea of DAC is to represent the source symbols by overlapping intervals, as opposed to conventional arithmetic coding, where the intervals representing the symbols do not overlap. However, the design of the overlapping intervals has to date been entirely heuristic. The first part of this work is therefore a thorough study of various interval-overlapping rules in binary DAC, aimed at understanding how these rules affect the performance of DAC. The insight acquired in this study is used in the second part, where two DAC algorithms are proposed to compress non-binary non-uniform sources. The first algorithm applies a designed overlap structure in the DAC process, while the second converts a non-binary sequence into a binary sequence by Huffman coding and encodes the result with binary DAC. Simulation studies demonstrate the efficiency of the two proposed algorithms over a variety of source parameter settings.
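The contrast the abstract draws, disjoint sub-intervals in conventional arithmetic coding versus deliberately enlarged, overlapping ones in DAC, can be sketched in a few lines. This is an illustrative toy for a binary source; the `overlap` rule and the probability split are hypothetical, not the thesis's designed overlap structure:

```python
# Toy contrast between conventional arithmetic coding and DAC-style
# overlapping intervals for a binary source (illustrative only).

def narrow(low, high, sub_lo, sub_hi):
    """Map a sub-interval of [0,1) into the current coding interval."""
    width = high - low
    return low + width * sub_lo, low + width * sub_hi

def encode(bits, p0, overlap=0.0):
    """Encode a bit string; overlap > 0 gives DAC-style enlarged intervals.
    Symbol 0 gets [0, p0 + overlap), symbol 1 gets [p0 - overlap, 1)."""
    low, high = 0.0, 1.0
    for b in bits:
        if b == 0:
            low, high = narrow(low, high, 0.0, min(1.0, p0 + overlap))
        else:
            low, high = narrow(low, high, max(0.0, p0 - overlap), 1.0)
    return low, high

# With overlap = 0 this is plain arithmetic coding: the final interval width is
# the product of the symbol probabilities, so fewer bits are needed for wider
# intervals. With overlap > 0 the final interval is wider (lower rate), but
# decoding becomes ambiguous and must be resolved with side information from
# the correlated source -- which is exactly the DAC trade-off.
lo, hi = encode([0, 1, 0], p0=0.5, overlap=0.0)
```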
APA, Harvard, Vancouver, ISO, and other styles
2

Lorenzo García, Elisa. "Arithmetic properties of non-hyperelliptic genus 3 curves." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/279314.

Abstract:
This thesis explores the explicit computation of twists of curves. We develop an algorithm for computing the twists of a given curve assuming that its automorphism group is known, and in the particular case in which the curve is non-hyperelliptic we show how to compute equations of the twists. The algorithm is based on a correspondence that we establish between the set of twists and the set of solutions of a certain Galois embedding problem. In general it is not known how to compute all the solutions of a Galois embedding problem; throughout the thesis we give some ideas for solving such problems. The twists of curves of genus at most 2 are well known: while the genus 0 and 1 cases go back a long way, the genus 2 case is due to the work of Cardona and Quer. All curves of genus 0, 1 or 2 are hyperelliptic, whereas for genus greater than 2 almost all curves are non-hyperelliptic. As an application of our algorithm we give a classification, with equations, of the twists of all plane quartic curves, that is, the non-hyperelliptic genus 3 curves, defined over any number field k. The first step in computing such twists is a classification of the plane quartic curves defined over a given number field k; the starting point for this is Henn's classification of plane quartic curves with non-trivial automorphism group over the complex numbers. One example of the importance of studying the set of twists of a curve is that it has proven very useful for a better understanding of the generalized Sato-Tate conjecture; see the work of Fité, Kedlaya and Sutherland. We give a proof of the Sato-Tate conjecture for the twists of the Fermat and Klein quartics as a corollary of a deep result of Johansson, and we compute their Sato-Tate groups and Sato-Tate distributions.
Continuing the study of the generalized Sato-Tate conjecture, in the last chapter of this thesis we explore the conjecture for the Fermat hypersurfaces X_{n}^{m}: x_{0}^{m}+...+x_{n+1}^{m} = 0. We show explicitly how to compute the Sato-Tate groups and Sato-Tate distributions of these Fermat hypersurfaces, and we prove the conjecture over the rational numbers for n=1 and over the cyclotomic field of m-th roots of unity for n greater than 1.
3

Smith, Mark Jason. "Non-linear echo cancellation based on transpose distributed arithmetic adaptive filters." Thesis, University of Edinburgh, 1987. http://hdl.handle.net/1842/12986.

4

Aslett, Helen J. "The function and form of the non-verbal analogue magnitude code in arithmetic processing." Thesis, University of York, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270065.

5

Vollmer, Philipp [author], and Klaus [academic supervisor] Künnemann. "Arithmetic Divisors on Products of Curves over non-Archimedean Fields / Philipp Vollmer. Betreuer: Klaus Künnemann." Regensburg : Universitätsbibliothek Regensburg, 2016. http://d-nb.info/1110148542/34.

6

Beber, Björn [author]. "Improving interpolants of non-convex polyhedra with linear arithmetic and probably approximately correct learning for bounded linear arrangements / Björn Beber." Mainz : Universitätsbibliothek Mainz, 2018. http://d-nb.info/1160111235/34.

7

Turchetti, Danièle. "Contributions to arithmetic geometry in mixed characteristic : lifting covers of curves, non-archimedean geometry and the l-modular Weil representation." Thesis, Versailles-St Quentin en Yvelines, 2014. http://www.theses.fr/2014VERS0022/document.

Abstract:
In this thesis, we study the interplay between positive and zero characteristic. First, we deal with the local lifting problem for group actions on curves. We give necessary conditions for the existence of liftings of certain actions of Z/pZ x Z/pZ. Then, for an action of a general finite group, we study the associated Hurwitz tree, showing that every Hurwitz tree admits a canonical metric embedding in the Berkovich closed unit disc and that its Hurwitz data can be described analytically. In the last chapter, we define an analogue of the Weil representation with coefficients in an integral domain, and show that this representation satisfies the same properties as in the case of complex coefficients.
8

Antoniou, Austin A. "On Product and Sum Decompositions of Sets: The Factorization Theory of Power Monoids." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1586355818066608.

9

Salas, Donoso Ignacio Antonio. "Packing curved objects with interval methods." Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0277/document.

Abstract:
A common problem in logistics, warehousing, industrial manufacturing, newspaper paging, and energy management in data centers is to allocate items within a given enclosing space, or container. This is called a packing problem. Many works in the literature handle the packing problem by considering specific shapes or using polygonal approximations. The goal of this thesis is to allow arbitrary shapes, as long as they can be described mathematically (by an algebraic inequality or a parametric function); in particular, the shapes can be curved and non-convex. This is what we call the generic packing problem. We propose a framework for solving this generic packing problem based on interval techniques. Its main ingredients are: an evolutionary algorithm that places the objects; an overlapping function minimized by the evolutionary algorithm (the violation cost); and an overlapping region, a pre-computed set of the relative configurations of one object (with respect to another) that create an overlap. This overlapping region is computed numerically and separately for each pair of objects; the underlying algorithm also depends on whether the objects are described by inequalities or by parametric curves. Preliminary experiments validate the approach and show the potential of this framework.
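The violation-cost ingredient described above can be illustrated with a toy example. Circles stand in for the curved objects, and the names (`overlap_cost`, `total_cost`) and the penetration-depth penalty are illustrative assumptions, not the thesis's actual framework:

```python
import math

# Illustrative sketch of a scalar "violation cost" for two curved objects,
# here discs given by algebraic inequalities, of the kind an evolutionary
# placement loop would minimize.

def overlap_cost(c1, r1, c2, r2):
    """Penetration depth of two discs; 0 when they do not overlap."""
    d = math.dist(c1, c2)
    return max(0.0, (r1 + r2) - d)

def total_cost(placement, radii):
    """Sum of pairwise violation costs; a feasible packing has cost 0."""
    cost = 0.0
    for i in range(len(placement)):
        for j in range(i + 1, len(placement)):
            cost += overlap_cost(placement[i], radii[i],
                                 placement[j], radii[j])
    return cost
```

A genuine solver in the spirit of the thesis would precompute, with interval arithmetic, the "overlapping region" of relative configurations for each pair of shapes; discs merely make that region analytic, which keeps the sketch short.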
10

Chakhari, Aymen. "Évaluation analytique de la précision des systèmes en virgule fixe pour des applications de communication numérique." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S059/document.

Abstract:
Traditionally, accuracy evaluation is performed through two different approaches. The first is to simulate the fixed-point implementation in order to assess its performance; such simulation-based approaches require large computing capacity and lead to prohibitive evaluation times. To avoid this problem, the work in this thesis focuses on accuracy evaluation through analytical models, which describe the behaviour of the system through analytical expressions for a defined precision metric. Several analytical models have previously been proposed to evaluate the fixed-point accuracy of linear time-invariant (LTI) systems and of non-LTI non-recursive and recursive linear systems. The objective of this thesis is to propose analytical models for evaluating the accuracy of digital communication systems and of digital signal processing algorithms composed of non-smooth and non-linear operators. In a first step, analytical models are provided for evaluating the accuracy of decision operators and of their iterations and cascades; the characterization of quantization-error propagation through a cascade of decision operators is the basis of these models. The models are applied to evaluate the accuracy of the sphere decoding algorithm SSFE (Selective Spanning with Fast Enumeration) used in MIMO (Multiple-Input Multiple-Output) transmission systems.
In a second step, the accuracy of iterative structures of decision operators is addressed. The quantization errors caused by fixed-point arithmetic are characterized, leading to analytical models for the accuracy of digital signal processing applications that include iterative decision structures. A second approach, based on estimating an upper bound on the decision error probability in the convergence mode, is proposed to reduce the evaluation time. These models are applied to evaluating the fixed-point specification of the decision feedback equalizer (DFE). Estimates of resource and power consumption on an FPGA are then obtained using the Xilinx tools, so as to choose data word-lengths that achieve a good accuracy/cost trade-off. The last part of this work concerns fixed-point modelling of iterative decoding algorithms: models of the turbo decoding and LDPC (Low-Density Parity-Check) decoding algorithms are given. This approach takes into account the particular structure of these algorithms, which implies that the quantities computed in the decoder, and the operations, are quantized following an iterative approach. Furthermore, the fixed-point representation used, based on the dynamic range and the total number of bits, differs from the conventional representation that specifies the numbers of bits allocated to the integer and fractional parts; choosing the dynamic range in this way offers more flexibility, since it is no longer limited to powers of two.
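As a minimal illustration of why non-smooth decision operators are delicate under fixed-point conversion (a toy sketch under simplified assumptions, not one of the thesis's analytical models): truncating a value to b fractional bits perturbs it by at most the quantization step 2^-b, which can flip the output of a threshold decision. The threshold 0.3 and the helper names are hypothetical.

```python
import math

def quantize(x, b):
    """Truncate x to b fractional bits (truncation toward -infinity)."""
    step = 2.0 ** -b
    return math.floor(x / step) * step

def decision(x, threshold=0.3):
    """A non-smooth decision operator: hard thresholding."""
    return x >= threshold

def decision_flips(samples, b):
    """Count samples whose decision changes after quantization."""
    return sum(1 for x in samples
               if decision(x) != decision(quantize(x, b)))
```

For example, with b = 2 (step 0.25) the sample 0.31 truncates to 0.25 and its decision flips, while 0.8 and -0.2 are unaffected; it is exactly the probability of such flips, propagated through cascades and iterations, that the analytical models above bound.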
11

Deng, Erya. "Conception et développement de circuits logiques de faible consommation et fiables basés sur des jonctions tunnel magnétiques à écriture par transfert de spin." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT012/document.

Abstract:
With the shrinking of CMOS (complementary metal-oxide-semiconductor) technology nodes, static and dynamic power increase dramatically and have become one of the main challenges, owing to the increasing leakage current and the long transfer distance between memory and logic chips. In the past decades, spintronic devices such as the spin-transfer-torque magnetic tunnel junction (STT-MTJ) have been widely investigated to overcome the static-power issue thanks to their non-volatility. A hybrid logic-in-memory (LIM) architecture allows spintronic devices to be fabricated above the CMOS circuit plane, thereby reducing transfer latency and dynamic power dissipation. This thesis focuses on the design of hybrid MTJ/CMOS logic circuits and memories for low-power computing systems. Using a compact MTJ model and the STMicroelectronics design kit for standard CMOS design, we investigate hybrid MTJ/CMOS circuits for single-bit and multi-bit reading and writing. Optimization methods are also introduced to improve reliability, which is extremely important for logic circuits, where error-correction blocks cannot easily be embedded without sacrificing performance or adding circuit area. We extend the multi-context hybrid MTJ/CMOS structure to memory design: a magnetic random access memory (MRAM) with simple peripheral circuits is designed. Based on the LIM concept, non-volatile logic/arithmetic circuits are designed that integrate MTJs not only as storage elements but also as logic operands. First, we design and theoretically analyze non-volatile logic gates (NVLGs) including NOT, AND, OR and XOR. Then, 1-bit and 8-bit non-volatile full adders (NVFAs), the basic elements of arithmetic operations, are proposed and compared with a traditional CMOS-based full adder. The effect of CMOS transistor sizing and of the MTJ parameters on NVFA performance is studied. Furthermore, we optimize the NVFA at two levels.
At the structural level, an ultra-high-reliability voltage-mode sensing circuit is used to store the operand of the NVFA. At the device level, we propose replacing the two-terminal MTJ with a three-terminal MTJ switched by spin-Hall-assisted STT, because of its shorter writing time and lower power consumption. Based on the NVLGs and NVFAs, other logic circuits can be built, for instance a non-volatile subtractor. Finally, a non-volatile content-addressable memory (NVCAM) is proposed, in which two magnetic decoders select a word line to be read or written and save the corresponding search location in a non-volatile state.
12

Surampudi, Venkata Prathyusha. "Improved Iterative Truncated Arithmetic Mean Filter." ScholarWorks@UNO, 2018. https://scholarworks.uno.edu/td/2514.

Abstract:
This thesis discusses image filtering techniques, with emphasis on the mean filter, the median filter, and different versions of the iterative truncated arithmetic mean (ITM) filter. Specifically, we review in detail the ITM algorithms (ITM1 and ITM2) proposed by Xudong Jiang. Although filtering can reduce noise in an image, it usually also smooths or otherwise distorts image edges and fine details, so maintaining a proper trade-off between noise reduction and edge/detail distortion is key. In this thesis, an improvement over Jiang's ITM filters, named ITM3, is proposed and tested for different types of noise and different images. Each of the two original ITM filters performs better than the other under different conditions; experimental results demonstrate that the proposed filter, ITM3, provides a better trade-off than ITM1 and ITM2 in attenuating different types of noise while preserving fine image details and edges.
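The core iteration behind the ITM family reviewed above can be sketched in a few lines. This is a rough illustration in the spirit of Jiang's ITM1, with a simplified threshold and stopping rule (the iteration count and function name are assumptions, not the thesis's ITM3):

```python
# Rough sketch of an iterative truncated arithmetic mean on one filter window:
# samples far from the current mean are truncated toward it, which drives the
# output from the arithmetic mean toward the median, giving robustness to
# impulse noise while retaining mean-like smoothing.

def itm(window, iterations=10):
    x = list(window)
    for _ in range(iterations):
        mu = sum(x) / len(x)
        tau = sum(abs(v - mu) for v in x) / len(x)   # dynamic threshold
        if tau == 0:
            break
        # truncate outliers into the band [mu - tau, mu + tau]
        x = [min(max(v, mu - tau), mu + tau) for v in x]
    return sum(x) / len(x)
```

On a window containing an impulse, e.g. `[1, 1, 1, 1, 100]`, the plain mean is 20.8 while the iterated truncated mean ends up close to 1, which is the behaviour that makes ITM filters attractive against salt-and-pepper noise.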
13

Braconnier, Thierry. "Sur le calcul des valeurs propres en précision finie." Nancy 1, 1994. http://www.theses.fr/1994NAN10023.

Abstract:
We developed a code for computing the eigenvalues of large non-symmetric matrices using the Arnoldi-Chebyshev method. A study was carried out to highlight the role of non-normality in spectral instability and in the numerical stability of eigenvalue algorithms. Tools such as perturbation methods combined with statistical methods were tested in order to provide qualitative information about the spectrum under study. These tools make it possible to understand the numerical behaviour, in finite precision, of the problem being treated in cases where direct computation fails.
14

Maïga, Moussa. "Surveillance préventive des systèmes hybrides à incertitudes bornées." Thesis, Orléans, 2015. http://www.theses.fr/2015ORLE2010/document.

Abstract:
This thesis is dedicated to the development of generic algorithms for the set-membership observation of the continuous state and the discrete mode of hybrid dynamical systems in order to achieve fault detection. This thesis is organized into two parts. In the first part, we have proposed a fast and effective method for the set-membership guard crossing. It consists in carrying out bisection in the time direction only and then makes several contractors working simultaneously to reduce the domain of state vectors located on the guard during the study time slot. Then, we proposed a method for merging trajectories based on zonotopic enclosures. These methods, used together, allowed us to characterize in a guaranteed way the set of all hybrid state trajectories generated by an uncertain hybrid dynamical system on a finite time horizon. The second part focuses on set-membership methods for the parameters or the hybrid state (mode and continuous state) of a hybrid dynamical system in a bounded error framework. We started first by describing fault detection methods for hybrid systems using the parametric approach and the hybrid observer approach. Then, we have described two methods for performing fault detection tasks. We have proposed a method for computing in a guaranteed way all the parameters consistent with the hybrid dynamical model, the actual data and the prior error bound, by using our nonlinear hybrid reachability method and an algorithm for partition which we denote SIVIA-H. Then, for hybrid state estimation, we have proposed a method based on a predictor-corrector, which is also built on top of our non-linear method for hybrid reachability
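The "bisection in the time direction only" idea can be sketched in a few lines. The enclosure function, the scalar guard, and the monotone-crossing assumption below are illustrative simplifications, not the thesis's actual algorithm, which additionally runs contractors on the state set located on the guard:

```python
def crossing_slot(enclose, guard, t0, t1, tol=1e-9):
    """Bisect only the time axis to bracket the instant at which a flow
    crosses `guard` from below.  `enclose(t)` must return a guaranteed
    interval (lo, hi) containing the true state at time t, and the flow
    is assumed to cross the guard once, monotonically."""
    a, b = t0, t1
    while b - a > tol:                 # latest time certainly below the guard
        m = 0.5 * (a + b)
        if enclose(m)[1] < guard:
            a = m
        else:
            b = m
    t_lo = a
    a, b = t0, t1
    while b - a > tol:                 # earliest time certainly past the guard
        m = 0.5 * (a + b)
        if enclose(m)[0] > guard:
            b = m
        else:
            a = m
    t_hi = b
    return t_lo, t_hi                  # the crossing lies inside [t_lo, t_hi]

# flow x(t) = t enclosed with a +/- 1e-6 error bound, guard at x = 0.25
t_lo, t_hi = crossing_slot(lambda t: (t - 1e-6, t + 1e-6), 0.25, 0.0, 1.0)
```

On the time slot found this way, the thesis then makes several contractors collaborate to shrink the set of states lying on the guard; that step is not modeled here.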
APA, Harvard, Vancouver, ISO, and other styles
15

Yaakub, Abdul Razak Bin. "Computer solution of non-linear integration formula for solving initial value problems." Thesis, Loughborough University, 1996. https://dspace.lboro.ac.uk/2134/25381.

Full text
Abstract:
This thesis is concerned with the numerical solution of initial value problems in ordinary differential equations and covers single step integration methods. Specifically, its main focus is to study the various aspects of numerical methods of non-linear integration formulae with a variety of means based on the Contraharmonic mean (C.M) (Evans and Yaakub [1995]), the Centroidal mean (C.M) (Yaakub and Evans [1995]) and the Root-Mean-Square (RMS) (Yaakub and Evans [1993]) for solving initial value problems. It includes a study of the applications of the second order C.M method for parallel implementation of extrapolation methods for ordinary differential equations with the ExDaTa schedule by Bahoshy [1992]. Another important topic presented in this thesis is that a fifth order five-stage explicit Runge-Kutta method or weighted Runge-Kutta formula (Evans and Yaakub [1996]) exists, which is contrary to Butcher [1987] and the theorem in Lambert ([1991], pp. 181). The thesis is organized as follows. An introduction to initial value problems in ordinary differential equations and to parallel computers and software is given in Chapter 1; the basic preliminaries and fundamental concepts in mathematics, an algebraic manipulation package (e.g., Mathematica) and basic parallel processing techniques are discussed in Chapter 2. Following in Chapter 3 is a survey of single step methods to solve ordinary differential equations. In this chapter, several single step methods including the Taylor series method, the Runge-Kutta method and a linear multistep method for non-stiff and stiff problems are also considered. Chapter 4 gives a new Runge-Kutta formula for solving initial value problems using the Contraharmonic mean (C.M), the Centroidal mean (C.M) and the Root-Mean-Square (RMS). An error and stability analysis for this variety of means and numerical examples are also presented.
Chapter 5 discusses the parallel implementation, on the Sequent 8000 parallel computer, of the Runge-Kutta Contraharmonic mean (C.M) method with extrapolation procedures using explicit data task assignment scheduling (EXDATA) strategies. A new Runge-Kutta RK(4,4) method is introduced, and the theory and analysis of its properties are investigated and compared with the more popular RKF(4,5) method in Chapter 6. Chapter 7 presents a new integration method with error control for the solution of a special class of second order ODEs. In Chapter 8, a new weighted Runge-Kutta fifth order method with 5 stages is introduced. By comparison with the currently recommended RK4(5) Merson and RK5(6) Nystrom methods, the new method gives improved results. Chapter 9 proposes a new fifth order Runge-Kutta type method for solving oscillatory problems by the use of trigonometric polynomial interpolation, which extends the earlier work of Gautschi [1961]. An analysis of the convergence and stability of the new method is given, with comparison with the standard Runge-Kutta methods. Finally, Chapter 10 summarises and presents conclusions on the topics discussed throughout the thesis.
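The mean-based formulas studied in Chapter 4 replace the arithmetic mean of the two stage slopes in the classical second-order method with other means. A minimal sketch of the contraharmonic-mean variant follows; it illustrates the idea only and is not necessarily Evans and Yaakub's exact scheme:

```python
def rk2_contraharmonic(f, x0, y0, h, steps):
    """Integrate y' = f(x, y) with a two-stage step whose slope is the
    contraharmonic mean (k1**2 + k2**2) / (k1 + k2) of the stage slopes.
    Assumes k1 + k2 never vanishes along the trajectory."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)      # Euler predictor slope
        y += h * (k1 * k1 + k2 * k2) / (k1 + k2)
        x += h
    return y

# y' = y, y(0) = 1, so y(1) = e ~ 2.718281828
approx = rk2_contraharmonic(lambda x, y: y, 0.0, 1.0, 0.01, 100)
```

Expanding the contraharmonic mean in powers of h shows the step agrees with the Taylor series through the h**2 term, so the sketch is second order, like the classical two-stage method it modifies.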
APA, Harvard, Vancouver, ISO, and other styles
16

Xi, Hao. "Contributions to the JML project : safe arithmetic and non-null-by-default." Thesis, 2006. http://spectrum.library.concordia.ca/9061/1/MR20784.pdf.

Full text
Abstract:
The MultiJava Compiler (MJC) is an extension to the Java programming language that adds open classes and symmetric multiple dispatch. The Java Modeling Language (JML) is a Behavioral Interface Specification Language (BISL) that can be used to write both simple design-by-contract (DBC) assertions and full behavioral interface specifications. The JML toolset is based on MJC and contains tools such as the JML (type) checker and the JML Runtime Assertion Checker (RAC). JMLb is a new version of JML that supports arbitrary precision integers and safe arithmetic. In this thesis we present the implementation of (bytecode-level) support for safe-math integral arithmetic in MJC, as well as a performance evaluation of this new version of MJC in comparison with other Java compilers. Another main enhancement presented in this thesis is the implementation of a non-null statistics gathering tool in the JML checker. An overview of the desugaring process for various kinds of JML method specifications is given. In addition, rules for judging non-null usage are described by presenting examples of different scenarios.
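The safe-math semantics described here, exact integral arithmetic that fails loudly instead of silently wrapping, can be illustrated with a small sketch. The function name, the 32-bit bounds, and the use of OverflowError are assumptions for illustration, not JML's actual API or bytecode behavior:

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def safe_mul(a, b):
    """Compute the mathematically exact product (Python ints are arbitrary
    precision), then reject any result outside the 32-bit int range rather
    than letting it wrap around, mimicking safe-math semantics."""
    r = a * b
    if not INT_MIN <= r <= INT_MAX:
        raise OverflowError("int multiplication overflows 32 bits")
    return r
```

For example, safe_mul(46341, 46341) raises, because 46341 squared exceeds 2**31 - 1, whereas Java's plain `int` multiplication would silently wrap.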
APA, Harvard, Vancouver, ISO, and other styles
17

CHEN, YI-JU, and 陳逸如. "6th Graders' Problem-solving Thinking Regarding Non-common Fractional Division Arithmetic Word Problems." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/55051008527824458848.

Full text
Abstract:
Master's thesis
National Taichung University of Education
Department of Mathematics Education (in-service master's program)
103 (ROC academic year)
The purpose of this study was to explore 6th graders' problem-solving thinking regarding non-common fractional division arithmetic word problems. Twenty-seven 6th graders were selected from one elementary school in Daya District, Taichung City to participate in this study. A self-constructed paper-and-pencil test was given to the 27 participants, and based on their responses on the test, eighteen students were chosen for semi-structured interviews. The results were as follows: 1. 6th graders performed better in measurement division than in determination of a unit rate. In measurement division, students did best on “mixed fraction divided by proper fraction”, followed by “mixed fraction divided by mixed fraction” and “proper fraction divided by proper fraction”. In determination of a unit rate, students did best on “mixed fraction divided by mixed fraction”, followed by “mixed fraction divided by proper fraction”, “proper fraction divided by proper fraction”, and “proper fraction divided by mixed fraction”. 2. Two common factors influenced students' problem-solving: key words and surplus information. Low-achievers solved problems at random when they could not understand the meaning of the problem, and they were disturbed by newly learned content; middle-achievers used a trial-and-error-and-check-the-answer strategy and a bigger-number-divided-by-smaller-one strategy; high-achievers used multiple strategies, such as simplifying the problem, applying learned knowledge, and trial-and-error with a check that the answer was reasonable. 3. 6th graders could solve the problems successfully when guided by the simplified-problem strategy. 4. The reasons why 6th graders had difficulties in fractional division were the following: they could not understand the meaning of the problem; they were affected by the numbers and words; they could not form an equation; they could not tell whether the answer was correct; they could not solve problems by using keywords; and they did not have enough patience.
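The two problem types compared in this study can be made concrete with exact rational arithmetic; the numbers below are illustrative, not items from the test instrument:

```python
from fractions import Fraction

# Measurement division: how many 3/4-litre bottles does 2 1/4 litres fill?
total = Fraction(9, 4)               # 2 1/4 litres
bottle = Fraction(3, 4)
bottles = total / bottle             # exactly 3 bottles

# Determination of a unit rate: 2 1/2 kg costs 15/2 dollars; price per kg?
cost = Fraction(15, 2)
weight = Fraction(5, 2)              # 2 1/2 kg
per_kg = cost / weight               # exactly 3 dollars per kilogram
```

Both answers come out larger than the dividend's integer part would suggest, which is exactly the intuition ("division makes smaller") that the interviewed low-achievers struggled against.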
APA, Harvard, Vancouver, ISO, and other styles
18

"Hybrid Subgroups of Complex Hyperbolic Lattices." Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.53622.

Full text
Abstract:
In the 1980s, Gromov and Piatetski-Shapiro introduced a technique called "hybridization" which allowed them to produce non-arithmetic hyperbolic lattices from two non-commensurable arithmetic lattices. It has been asked whether an analogous hybridization technique exists for complex hyperbolic lattices, because certain geometric obstructions make it unclear how to adapt this technique. This thesis explores one possible construction (originally due to Hunt) in depth and uses it to produce arithmetic lattices, non-arithmetic lattices, and thin subgroups in SU(2,1).
Dissertation/Thesis
Doctoral Dissertation Mathematics 2019
APA, Harvard, Vancouver, ISO, and other styles
19

Jun, Kihwan. "Improved algorithms for non-restoring division and square root." 2012. http://hdl.handle.net/2152/19542.

Full text
Abstract:
This dissertation focuses on improving the non-restoring division and square root algorithms. Although the non-restoring division algorithm is the fastest and least complex among radix-2 digit recurrence division algorithms, there is still room to enhance its performance, and two new approaches are proposed here. In addition, the research scope is extended to seek an efficient algorithm for implementing non-restoring square root, which has similar steps to non-restoring division. In the first proposed approach, a non-restoring divider with a modified algorithm is presented. The new algorithm changes the order of the flowchart, which removes one unit of multiplexer delay per iteration. In addition, a new method to find the correct quotient is presented; it removes the error whereby the quotient is always an odd number after the digit conversion from the quotient with digits 1 and -1 to a conventional binary number. The second proposed approach is a novel method to find a quotient bit in every iteration, which hides the total delay of the multiplexer with a dual-path calculation. The proposed method uses a Most Significant Carry (MSC) generator, which determines the sign of each remainder faster than a conventional carry lookahead adder and eventually reduces the total delay by almost 22% compared to the conventional non-restoring division algorithm. Finally, an improved algorithm for non-restoring square root is proposed. The two concepts already applied to non-restoring division are adopted to improve the performance of non-restoring square root, since its process is similar to that of non-restoring division. Additionally, a new method to find intermediate quotients is presented that removes one adder per iteration to reduce the total area and power consumption.
The non-restoring square root with MSC generator reduces total delay, area and power consumption significantly.
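The radix-2 non-restoring recurrence, the digit conversion from the 1/-1 quotient digits, and the final correction step described above can be sketched as a software model; this is the textbook formulation, not the optimized hardware designs proposed in the dissertation:

```python
def non_restoring_divide(dividend, divisor, n):
    """Radix-2 non-restoring division: quotient digits in {+1, -1}, then a
    digit conversion that always yields an odd quotient, fixed by a final
    correction step when the remainder is negative."""
    assert 0 <= dividend < (divisor << n)   # quotient must fit in n bits
    r, d = dividend, divisor << n
    bits = 0                          # digit +1 recorded as bit 1, -1 as bit 0
    for _ in range(n):
        if r >= 0:
            bits = (bits << 1) | 1
            r = (r << 1) - d          # R <- 2R - D for digit +1
        else:
            bits = (bits << 1)
            r = (r << 1) + d          # R <- 2R + D for digit -1
    q = 2 * bits - ((1 << n) - 1)     # digit conversion: result is always odd
    if r < 0:                         # correction for a negative remainder
        q -= 1
        r += d
    return q, r >> n                  # r is an exact multiple of 2**n here

q, rem = non_restoring_divide(7, 3, 3)    # 7 = 2 * 3 + 1
```

Note how the conversion `2 * bits - (2**n - 1)` always produces an odd quotient; the correction step is precisely what removes the error the abstract refers to.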
text
APA, Harvard, Vancouver, ISO, and other styles
20

Arazim, Dolejší Zuzana. "Filosofický výklad a možné interpretace Gödelových vět o neúplnosti" [Philosophical analysis and possible interpretations of Gödel's incompleteness theorems]. Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-352495.

Full text
Abstract:
The diploma thesis deals with possible philosophical analyses of Gödel's incompleteness theorems and their interpretations in different branches of philosophy (phenomenology, analytical philosophy of mind, Kant's philosophy). Part of the thesis is dedicated to the attitudes to mathematical disciplines and their fundamental transformations caused by revolutionary discoveries such as Non-Euclidean geometries and incompleteness theorems. The relationship between Gödel's second incompleteness theorem, Gentzen's consistency proof of Peano arithmetic and Hilbert's programme is also discussed.
APA, Harvard, Vancouver, ISO, and other styles
21

Glivická, Jana. "Logické základy forcingu." Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-324411.

Full text
Abstract:
This thesis examines the method of forcing in set theory and focuses on aspects that are set aside in the usual presentations or applications of forcing. It is shown that forcing can be formalized in Peano arithmetic (PA) and that consistency results obtained by forcing are provable in PA. Two ways are presented of overcoming the assumption of the existence of a countable transitive model. The thesis also studies forcing as a method giving rise to interpretations between theories. A notion of bi-interpretability is defined and a method of forcing over a non-standard model of ZFC is developed in order to argue that ZFC and ZF are not bi-interpretable.
APA, Harvard, Vancouver, ISO, and other styles
22

Νίκας, Ιωάννης. "Αριθμητική επίλυση μη γραμμικών παραμετρικών εξισώσεων και ολική βελτιστοποίηση με διαστηματική ανάλυση" [Numerical solution of non-linear parametric equations and global optimization with interval analysis]. Thesis, 2011. http://hdl.handle.net/10889/4919.

Full text
Abstract:
In this dissertation, the problem of finding reliably and with certainty all the zeros of a parameterized equation f(x;[p]) = 0 is considered, where f is a continuously differentiable function and [p] is an interval vector describing all the parameters of the equation, which are given in the form of intervals. Methods of Interval Analysis are used to solve this problem. The incentive for this research emerged from a classic numerical analysis problem: the numerical solution of systems of polynomial equations using interval analysis. In particular, a heuristic reordering technique for the initial polynomial system is proposed that appears to improve the solver used significantly; this technique and its results are presented in Chapter 2 of this dissertation. In Chapter 3, a methodology is proposed for reliably and efficiently solving non-linear equations with interval parameters, that is, interval equations. First, a new formulation of interval arithmetic is given and its equivalence with the classical definition is proved. This new formulation is then used as a theoretical tool to develop an extension of the interval Newton method that can solve not only classic non-parametric non-linear equations but also parameterized (interval) non-linear equations. In Chapter 4, a new approach to the numerical solution of the Box-Constrained Global Optimization problem is proposed, using the results of Chapter 3: the global optimization problem is reduced to a problem of solving interval equations, whose solution is made feasible by the theoretical results and the corresponding methodology of Chapter 3. The last chapter gives a new algorithmic approach to the problem of solving, reliably and with certainty, an interval polynomial equation of degree n. This approach is based on, and generalizes, the work of Hansen and Walster, who proposed a method for solving only quadratic interval polynomial equations.
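The classical one-dimensional interval Newton step that Chapter 3 extends can be sketched as follows. Outward rounding and the parameterized case are omitted, the derivative enclosure is assumed not to contain zero, and the intersection is assumed to stay non-empty (all simplifications relative to the dissertation):

```python
def interval_newton(f, df_range, lo, hi, iters=60):
    """Contract [lo, hi] around the zero of f by intersecting it with the
    Newton set N([x]) = m - f(m) / f'([x]), where m is the midpoint and
    df_range(lo, hi) returns an enclosure (dlo, dhi) of f' over [lo, hi]."""
    for _ in range(iters):
        m = 0.5 * (lo + hi)
        dlo, dhi = df_range(lo, hi)        # derivative enclosure, 0 excluded
        fm = f(m)
        a, b = m - fm / dlo, m - fm / dhi  # endpoints of the Newton set
        lo, hi = max(lo, min(a, b)), min(hi, max(a, b))
        if hi - lo < 1e-14:
            break
    return lo, hi

# f(x) = x**2 - 2 on [1, 2]; f'([1, 2]) = [2, 4] excludes zero
lo, hi = interval_newton(lambda x: x * x - 2.0,
                         lambda l, h: (2.0 * l, 2.0 * h),
                         1.0, 2.0)
```

In exact arithmetic every iterate is guaranteed to contain the root, here the square root of 2, which is the reliability property the dissertation builds on.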
APA, Harvard, Vancouver, ISO, and other styles
23

Waters, Ronald S. "Total delay optimization for column reduction multipliers considering non-uniform arrival times to the final adder." Thesis, 2014. http://hdl.handle.net/2152/24858.

Full text
Abstract:
Column Reduction Multiplier techniques provide the fastest multiplier designs and involve three steps. First, a partial product array of terms is formed by logically ANDing each bit of the multiplier with each bit of the multiplicand. Second, adders or counters are used to reduce the number of terms in each bit column to a final two. This activity is commonly described as column reduction and occurs in multiple stages. Finally, some form of carry propagate adder (CPA) is applied to the final two terms in order to sum them to produce the final product of the multiplication. Since forming the partial products in the first step is simply forming an array of the logical ANDs of two bits, there is little opportunity for delay improvement in that step. Much work has been done on optimizing the reduction stages of the second step. All of the reduction approaches of the second step result in non-uniform arrival times at the input of the final carry propagate adder. Carry propagate adders, however, have been designed assuming that all input bits have the same arrival time, and it is not evident whether the non-uniform arrival times from the columns impact the performance of the multiplier. A thorough analysis of the several column reduction methods, together with the impact of carry propagate adder designs on the fastest possible final results across an array of multiplier widths, has not been undertaken. This dissertation investigates the design impact of three carry propagate adders, with different performance attributes, on the final delay results for four column reduction multipliers and suggests general ways to optimize the total delay for the multipliers.
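The three steps can be modeled behaviorally; this is a software sketch that verifies the dataflow and the arithmetic, not a delay-accurate hardware model:

```python
def column_reduce_multiply(a, b, n):
    """Multiply two n-bit operands the way a column reduction multiplier
    does: AND-array partial products, 3:2-counter (full adder) reduction
    of every column to at most two bits, then a final carry-propagate add."""
    cols = [[] for _ in range(2 * n + 1)]       # bits of weight 2**k
    for i in range(n):                          # step 1: partial products
        for j in range(n):
            cols[i + j].append((a >> i) & (b >> j) & 1)
    for k in range(2 * n):                      # step 2: column reduction
        while len(cols[k]) >= 3:                # one full adder per 3 bits
            x, y, z = (cols[k].pop() for _ in range(3))
            cols[k].append(x ^ y ^ z)                        # sum bit
            cols[k + 1].append((x & y) | (x & z) | (y & z))  # carry bit
    row0 = sum(c[0] << k for k, c in enumerate(cols) if c)
    row1 = sum(c[1] << k for k, c in enumerate(cols) if len(c) > 1)
    return row0 + row1                          # step 3: final CPA
```

Because each full adder preserves the column sum (x + y + z equals the sum bit plus twice the carry bit), the two surviving rows always add up to the true product; the non-uniform arrival times studied in the dissertation arise from how many reduction stages each column passes through.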
text
APA, Harvard, Vancouver, ISO, and other styles