
Dissertations / Theses on the topic 'Quantization errors'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 38 dissertations / theses for your research on the topic 'Quantization errors.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Tangboondouangjit, Aram. "Sigma-Delta Quantization: Number Theoretic Aspects of Refining Quantization Error." College Park, Md.: University of Maryland, 2006. http://hdl.handle.net/1903/3793.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Mathematics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
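For readers unfamiliar with the technique named in this title, the first-order Sigma-Delta loop can be sketched in a few lines of Python. This is only the textbook one-bit modulator, not the dissertation's refined scheme, and the constant input value is illustrative:

    import numpy as np

    def sigma_delta(x):
        """First-order Sigma-Delta: 1-bit output whose average tracks the input."""
        u, q, out = 0.0, 0.0, []
        for sample in x:
            u += sample - q                  # integrator accumulates the quantization error
            q = 1.0 if u >= 0 else -1.0      # one-bit quantizer
            out.append(q)
        return np.array(out)

    bits = sigma_delta(np.full(5000, 0.37))  # constant input in (-1, 1)
    print(bits[:12], bits.mean())            # +/-1 stream; mean approaches 0.37

Because the integrator state stays bounded, the running average of the one-bit output converges to the input, which is exactly the error-refinement behaviour the number-theoretic analysis studies.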
2

Pötzelberger, Klaus. "The Consistency of the Empirical Quantization Error." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/1790/1/document.pdf.

Full text
Abstract:
We study the empirical quantization error in the case where the number of prototypes increases with the size of the sample. We present a proof of the consistency of the empirical quantization error and of corresponding estimators of the quantization dimensions of distributions. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
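A hedged numpy sketch of the quantity studied here — the empirical quantization error of a sample for k prototypes, approximated with Lloyd iterations (the distribution, k, and the sample sizes are all illustrative) — shows the estimate settling as the sample grows, in line with the consistency result:

    import numpy as np

    def lloyd_quantization_error(sample, k, iters=50, seed=0):
        """Approximate the empirical quantization error with k prototypes."""
        rng = np.random.default_rng(seed)
        protos = rng.choice(sample, size=k, replace=False)
        for _ in range(iters):
            # assign each point to its nearest prototype
            idx = np.abs(sample[:, None] - protos[None, :]).argmin(axis=1)
            # centroid update; keep the prototype if its cell is empty
            for j in range(k):
                if np.any(idx == j):
                    protos[j] = sample[idx == j].mean()
        d = np.abs(sample[:, None] - protos[None, :]).min(axis=1)
        return np.mean(d ** 2)  # mean squared distance to the nearest prototype

    rng = np.random.default_rng(1)
    for n in (100, 1000, 10000):
        err = lloyd_quantization_error(rng.standard_normal(n), k=8)
        print(f"n={n:6d}  empirical quantization error ~ {err:.4f}")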
3

LaDue, Mark D. "Quantization error problems for classes of trigonometric polynomials." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/29176.

Full text
4

Blanchard, Bart. "Quantization effects and implementation considerations for turbo decoders." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000107.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2002.
Title from title page of source document. Document formatted into pages; contains xiii, 91 p.; also contains graphics. Includes vita. Includes bibliographical references.
5

Mao, Jie. "Reduction of the quantization error in fuzzy logic controllers by dithering." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ36717.pdf.

Full text
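As a quick illustration of the dithering idea named in this title (a single coarse uniform quantizer, uniform dither of half a step, and an averaged output; all values are illustrative, not the thesis's fuzzy-controller setup):

    import numpy as np

    def quantize(x, step):
        return step * np.round(x / step)

    rng = np.random.default_rng(0)
    x, step = 0.3, 1.0             # input sits between quantizer levels
    plain = quantize(x, step)      # always rounds to the same level: 0.0
    # uniform dither of +-step/2, averaged over many samples
    dithered = quantize(x + rng.uniform(-step/2, step/2, 10000), step).mean()
    print(plain, dithered)         # 0.0 vs roughly 0.3

The dither randomizes which level the quantizer picks, so the average output recovers the sub-step value that a plain quantizer would lose.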
6

Sawada, Manabu, Hiraku Okada, Takaya Yamazato, and Masaaki Katayama. "Influence of ADC Nonlinearity on the Performance of an OFDM Receiver." IEICE, 2006. http://hdl.handle.net/2237/9582.

Full text
7

Andersson, Tomas. "On error-robust source coding with image coding applications." Licentiate thesis, Stockholm : Department of Signals, Sensors and Systems, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4046.

Full text
8

McMichael, Joseph Gary. "Timing offset and quantization error trade-off in interleaved multi-channel measurements." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66035.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 117-118).
Time-interleaved analog-to-digital converters (ADCs) are traditionally designed with equal quantization granularity in each channel and uniform sampling offsets. Recent work suggests that it is often possible to achieve a better signal-to-quantization noise ratio (SQNR) with different quantization granularity in each channel, non-uniform sampling, and appropriate reconstruction filtering. This thesis develops a framework for optimal design of non-uniform sampling constellations to maximize SQNR in time-interleaved ADCs.

The first portion of this thesis investigates discrepancies between the additive noise model and uniform quantizers. A simulation is implemented for the multi-channel measurement and reconstruction system. The simulation reveals a key inconsistency in the environment of time-interleaved ADCs: cross-channel quantization error correlation. Statistical analysis is presented to characterize error correlation between quantizers with different granularities. A novel ADC architecture is developed based on weighted least squares (WLS) to exploit this correlation, with particular application for time-interleaved ADCs. A "correlated noise model" is proposed that incorporates error correlation between channels. The proposed model is shown to perform significantly better than the traditional additive noise model for channels in close proximity.

The second portion of this thesis focuses on optimizing channel configurations in time-interleaved ADCs. Analytical and numerical optimization techniques are presented that rely on the additive noise model for determining non-uniform sampling constellations that maximize SQNR. Optimal constellations for critically sampled systems are always uniform, while solution sets for oversampled systems are larger. Systems with diverse bit allocations often exhibit "clusters" of low-precision channels in close proximity. Genetic optimization is shown to be effective for quickly and accurately determining optimal timing constellations in systems with many channels.

Finally, a framework for efficient design of optimal channel configurations is formulated that incorporates statistical analysis of cross-channel quantization error correlation and solutions based on the additive noise model. For homogeneous bit allocations, the framework proposes timing offset corrections to avoid performance degradation from the optimal scenario predicted by the additive noise model. For diverse bit allocations, the framework proposes timing corrections and a "unification" of low-precision quantizers in close proximity. This technique results in significant improvements in performance above the previously known optimal additive noise model solution.
by Joseph Gary McMichael.
S.M.
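As a reference point for the additive-noise-model discussion in this abstract, a small numpy check (full-scale sine input and a mid-rise uniform quantizer; all parameters illustrative) compares the measured SQNR of a b-bit quantizer against the familiar 6.02b + 1.76 dB prediction of that model:

    import numpy as np

    def sqnr_db(bits, n=100000):
        t = np.arange(n)
        x = np.sin(2 * np.pi * 0.01234 * t)   # full-scale test tone in [-1, 1]
        step = 2.0 / 2**bits                   # quantizer step over [-1, 1)
        xq = np.clip(step * (np.floor(x / step) + 0.5), -1, 1)  # mid-rise quantizer
        noise = x - xq
        return 10 * np.log10(np.mean(x**2) / np.mean(noise**2))

    for b in (4, 8, 12):
        print(b, round(sqnr_db(b), 2), round(6.02 * b + 1.76, 2))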
9

Wandeto, John Mwangi. "Self-organizing map quantization error approach for detecting temporal variations in image sets." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD025/document.

Full text
Abstract:
A new approach for image processing, dubbed SOM-QE, that exploits the quantization error (QE) from self-organizing maps (SOM) is proposed in this thesis. SOMs produce low-dimensional discrete representations of high-dimensional input data. QE is determined from the results of the unsupervised learning process of the SOM and the input data. The SOM-QE of a time series of images can be used as an indicator of changes in the time series. To set up the SOM, a map size, the neighbourhood distance, the learning rate and the number of iterations in the learning process are determined. The combination of these parameters that gives the lowest value of QE is taken to be the optimal parameter set and is used to transform the dataset. This is the traditional use of QE. The novelty of the SOM-QE technique is fourfold: first, in the usage. SOM-QE employs one SOM to determine the QE of different images - typically, in a time series dataset - unlike the traditional usage where different SOMs are applied to one dataset. Secondly, the SOM-QE value is introduced as a measure of uniformity within the image. Thirdly, the SOM-QE value becomes a special, unique label for the image within the dataset, and fourthly, this label is used to track changes that occur in subsequent images of the same scene. Thus, SOM-QE provides a measure of variations within the image at an instant in time, and when compared with the values from subsequent images of the same scene, it reveals a transient visualization of changes in the scene under study. In this research the approach was applied to artificial, medical and geographic imagery to demonstrate its performance. Changes that occur in geographic scenes of interest, such as new buildings being put up in a city or lesions receding in medical images, are of interest to scientists and engineers. The SOM-QE technique provides a new way to automatically detect growth in urban spaces or the progression of diseases, giving timely information for appropriate planning or treatment. In this work, it is demonstrated that SOM-QE can capture very small changes in images. Results also confirm it to be fast and computationally inexpensive in discriminating between changed and unchanged contents in large image datasets. Pearson's correlation confirmed that there were statistically significant correlations between SOM-QE values and the actual ground truth data. On evaluation, this technique performed better than other existing approaches. This work is important as it introduces a new way of looking at fast, automatic change detection, even when dealing with small local changes within images. It also introduces a new method of determining QE, and the data it generates can be used to predict changes in a time series dataset.
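A minimal sketch of the QE computation at the core of SOM-QE, assuming the map has already been trained. Here plain Lloyd updates stand in for SOM training and the pixel data are synthetic; real use would train one SOM on a reference image and score each image in the series:

    import numpy as np

    def quantization_error(vectors, weights):
        """Mean distance from each input vector to its best-matching unit (BMU)."""
        d = np.linalg.norm(vectors[:, None, :] - weights[None, :, :], axis=2)
        return d.min(axis=1).mean()

    rng = np.random.default_rng(0)
    reference = rng.random((2000, 3))            # pixel vectors of a reference image
    weights = reference[rng.choice(2000, 16, replace=False)].copy()  # 16-unit "SOM"
    for _ in range(20):                          # crude stand-in for SOM training
        bmu = np.linalg.norm(reference[:, None] - weights[None], axis=2).argmin(1)
        for j in range(16):
            if np.any(bmu == j):
                weights[j] = reference[bmu == j].mean(axis=0)

    changed = reference.copy()
    changed[:100] += 0.3                         # small local change in the scene
    print(quantization_error(reference, weights))  # baseline QE label
    print(quantization_error(changed, weights))    # larger QE flags the change

Because the codebook is fitted to the reference image, any later image of the same scene that drifts from it produces a higher QE, which is the single-number change indicator the thesis builds on.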
10

Burns, Jason R. "Effects of quantization error on the global positioning system software receiver interference mitigation." Ohio University / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1174580139.

Full text
11

Pötzelberger, Klaus. "The General Quantization Problem for Distributions with Regular Support." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/1508/1/document.pdf.

Full text
Abstract:
We study the asymptotic behavior of the quantization error for general information functions and prove results for distributions P with regular support. We characterize the information functions for which the uniform distribution on the set of prototypes converges weakly to P. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
12

Salhany, David Salim. "Performance analysis of a multistage multicarrier demultiplexer/demodulator (M-MCDD) in the presence of interference and quantization error." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0019/MQ47831.pdf.

Full text
13

Khirirat, Sarit. "First-Order Algorithms for Communication Efficient Distributed Learning." Licentiate thesis, KTH, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263738.

Full text
Abstract:
Technological developments in devices and storage have made large volumes of data collections more accessible than ever. This transformation leads to optimization problems with massive data in both volume and dimension. In response to this trend, the popularity of optimization on high performance computing architectures has increased unprecedentedly. These scalable optimization solvers can achieve high efficiency by splitting computational loads among multiple machines, but they also incur large communication overhead. For optimization problems with millions of parameters, communication between machines has been reported to consume up to 80% of the training time. To alleviate this communication bottleneck, many optimization algorithms with data compression techniques have been studied. In practice, they have been reported to save communication costs significantly while exhibiting almost comparable convergence to the full-precision algorithms. To put this intuition on a firm footing, this thesis develops theory and techniques for designing communication-efficient optimization algorithms.

In the first part, we analyze the convergence of optimization algorithms with direct compression. First, we outline definitions of compression techniques which cover many compressors of practical interest. Then, we provide a unified analysis framework for optimization algorithms with compressors, which can be either deterministic or randomized. In particular, we show how the tuning parameters of compressed optimization algorithms must be chosen to guarantee performance. Our results show explicit dependency on compression accuracy and on delay effects due to the asynchrony of the algorithms. This allows us to characterize the trade-off between iteration and communication complexity under gradient compression.

In the second part, we study how error compensation schemes can improve the performance of compressed optimization algorithms. Even though convergence guarantees of optimization algorithms with error compensation have been established, there is very limited theoretical support guaranteeing improved solution accuracy. We therefore develop theoretical explanations which show that error compensation guarantees arbitrarily high solution accuracy from compressed information. In particular, error compensation helps remove accumulated compression errors, thus improving solution accuracy, especially for ill-conditioned problems. We also provide a strong convergence analysis of error compensation for parallel stochastic gradient descent across multiple machines. In particular, the error-compensated algorithms, unlike direct compression, result in a significant reduction of the compression error. Applications of the algorithms in this thesis to real-world problems with benchmark data sets validate our theoretical results.
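The error-compensation mechanism analyzed in the second part can be sketched for a single worker on a least-squares objective. Top-k sparsification as the compressor, the step size, and the problem sizes are illustrative choices; the thesis's setting is more general:

    import numpy as np

    def top_k(v, k):
        """Keep the k largest-magnitude entries, zero the rest."""
        out = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-k:]
        out[idx] = v[idx]
        return out

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((100, 20)), rng.standard_normal(100)
    x = np.zeros(20)
    e = np.zeros(20)                    # error-compensation memory
    lr = 0.002
    for _ in range(2000):
        g = A.T @ (A @ x - b)           # full gradient of 0.5*||Ax - b||^2
        c = top_k(g + e, k=4)           # compress the gradient plus carried-over error
        e = g + e - c                   # remember what the compressor dropped
        x -= lr * c
    print(np.linalg.norm(A.T @ (A @ x - b)))  # gradient norm shrinks despite compression

The key line is the memory update: whatever the compressor discards this round is fed back next round, so compression errors cannot accumulate.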


14

Hassoun, Alain. "Quantification des erreurs de reconstruction dues aux variations aléatoires des mesures en tomographie d'émission : comparaison expérimentale des techniques intervallistes et statistiques." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20272.

Full text
Abstract:
In nuclear medicine, Single-Photon Emission Computed Tomography (SPECT) images are used to diagnose a number of degenerative diseases such as Parkinson's disease. The principle of this kind of diagnosis is to compare the activity reconstructed in two specific regions of interest. The random fluctuations of the reconstructed activities, due to the random fluctuations of the measurements, have unknown statistical properties. This lack of knowledge makes the comparison, and thus the diagnosis, unreliable. To make the diagnosis more reliable, it is important to quantify the impact of random fluctuations of the projection measurements on the reconstructed activities in each region of interest. In this thesis, we focus on this quantification using reconstruction methods based on a new model of the tomographic acquisition process. A special feature of the obtained reconstructions is that the reconstructed activities are not precise values but interval-valued activities. The width of the reconstructed intervals quantifies the reconstruction error. As an important contribution, we propose a protocol derived from the quantitative comparison method applied in clinical routine. This protocol can be used to evaluate the performance of an error quantification algorithm or to compare the performances of two quantification algorithms. We show and discuss the performance of two interval-based quantification methods and a chosen reference method.
15

Grill, Andreas, and Robin Englund. "Analysis of Fix‐point Aspects for Wireless Infrastructure Systems." Thesis, Karlstad University, Faculty of Technology and Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-4550.

Full text
Abstract:

A large amount of today's telecommunication consists of mobile and short distance wireless applications, where the effect of the channel is unknown and changes over time, and thus needs to be described statistically. Therefore the received signal cannot be accurately predicted and has to be estimated. Since telecom systems operate in real time, the hardware in the receiver for estimating the sent signal can, for example, be based on a DSP where the statistical calculations are performed. A fixed-point DSP with a limited number of bits and a fixed binary point causes larger quantization errors compared to floating point operations with higher accuracy.

The focus of this thesis has been to build a library of functions for handling fixed-point data. A class that can handle the most common arithmetic operations and a least squares solver for fixed-point numbers have been implemented in MATLAB code.

The MATLAB Fixed-Point Toolbox could have been used to solve this task, but in order to have full control of the algorithms and the fixed-point handling an independent library was created.

The conclusion of the simulation made in this thesis is that the least squares result depends more on the number of integer bits than on the number of fractional bits.
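That conclusion is easy to probe in a few lines (signed Qm.n rounding with saturation; the matrix sizes and the two ways of splitting a 12-bit budget are illustrative, and numpy's float least-squares solver stands in for the thesis's MATLAB fixed-point class):

    import numpy as np

    def to_fixed(x, int_bits, frac_bits):
        """Quantize to signed fixed point with saturation (Q int_bits.frac_bits)."""
        scale = 2.0 ** frac_bits
        hi = 2.0 ** int_bits - 1.0 / scale      # largest representable value
        return np.clip(np.round(x * scale) / scale, -2.0 ** int_bits, hi)

    rng = np.random.default_rng(0)
    A = 3.0 * rng.standard_normal((50, 4))
    x_true = rng.standard_normal(4)
    b = A @ x_true

    for int_bits, frac_bits in [(2, 10), (5, 7)]:  # same 12-bit budget, split differently
        Aq, bq = to_fixed(A, int_bits, frac_bits), to_fixed(b, int_bits, frac_bits)
        x_hat, *_ = np.linalg.lstsq(Aq, bq, rcond=None)
        print(int_bits, frac_bits, np.linalg.norm(x_hat - x_true))

Too few integer bits makes the data saturate, which corrupts the solution far more than the coarser rounding of a few missing fractional bits.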


Keywords: fixed-point, telecommunications, DSP, MATLAB, Fixed-Point Toolbox, least-squares solution, floating point, Householder QR factorization, saturation, quantization noise
16

Lefèvre, Pascal. "Protection des contenus multimédias pour la certification des données." Thesis, Poitiers, 2018. http://www.theses.fr/2018POIT2273/document.

Full text
Abstract:
For more than twenty years, technology has become easier and easier to access. It is omnipresent in everyday life and low cost, allowing anyone using a computer or a smartphone to visualize and modify digital contents. With the impressive progress of massive online data storage (the cloud), the quantity of digital content has soared and continues to increase. To ensure the protection of intellectual property and copyright, knowing whether an image has been modified or not is important information for authenticating it. One approach to protecting digital contents is digital watermarking, which consists in modifying an image to embed an invisible mark that can authenticate the image. In this doctoral thesis, we first study how to improve the robustness of digital image watermarking against image processing operations using error correcting codes. By studying the error structure produced by the image processing applied to a watermarked image, we can find an optimal choice of error correcting code for the best correction performance. We also propose to integrate a new type of error correcting codes, called rank metric codes, for watermarking applications. Then, we propose to improve the invisibility of color image watermarking methods. At the embedding step, a host image suffers distortions which are perceived differently by the human visual system depending on the color. We propose a biological model of color perception which allows one to minimize the psychovisual distortions applied to the image being protected.
17

Suh, Sangwook. "Low-power discrete Fourier transform and soft-decision Viterbi decoder for OFDM receivers." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42716.

Full text
Abstract:
The purpose of this research is to present a low-power wireless communication receiver with enhanced performance by relieving the system complexity and performance degradation imposed by the quantization process. With an overwhelming demand for more reliable communication systems, the complexity required of modern communication systems has increased accordingly. A byproduct of this increase in complexity is a commensurate increase in the power consumption of the systems. Since Shannon's era, the mainstream methodology for ensuring the high reliability of communication systems has been based on the principle that the information signals flowing through the system are represented in digits. Consequently, systems have been heavily driven toward digital circuit implementations, which are generally beneficial over analog implementations when digitally stored information is locally accessible, such as in memory systems. However, in communication systems, a receiver does not have direct access to the originally transmitted information. Since the signals received from a noisy channel are already continuous values with continuous probability distributions, we suggest a mixed-signal system in which the received continuous signals are fed directly into the analog demodulator and the subsequent soft-decision Viterbi decoder without any quantization involved. In this way, we claim that the redundant system complexity caused by the quantization process is eliminated, thus giving better power efficiency in wireless communication systems, especially for battery-powered mobile devices. This is also beneficial from a performance perspective, as it takes full advantage of the soft information flowing through the system.
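The soft-versus-hard-decision gain this abstract relies on shows up even in the simplest coded system — a 3x repetition code with BPSK over AWGN (the noise level is illustrative), decoded once by majority vote on hard bits and once by summing the raw soft values, as the decoder here would:

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 200000, 1.0
    bits = rng.integers(0, 2, n)
    tx = np.repeat(2.0 * bits - 1.0, 3).reshape(n, 3)   # BPSK, 3x repetition
    rx = tx + sigma * rng.standard_normal((n, 3))

    hard = ((rx > 0).sum(axis=1) >= 2).astype(int)      # majority vote on hard bits
    soft = (rx.sum(axis=1) > 0).astype(int)             # sum soft values, then decide
    print("hard-decision BER:", np.mean(hard != bits))
    print("soft-decision BER:", np.mean(soft != bits))  # noticeably lower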
18

Hadri, Salah Eddine. "Contribution à la synthèse de structures optimales pour la réalisation des filtres et de régulateurs en précision finie." Vandoeuvre-les-Nancy, INPL, 1996. http://www.theses.fr/1996INPL129N.

Full text
Abstract:
One of the main problems in digital signal processing is the finite precision of computations. The present study concerns minimizing the harmful effects of numerical errors on the performance of digital filters and controllers. First, analytical methods are presented that yield a quantitative expression of the error due to quantization in digital filters and controllers. We then study and analyze the parameters that influence the performance of digital control under finite precision, as well as their interactions. The next step is devoted to the synthesis of structures for filters and controllers that possess the best numerical properties in terms of certain optimality criteria. Existing methods and results are presented. Our contribution has been to establish, under less restrictive hypotheses, more general optimality conditions, which yield a larger set of optimal realizations. This set includes the optimal realization using a minimum number of coefficients. We have shown that the optimality conditions given in earlier work are sufficient but not necessary. The methods used solve the optimization problem by arriving at particular solutions. However, the quantities taken as measures of the computation noise and of the sensitivity of the transfer function with respect to coefficient quantization do not allow the performance of different realizations to be compared. Our methodology has allowed us to unify several objectives and concepts that had until now been treated independently. Among other things, it allows the ideas developed for filters to be applied directly to the case of controllers (taking the loop into account).
19

Shang, Lei. "Modelling of Mobile Fading Channels with Fading Mitigation Techniques." RMIT University, Electrical and Computer Engineering, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20061222.113303.

Full text
Abstract:
This thesis aims to contribute to the development of wireless communication systems. The work consists of three parts: the first part is a discussion of general digital communication systems, the second part focuses on wireless channel modelling and fading mitigation techniques, and in the third part we discuss the possible application of advanced digital signal processing, especially time-frequency representation and blind source separation, to wireless communication systems.

The first part considers general digital communication systems which will be incorporated in later parts. Today's wireless communication system is a subbranch of a general digital communication system that employs various techniques of A/D (analog to digital) conversion, source coding, error correction coding, modulation, synchronization, signal detection in noise, channel estimation, and equalization. We study and develop digital communication algorithms to enhance the performance of wireless communication systems.

The second part focuses on wireless channel modelling and fading mitigation techniques. A modified Jakes' method is developed for Rayleigh fading channels. We investigate the level-crossing rate (LCR), the average duration of fades (ADF), the probability density function (PDF), the cumulative distribution function (CDF) and the autocorrelation functions (ACF) of this model. The simulated results are verified against the analytical Clarke's channel model. We also construct frequency-selective geometrical-based hyperbolically distributed scatterers (GBHDS) for a macro-cell mobile environment with the proper statistical characteristics. The modified Clarke's model and the GBHDS model may be readily expanded to a MIMO channel model, thus we study the MIMO fading channel; specifically, we model the MIMO channel in the angular domain. A detailed analysis of the Gauss-Markov approximation of the fading channel is also given. Two fading mitigation techniques are investigated: orthogonal frequency division multiplexing (OFDM) and spatial diversity.

The third part is devoted to the fields of time-frequency analysis and blind source separation, and investigates the application of these powerful digital signal processing (DSP) tools to improve the performance of wireless communication systems.
20

Fujdiak, Radek. "Analýza a optimalizace datové komunikace pro telemetrické systémy v energetice." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-358408.

Full text
Abstract:
Telemetry system, Optimisation, Sensoric networks, Smart Grid, Internet of Things, Sensors, Information security, Cryptography, Cryptography algorithms, Cryptosystem, Confidentiality, Integrity, Authentication, Data freshness, Non-Repudiation.
21

Gordon, Steven J., and Warren P. Seering. "Real-Time Part Position Sensing." 1988. http://hdl.handle.net/1721.1/6045.

Full text
Abstract:
A light stripe vision system is used to measure the location of polyhedral features of parts from a single frame of video camera output. Issues such as accuracy in locating the line segments of intersection in the image and combining redundant information from multiple measurements and multiple sources are addressed. In 2.5 seconds, a prototype sensor was capable of locating a two inch cube to an accuracy (one standard deviation) of .002 inches (.055 mm) in translation and .1 degrees (.0015 radians) in rotation. When integrated with a manipulator, the system was capable of performing high precision assembly tasks.
22

"The Consistency ot the Empirical Quantization Error." Department of Statistics and Mathematics, 1999. http://epub.wu-wien.ac.at/dyn/dl/wp/epub-wu-01_a40.

Full text
23

Yeh, Chin-Ming. "Robust Vector Quantization for Burst Error Channels." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/07170897368647510873.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Electrical Engineering, 2002.
The objective of this thesis is to implement a robust vector quantization (VQ) scheme for a burst error channel (BEC). Since the simple binary symmetric channel (BSC) model cannot effectively describe the BEC, algorithms based on that model may not be useful in many practical situations. In light of these facts, this thesis uses the Gilbert-Elliott model to describe the BEC. Based on this model, the objective of the algorithm is to minimize the average distortion in the good state subject to a distortion constraint in the bad state. Numerical results show that, when delivering information over the BEC, our algorithm significantly outperforms VQ techniques whose design is optimized only for the simple BSC.
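A sketch of the Gilbert-Elliott model used here to describe the burst error channel (the transition and per-state error probabilities below are illustrative placeholders, not the thesis's values):

    import numpy as np

    def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.2, e_good=0.001, e_bad=0.2, seed=0):
        """Simulate bit errors from a two-state (good/bad) Markov channel."""
        rng = np.random.default_rng(seed)
        errors = np.zeros(n_bits, dtype=int)
        bad = False
        for i in range(n_bits):
            errors[i] = rng.random() < (e_bad if bad else e_good)
            # state transition: good->bad with p_gb, bad stays bad with 1 - p_bg
            bad = (rng.random() < p_gb) if not bad else (rng.random() >= p_bg)
        return errors

    err = gilbert_elliott(100000)
    print("overall error rate:", err.mean())  # errors arrive in bursts, not i.i.d.

Because errors cluster while the chain sits in the bad state, a codebook designed for an i.i.d. BSC at the same average error rate is mismatched — which is the motivation for the constrained design above.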
24

Liu, Xing-Lin. "Quantization error analysis of the fast Hartley transform." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/81264059322244022723.

Full text
25

"Trellis-coded quantization with unequal distortion." 2001. http://library.cuhk.edu.hk/record=b5890829.

Full text
Abstract:
Kwong Cheuk Fai.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 72-74).
Abstracts in English and Chinese.
Acknowledgements --- p.i
Abstract --- p.ii
Table of Contents --- p.iv
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Quantization --- p.2
Chapter 1.2 --- Trellis-Coded Quantization --- p.3
Chapter 1.3 --- Thesis Organization --- p.4
Chapter 2 --- Trellis-Coded Modulation --- p.6
Chapter 2.1 --- Convolutional Codes --- p.7
Chapter 2.1.1 --- Generator Polynomials and Generator Matrix --- p.9
Chapter 2.1.2 --- Circuit Diagram --- p.10
Chapter 2.1.3 --- State Transition Diagram --- p.11
Chapter 2.1.4 --- Trellis Diagram --- p.12
Chapter 2.2 --- Trellis-Coded Modulation --- p.13
Chapter 2.2.1 --- Uncoded Transmission verses TCM --- p.14
Chapter 2.2.2 --- Trellis Representation --- p.17
Chapter 2.2.3 --- Ungerboeck Codes --- p.18
Chapter 2.2.4 --- Set Partitioning --- p.19
Chapter 2.2.5 --- Decoding for TCM --- p.22
Chapter 3 --- Trellis-Coded Quantization --- p.26
Chapter 3.1 --- Scalar Trellis-Coded Quantization --- p.26
Chapter 3.2 --- Trellis-Coded Vector Quantization --- p.31
Chapter 3.2.1 --- Set Partitioning in TCVQ --- p.33
Chapter 3.2.2 --- Codebook Optimization --- p.34
Chapter 3.2.3 --- Numerical Data and Discussions --- p.35
Chapter 4 --- Trellis-Coded Quantization with Unequal Distortion --- p.38
Chapter 4.1 --- Design Procedures --- p.40
Chapter 4.2 --- Fine and Coarse Codebooks --- p.41
Chapter 4.3 --- Set Partitioning --- p.44
Chapter 4.4 --- Codebook Optimization --- p.45
Chapter 4.5 --- Decoding for Unequal Distortion TCVQ --- p.46
Chapter 5 --- Unequal Distortion TCVQ on Memoryless Gaussian Source --- p.47
Chapter 5.1 --- Memoryless Gaussian Source --- p.49
Chapter 5.2 --- Set Partitioning of Codewords of Memoryless Gaussian Source --- p.49
Chapter 5.3 --- Numerical Results and Discussions --- p.51
Chapter 6 --- Unequal Distortion TCVQ on Markov Gaussian Source --- p.57
Chapter 6.1 --- Markov Gaussian Source --- p.57
Chapter 6.2 --- Set Partitioning of Codewords of Markov Gaussian Source --- p.58
Chapter 6.3 --- Numerical Results and Discussions --- p.59
Chapter 7 --- Conclusions --- p.70
Bibliography --- p.72
26

Nein, Hsi-Wen. "Incorporating Error Shaping Technique and Spectral Dynamics into LSF Vector Quantization." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/34635051216752948292.

Full text
Abstract:
Doctoral dissertation, National Chiao Tung University, Department of Electrical and Control Engineering, 2000.
In this thesis, an error shaping technique and the information of spectral dynamics between two successive frames of speech spectra are simultaneously incorporated into LSF vector quantization (VQ) to improve the performance of LSF quantizers. The error shaping technique makes better use of the perceptual properties of the human ear, and the spectral dynamics information incorporated into the LSF VQ smooths the spectral quantization error so as to reduce the perceived distortion. The error shaping technique, based on the weighted log-spectral distortion (WLSD) measure, can shape the spectral distortion distribution of the quantization error into any frequency-dependent curve, depending on which weighting function is used. The WLSD measure is approximated by a quadratic distortion measure, the weighted mean squared error (WMSE) measure, since the high computational complexity of the WLSD measure deters this error shaping technique from practical use. The optimal WMSE weights (i.e., the optimal weights of the LSF parameters) are also determined from a theoretical analysis of the WLSD measure. To incorporate the spectral dynamics of LPC spectra into LSF VQ, an innovative technique is proposed, based on a modified weighted log-spectral distortion (MWLSD) measure. The MWLSD measure can shape the spectral quantization distortion distribution into any frequency-dependent shaping curve while simultaneously reducing the spectral-dynamics distortion between quantized and unquantized spectra. That is, both the spectral distortion and the spectral-dynamics distortion between quantized and unquantized spectra can be taken into account simultaneously when designing a quantizer for a desired error shaping function using the MWLSD measure. In order to reduce the high computational complexity of the MWLSD measure during the search procedure in the LSF VQ, a quadratically weighted distortion (QWD) measure approximating the MWLSD measure is derived from a theoretical analysis of the MWLSD measure. A simplified quadratically weighted distortion (SQWD) measure is also proposed to further reduce the computational complexity of the QWD measure for practical applications; its computational complexity is almost equal to that of the WMSE measure. The error shaping technique and the spectral dynamics information are finally applied to the LSF quantization of CELP and MELP coders to test how they affect overall speech quality in actual speech coding algorithms.
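The practical payoff of the WMSE/SQWD approximations is that the codebook search reduces to one weighted squared-error comparison per codevector. A hedged sketch of that inner loop (random LSF-like data; the inverse-spacing weights are a common heuristic for emphasizing closely spaced LSFs, not the thesis's derived weights):

    import numpy as np

    def wmse_search(x, codebook, w):
        """Return the index of the codevector minimizing sum_i w_i * (x_i - c_i)^2."""
        d = ((codebook - x) ** 2 * w).sum(axis=1)
        return int(d.argmin())

    rng = np.random.default_rng(0)
    codebook = np.sort(rng.random((256, 10)), axis=1)  # 256 LSF-like codevectors
    x = np.sort(rng.random(10))                        # input LSF vector
    w = 1.0 / (np.diff(x, prepend=0.0) + 1e-3)         # heuristic: weight close LSFs more
    print(wmse_search(x, codebook, w))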
27

Roychowdhury, Lakshmi. "Optimal Points for a Probability Distribution on a Nonhomogeneous Cantor Set." Thesis, 2012. http://hdl.handle.net/1969.1/149228.

Full text
Abstract:
The objective of my thesis is to find the optimal points and the quantization error for a probability measure defined on a Cantor set. The Cantor set considered in this work is generated by two self-similar contraction mappings on the real line with distinct similarity ratios. We then define a nonhomogeneous probability measure whose support lies on the Cantor set. For such a probability measure, we first determine the n-optimal points and the nth quantization error for n = 2 and n = 3. Through further lemmas and propositions we then prove a theorem which gives all the n-optimal points and the nth quantization error for all positive integers n. In addition, we give some properties of the optimal points and the quantization error for the probability measure, and we list the n-optimal points and the error for some positive integers n. The result in this thesis is a nonhomogeneous extension of a similar result of Graf and Luschgy in 1997. The techniques in my thesis could be extended to discretise any continuous random variable with another random variable with finite range.
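The centroid condition behind such optimality results can be explored numerically with a Lloyd-Max fixed-point iteration. A standard normal sample is used purely for illustration — the thesis's measure lives on a Cantor set, where the exact points are derived analytically:

    import numpy as np

    def lloyd_max(samples, n, iters=200):
        """Approximate the n optimal points of the sampling distribution."""
        pts = np.quantile(samples, (np.arange(n) + 0.5) / n)  # reasonable start
        for _ in range(iters):
            edges = np.concatenate(([-np.inf], (pts[:-1] + pts[1:]) / 2, [np.inf]))
            cells = np.searchsorted(edges, samples) - 1
            for j in range(n):
                if np.any(cells == j):
                    pts[j] = samples[cells == j].mean()   # centroid condition
        q_err = ((samples - pts[np.abs(samples[:, None] - pts).argmin(1)]) ** 2).mean()
        return pts, q_err

    samples = np.random.default_rng(0).standard_normal(200000)
    pts, err = lloyd_max(samples, 3)
    print(pts, err)  # ~ (-1.224, 0, 1.224) and error ~ 0.19 for N(0,1)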
28

"Optimal soft-decoding combined trellis-coded quantization/modulation." 2000. http://library.cuhk.edu.hk/record=b5890376.

Full text
Abstract:
Chei Kwok-hung.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2000.
Includes bibliographical references (leaves 66-73).
Abstracts in English and Chinese.
Chapter Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Typical Digital Communication Systems --- p.2
Chapter 1.1.1 --- Source coding --- p.3
Chapter 1.1.2 --- Channel coding --- p.5
Chapter 1.2 --- Joint Source-Channel Coding System --- p.5
Chapter 1.3 --- Thesis Organization --- p.7
Chapter Chapter 2 --- Trellis Coding --- p.9
Chapter 2.1 --- Convolutional Codes --- p.9
Chapter 2.2 --- Trellis-Coded Modulation --- p.12
Chapter 2.2.1 --- Set Partitioning --- p.13
Chapter 2.3 --- Trellis-Coded Quantization --- p.14
Chapter 2.4 --- Joint TCQ/TCM System --- p.17
Chapter 2.4.1 --- The Combined Receiver --- p.17
Chapter 2.4.2 --- Viterbi Decoding --- p.19
Chapter 2.4.3 --- Sequence MAP Decoding --- p.20
Chapter 2.4.4 --- Sliding Window Decoding --- p.21
Chapter 2.4.5 --- Block-Based Decoding --- p.23
Chapter Chapter 3 --- Soft Decoding Joint TCQ/TCM over AWGN Channel --- p.25
Chapter 3.1 --- System Model --- p.26
Chapter 3.2 --- TCQ with Optimal Soft-Decoder --- p.27
Chapter 3.3 --- Gaussian Memoryless Source --- p.30
Chapter 3.3.1 --- Theorem Limit --- p.31
Chapter 3.3.2 --- Performance on PAM Constellations --- p.32
Chapter 3.3.3 --- Performance on PSK Constellations --- p.36
Chapter 3.4 --- Uniform Memoryless Source --- p.38
Chapter 3.4.1 --- Theorem Limit --- p.38
Chapter 3.4.2 --- Performance on PAM Constellations --- p.39
Chapter 3.4.3 --- Performance on PSK Constellations --- p.40
Chapter Chapter 4 --- Soft Decoding Joint TCQ/TCM System over Rayleigh Fading Channel --- p.42
Chapter 4.1 --- Wireless Channel --- p.43
Chapter 4.2 --- Rayleigh Fading Channel --- p.44
Chapter 4.3 --- Ideal Interleaving --- p.45
Chapter 4.4 --- Receiver Structure --- p.46
Chapter 4.5 --- Numerical Results --- p.47
Chapter 4.5.1 --- Performance on 4-PAM Constellations --- p.48
Chapter 4.5.2 --- Performance on 8-PAM Constellations --- p.50
Chapter 4.5.3 --- Performance on 16-PAM Constellations --- p.52
Chapter Chapter 5 --- Joint TCVQ/TCM System --- p.54
Chapter 5.1 --- Trellis-Coded Vector Quantization --- p.55
Chapter 5.1.1 --- Set Partitioning in TCVQ --- p.56
Chapter 5.2 --- Joint TCVQ/TCM --- p.59
Chapter 5.2.1 --- Set Partitioning and Index Assignments --- p.60
Chapter 5.2.2 --- Gaussian-Markov Sources --- p.61
Chapter 5.3 --- Simulation Results and Discussion --- p.62
Chapter Chapter 6 --- Conclusion and Future Work --- p.64
Chapter 6.1 --- Conclusion --- p.64
Chapter 6.2 --- Future Works --- p.65
Bibliography --- p.66
Appendix-Publications --- p.73
29

Kuo, Ko-Chiang. "2-way Color Image Compression via Neural Network and Quantization Error Correction." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/54585057818142787660.

Full text
Abstract:
Master's thesis, National Taiwan Ocean University, Department of Electrical Engineering, 1999.
In the first part of this thesis, we propose a MICS scheme for compression of monochrome images based on the discrete wavelet transform and the PVC algorithm. Implementation of the MICS scheme involves four major steps. Firstly, apply the discrete wavelet transform to obtain a set of biorthogonal subbands of the input image; the original image is decomposed at different scales using a pyramidal algorithm. Secondly, use the Karhunen-Loeve transform (KLT) to project the wavelet coefficients of some subbands onto fewer principal components. Thirdly, apply a self-creating algorithm, namely periodical vitality conservation (PVC), to quantize both the output of the KLT and the remaining subbands. Finally, adaptive arithmetic coding is employed to encode the outputs of PVC. Performance comparisons with the embedded wavelet hierarchical image coder (EZW) (Shapiro, 1993), the multi-threshold wavelet coder (MTWC) (Wang and Huo, 1997) and Lazar's coder (Lazar et al., 1996) were conducted. All simulation results indicate that high-quality reconstructed images can be obtained using the MICS scheme, even at very low bit rates.

The second part of this thesis focuses on color image compression. The proposed CICS scheme first utilizes PVC to design a limited color palette. Then, a 2-way quantization approach is employed to divide all the image blocks into two classes, namely low-frequency and high-frequency classes. The training vectors of the two classes can be presented separately and concurrently to two PVC networks for quantization. Furthermore, each training vector is composed of some number of representative colors, depending on which class the block belongs to. Finally, when incorporated with QEC, SAQ and the excellent quantization performance delivered by the PVC network, the 2-way approach is shown to be capable of minimizing the effect of quantization error induced by the high-frequency class. Experimental results justify all the claims in this thesis.
30

Wu, Yueh-Chi. "Soft Error Analysis for CNN Inference Accelerator with Efficient Dynamic Fixed Point Quantization." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/38gpt7.

Full text
31

盧俊志. "The effect of quantization error model on output SNR of image coding system." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/37289819479057176413.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Communication Engineering, 2003.
In the TMN8 rate control structure, both the rate model and the distortion model are used to calculate the quantization parameters such that the MSE distortion is minimized subject to the constraint that the resulting rate meets the target bit rate. However, the distortion model in TMN8 is based on the assumption that the DCT values follow a uniform distribution. This assumption is not consistent with the actual distortion, so with TMN8 the actual distortion is not minimized. In this thesis, a Laplacian distortion model is proposed, based on the fact that the DCT values follow a Laplacian distribution. We first show that the Laplacian distortion model is indeed closer to the actual distortion. Thus, with the Laplacian distortion model in TMN8 rate control, the resulting MSE distortion is smaller than that with the uniform distortion model at the same target bit rate.
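The mismatch exploited here can be checked directly: the uniform-error model predicts distortion Q^2/12 for step size Q, while quantizing Laplacian-distributed DCT coefficients gives a visibly different value at coarse steps (the Laplacian scale and the step sizes are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    coeffs = rng.laplace(scale=1.0, size=1000000)  # DCT-like Laplacian coefficients
    for Q in (0.5, 2.0, 8.0):
        mse = np.mean((coeffs - Q * np.round(coeffs / Q)) ** 2)
        print(f"Q={Q:4.1f}  uniform model: {Q*Q/12:8.3f}  Laplacian actual: {mse:8.3f}")

At fine steps the two agree, but at coarse steps most coefficients fall into the zero bin and the uniform model badly overestimates the distortion — the gap the proposed model closes.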
32

Li, Kuan-Hsien. "Error-Tolerant Analysis and Design of Discrete Wavelet Transform and Quantization in JPEG2000 Codec." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/tx57eb.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Department of Electrical Engineering, 2014.
With the advance of integrated circuits, yield and reliability have become critical issues to be addressed. Error-tolerance is a novel notion that can improve yield and reliability efficiently. Research on this notion is still in progress and needs further investigation to make it applicable in real applications. This thesis addresses error-tolerant analysis and design issues for the JPEG2000 codec. JPEG2000 is a high-performance image compression standard that can outperform JPEG with a higher compression ratio at the same quality. Targeting the discrete wavelet transform (DWT), inverse discrete wavelet transform (IDWT) and quantization blocks of the JPEG2000 codec, we find that error-tolerance is well suited to these blocks. In this thesis we implement these components and inject faults to carry out fault analysis procedures. We carefully analyze the resulting image quality produced by the faulty components using PSNR (peak signal-to-noise ratio) and SSIM (structural similarity). The analysis results show that, under the constraint of acceptable image quality, there are up to {76.7%, 78.6%, 68.4%} acceptable faults in the implemented {DWT, IDWT, quantization} blocks. Furthermore, in addition to demonstrating the error-tolerability of these blocks, the analysis results also show that the sub-circuits that contain acceptable faults can be simplified so as to reduce area, critical path delay and power consumption. This can also help increase chip yield. By properly simplifying the target blocks while still maintaining acceptable image quality, the {area, critical path delay, power consumption} of the DWT and IDWT blocks are reduced by {63.3%, 24.7%, 54.6%} and {63.4%, 27.4%, 53.2%}, respectively. The resulting image quality of the DWT (IDWT) is in the range of 26.28 dB~39.91 dB (25.42 dB~40.81 dB) in terms of PSNR and 0.92~0.99 (0.93~0.99) in terms of SSIM.
33

Chiang, Tien-Szu. "Image coding using vector quantization with finite state entropy coding and error-compensated reconstruction." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/94784594883685620072.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Electronics, 1993.
We consider vector quantization (VQ) for image coding. Three specific techniques are investigated, namely, VQ with finite-state entropy coding, error-compensated reconstruction, and progressive VQ. In VQ with finite-state entropy coding, all the states employ the same universal VQ codebook, but each state has its own variable-length codebook for mapping the VQ code vectors into corresponding channel codewords. The finite-state entropy coding exploits the inter-block correlation among nearby image blocks (vectors), and it is found that even a simple scheme can reduce the bit rate significantly. The reconstructed image quality in the decoder can be enhanced with an error-compensating postprocessing stage which seeks to maintain edge continuity across block boundaries. This postprocessing function can be used with VQ encoders with or without memory. Simulation results show that it yields an improvement of 1 dB over ordinary VQ. Thus a combined use of finite-state entropy coding and error-compensated reconstruction in VQ-based image coding can lead to better coded image quality at a lower bit rate than conventional VQ of the same size. The progressive VQ is proposed to capitalize on the potential gain of large-vector VQ while keeping the complexity low. It has five stages and operates in the DCT domain. At the lowest stage, the dc coefficients are coded by scalar quantization with finite-state entropy coding to exploit the inter-block correlation. At the other four stages, ordinary VQ is employed on other subsets of the DCT coefficients. In addition to reducing complexity, the progressive VQ can be used to dynamically allocate bits to each block based on its characteristics to achieve a more uniform coded image quality. A particular method for doing so is discussed. In this method, the first two stages of the progressive VQ are always performed, and the other three stages are selected based on a comparison of the sum of squared errors in each block against a threshold.
APA, Harvard, Vancouver, ISO, and other styles
34

Jiang, Zhengwei. "Performance of Full-rate Full-diversity Space-time Codes Under Quantization and Channel Estimation Error." Thesis, 2010. http://hdl.handle.net/1807/25640.

Full text
Abstract:
In this thesis, we investigate the performance of full-rate full-diversity space-time codes (FRFD-STCs) under practical conditions. We first discuss the performance of FRFD-STCs in the moderate-SNR region and compare them with spatial multiplexing at the same transmission rate, for both uncoded and coded systems; the results show that, with channel coding, spatial multiplexing is as good as FRFD-STCs. Secondly, we investigate the issue of quantization, i.e., the effect of quantization error in the space-time encoding matrix; our analysis and results show that the performance loss is negligible. Finally, we propose two receiver structures for the case of imperfect channel state information (CSI). The two receivers use the VEM module as the channel estimator and as the MIMO detector, respectively. Both receivers are of low complexity and perform better than previously proposed methods.
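As a toy illustration of the quantization issue studied here, the following sketch uniformly quantizes the entries of a complex encoding matrix and reports the relative error; the random matrix and the 8-bit fractional word length are stand-in assumptions, not the codes or precision used in the thesis:

    import numpy as np

    def fixed_point(m, frac_bits):
        """Round real and imaginary parts to a grid of step 2**-frac_bits."""
        step = 2.0 ** -frac_bits
        return step * (np.round(m.real / step) + 1j * np.round(m.imag / step))

    g = np.exp(2j * np.pi * np.random.rand(4, 4))   # stand-in encoding matrix
    gq = fixed_point(g, frac_bits=8)
    print(np.linalg.norm(g - gq) / np.linalg.norm(g))  # small relative error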
APA, Harvard, Vancouver, ISO, and other styles
35

Kobyakov, A. V. (Кобяков, А. В.). "Development of control schemes for the mirror antennas of a 600-meter radio telescope based on digital signal processing [Разработка схем управления зеркальными антеннами 600 метрового радиотелескопа на основе цифровой обработки сигналов]." Master's thesis, 2017. http://hdl.handle.net/10995/54217.

Full text
Abstract:
This work presents the development of a control scheme for the mirror antennas of a 600-meter radio telescope based on digital signal processing. The radiation pattern of the radio telescope under digital beamforming is analyzed, and the influence on it of the phase errors that arise when the analog signal is digitized at the carrier frequency is estimated. Mathematical modeling is used to assess how the parameters of the digital hardware affect the characteristics of the radiation pattern, and equipment for building the beamforming network of the radio telescope is proposed.
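To give a feel for the kind of effect analyzed, here is a small sketch estimating the boresight gain loss of an array whose element phases carry uniform quantization errors; the element count and phase word length are illustrative assumptions, not the telescope's parameters:

    import numpy as np

    n, bits = 128, 6                       # elements, phase bits (assumed)
    delta = 2 * np.pi / 2**bits            # phase quantization step
    err = np.random.uniform(-delta / 2, delta / 2, n)
    loss = np.abs(np.exp(1j * err).mean()) ** 2   # normalized boresight gain
    print(10 * np.log10(loss))             # gain loss in dB (near 0 for 6 bits)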
APA, Harvard, Vancouver, ISO, and other styles
36

Uttarwar, Tushar. "A digital multiplying delay locked loop for high frequency clock generation." Thesis, 2011. http://hdl.handle.net/1957/25739.

Full text
Abstract:
As Moore's Law continues to shrink channel lengths, circuits are becoming more digital and ever faster. Generating high-frequency clocks in such scaled processes is a tough challenge. Digital phase-locked loops (DPLLs) are being explored as an alternative to conventional analog PLLs, but suffer from issues such as low bandwidth and higher quantization noise. A digital multiplying delay-locked loop (DMDLL) is proposed that aims to combine the high-bandwidth benefit of a DLL with the frequency multiplication property of a PLL. It also offers easier portability across processes and occupies less area. The proposed DMDLL uses a simple flip-flop as a 1-bit time-to-digital converter (TDC) for the phase detector (PD). A digital accumulator acts as the integrator of the loop filter, while a delta-sigma DAC in combination with a VCO acts as a DCO. Carefully designed select logic, in conjunction with a MUX, achieves frequency multiplication. The proposed digital MDLL is taped out in a 130 nm process and tested, achieving a 1.4 GHz output frequency with 1.6 ps RMS jitter, 17 ps peak-to-peak jitter, and -50 dBc reference spurs.
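For reference, the jitter figures quoted can be defined as in this minimal sketch (standard definitions of RMS and peak-to-peak absolute jitter; the function is ours, assuming NumPy and an array of measured edge times):

    import numpy as np

    def jitter_stats(edges, period):
        """RMS and peak-to-peak jitter of measured clock edges relative to
        an ideal clock of the given period, anchored at the first edge."""
        edges = np.asarray(edges, float)
        ideal = edges[0] + np.arange(len(edges)) * period
        err = edges - ideal
        return np.sqrt(np.mean(err ** 2)), err.max() - err.min()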
Graduation date: 2012
APA, Harvard, Vancouver, ISO, and other styles
37

Christou, Cameron. "Optimal Dither and Noise Shaping in Image Processing." Thesis, 2008. http://hdl.handle.net/10012/3867.

Full text
Abstract:
Dithered quantization and noise shaping are well known in the audio community. The image processing community seems to be aware of the same theory only in bits and pieces, and frequently under conflicting terminology. This thesis attempts to show that dithered quantization of images is an extension of dithered quantization of audio signals to higher dimensions. Dithered quantization, or "threshold modulation", is investigated as a means of suppressing undesirable visual artifacts during the digital quantization, or requantization, of an image. Special attention is given to the statistical moments of the resulting error signal. Afterwards, noise shaping, or "error diffusion", methods are considered as a way to improve on the dithered quantization technique. We also develop the minimum-phase property for two-dimensional systems. This leads to a natural extension of Jensen's inequality and of the Hilbert transform relationship between the log-magnitude and phase of a two-dimensional system. We then describe how these developments are relevant to image processing.
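A minimal sketch of the one-dimensional building block in question, non-subtractive TPDF-dithered requantization (our illustrative code, assuming NumPy; the thesis's contribution is the extension to images and the analysis of the error moments):

    import numpy as np

    def dithered_quantize(x, step, rng=None):
        """Uniform quantizer with triangular-PDF dither added before rounding.
        TPDF dither makes the first two error moments signal-independent."""
        if rng is None:
            rng = np.random.default_rng()
        shape = np.shape(x)
        tpdf = (rng.uniform(-0.5, 0.5, shape) + rng.uniform(-0.5, 0.5, shape)) * step
        return step * np.round((np.asarray(x, float) + tpdf) / step)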
APA, Harvard, Vancouver, ISO, and other styles
38

Alain, Benoît. "Rendu d'images en demi-tons par diffusion d'erreur sensible à la structure [Halftone image rendering by structure-aware error diffusion]." Thesis, 2009. http://hdl.handle.net/1866/4319.

Full text
Abstract:
This work surveys the main halftoning methods, from analog screening and ordered dither to direct binary search, with particular attention to error diffusion. The methods are compared in the modern perspective of sensitivity to structure. A novel error-diffusion halftoning method is then presented and subjected to various evaluations. It produces images of visual quality comparable to that of the state-of-the-art Structure-aware Halftoning method while being two to three orders of magnitude faster. First, the image is decomposed into its characteristic local frequency content. Then, the basic behavior of the proposed method is given. Next, a carefully chosen set of parameters is introduced that modifies this behavior so as to adapt it to the different local frequency characteristics. Finally, a calibration step determines the right parameter values for every possible local frequency. Once the algorithm is assembled, any image can be processed very quickly: each pixel is attached to its dominant frequency, that frequency serves as a lookup index into the calibration table, the appropriate diffusion parameters are retrieved, and the output color determined for the pixel contributes, in expectation, to underlining the structure it belongs to.
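As a baseline for the error-diffusion family discussed above, here is a sketch of the classic Floyd-Steinberg algorithm with fixed coefficients (the proposed method instead varies its diffusion parameters with local frequency content):

    import numpy as np

    def floyd_steinberg(img):
        """Binary error-diffusion halftone of a grayscale image in [0, 1]."""
        f = np.asarray(img, float).copy()
        out = np.zeros_like(f)
        h, w = f.shape
        for y in range(h):
            for x in range(w):
                out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
                e = f[y, x] - out[y, x]          # quantization error to diffuse
                if x + 1 < w:
                    f[y, x + 1] += e * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        f[y + 1, x - 1] += e * 3 / 16
                    f[y + 1, x] += e * 5 / 16
                    if x + 1 < w:
                        f[y + 1, x + 1] += e * 1 / 16
        return out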
APA, Harvard, Vancouver, ISO, and other styles