
Dissertations / Theses on the topic 'Discrete sine transform'

Consult the top 20 dissertations / theses for your research on the topic 'Discrete sine transform.'


1

MASERA, MAURIZIO. "Transform algorithms and architectures for video coding." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2709432.

Full text
Abstract:
In recent years, the increasing popularity of very high resolution formats and the growth of video applications have raised critical issues concerning both the coding efficiency and the complexity of video compression systems. This thesis focuses on the transform coding stage of the most recent video coding technologies, addressing both complexity evaluation and the design of custom hardware architectures. First, the thesis thoroughly analyzes the complexity of the HEVC transform stage, relying on the proposed CI metric. A tool-by-tool investigation quantifies the complexity of the transform stage as a function of different coding options, identifying which parameters chiefly govern the quality-complexity trade-off. The analysis concludes by determining the transform-stage requirements of real-life HEVC encoders working in real time. The obtained results motivate the need for dedicated transform hardware architectures. Therefore, an exploration of alternative hardware architectures for computing the transforms required by the standard is provided. Different DCT and DST factorizations and approximations are compared in terms of arithmetic cost and of the degree of sharing between multiple transform sizes, in order to select the most promising ones for hardware implementation. Then, 1-D and 2-D transform architectures are proposed and their performance is evaluated within HEVC. The rate-distortion analysis and the hardware synthesis show that the proposed DCT and DST architectures achieve significant area and power reductions with respect to state-of-the-art implementations, at the expense of a very small coding-efficiency loss. Finally, this thesis investigates adaptive transform coding techniques for beyond-HEVC video compression, which aim to represent the signal more sparsely. The first technique employs odd-type sinusoidal transforms.
Relationships among odd-type DCTs and DSTs are recalled and exploited to reuse known factorizations of the DCT-VI and DST-VII, obtaining other odd-type transforms by simple permutations and sign inversions. Low-complexity DCT-V and DCT-VIII implementations are derived and realized as accelerators for the transform functions of the future video coding technology. The synthesis results show lower hardware cost and improved efficiency with respect to the reference implementations. The second technique relies on the steerable method, which is extended to design directional transforms starting from any two-dimensional separable transform. The HEVC steerable integer DCT and DST are then defined and integrated on top of the standard. Simulations show coding-efficiency gains when using transforms with different orientations. Finally, the application of steerable transforms to partial video encryption is also analyzed.
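The permutation-and-sign relation between odd-type transforms that this abstract alludes to can be checked numerically. A minimal sketch, assuming the usual JVET-style DST-VII and DCT-VIII basis definitions (the thesis's own factorizations are not reproduced here):

```python
import numpy as np

def dst7(N):
    """DST-VII basis matrix: rows = frequency index k, columns = position n."""
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.sqrt(4.0 / (2 * N + 1)) * np.sin(np.pi * (2 * k + 1) * (n + 1) / (2 * N + 1))

def dct8(N):
    """DCT-VIII basis matrix with the same indexing convention."""
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.sqrt(4.0 / (2 * N + 1)) * np.cos(np.pi * (2 * k + 1) * (2 * n + 1) / (2 * (2 * N + 1)))

N = 8
S7, C8 = dst7(N), dct8(N)
signs = (-1.0) ** np.arange(N)
# DCT-VIII = DST-VII applied to the time-reversed input, with odd rows negated
assert np.allclose(C8, signs[:, None] * S7[:, ::-1])
```

Here the DCT-VIII is obtained from the DST-VII by reversing the input order and flipping the sign of every other output, exactly the kind of low-cost reuse the abstract describes.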
APA, Harvard, Vancouver, ISO, and other styles
2

Martucci, Stephen A. "Symmetric convolution and the discrete sine and cosine transforms : principles and applications." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/15038.

Full text
3

Logette, Patrice. "Etude et réalisation d'un processeur acousto-optique numérique de traitement des signaux." Valenciennes, 1997. https://ged.uphf.fr/nuxeo/site/esupversions/bbfd31df-2499-46a6-843d-52f346b1db41.

Full text
Abstract:
A hybrid acousto-optic/digital system, mainly focused on FIR filtering, was developed in the laboratory during a previous thesis. The aim of the present work is, on the one hand, to improve the existing system and, on the other hand, to test the ability of the modified system to perform other types of computation. We begin with a summary of the work on the earlier system in order to frame the problem. We then describe the design of the new system. A first part covers the modifications to the electronic circuits, using Altera programmable logic devices. A second part is devoted to control: it details the system's control software, the creation and use of independent modules for each type of computation, and the associated utilities (simulation, algorithm generation). We conclude with a presentation of several example computations (FIR, IIR, DFT, DCT, correlation) and evaluate the performance of our system for each of these types of operation. The overall outcome is fairly satisfactory, although the contribution of the Altera devices did not live up to our expectations. IIR filtering performs worst and would require the investigation of other algorithms. However, to be truly operational, the acousto-optic part would have to be improved or, in the medium term, replaced by an all-digital implementation. We would then have a simple, practical system for simulating, testing, or validating, on a prototype, algorithms or subsystems developed in the laboratory in various areas of signal processing.
4

Coudoux, François-Xavier. "Evaluation de la visibilité des effets de blocs dans les images codées par transformée : application à l'amélioration d'images." Valenciennes, 1994. https://ged.uphf.fr/nuxeo/site/esupversions/a0a7cc38-609d-4d86-9c3a-a018590bc012.

Full text
Abstract:
Lossy coding methods based on the Discrete Cosine Transform form the basis of most current digital image compression standards. This type of transform coding requires a prior segmentation of the image into disjoint blocks of pixels. The blocking effect is the main artifact of this type of coding: the boundaries between adjacent blocks become visible at high compression ratios. This artifact is particularly annoying to the observer and severely affects the visual quality of the reconstructed image. The goal of our study is to propose a method for locally detecting these artifacts, together with a measure of their visual importance. This measure, which takes into account several properties of the human visual system, characterizes the degradation introduced into the image by the blocking distortion. It is used to establish a global quality criterion for JPEG-coded images, which quantifies the quality of the reconstructed images by assigning a quality score to the degraded image. A direct application of the blocking-visibility measure is the detection and correction of these artifacts within a block-processed image. We present an original method for reducing blocking effects; it consists of local filtering adapted to the visibility of the blocking artifacts in the image. The correction appreciably reduces the most visible artifacts without degrading the rest of the image. The method is validated on still JPEG-coded images; its direct extension to the MPEG-1 and MPEG-2 video compression standards remains possible, possibly requiring the temporal properties of vision to be taken into account.
A hardware implementation is envisaged in the form of an electronic circuit, which could be used in multimedia consultation terminals to improve the visual quality of the image before final display.
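To illustrate the visibility-adaptive deblocking idea described in this abstract, here is a deliberately crude sketch; the blockiness measure and threshold below are hypothetical stand-ins, not the perceptual model of the thesis:

```python
import numpy as np

def deblock_vertical(img, block=8, threshold=4.0):
    """Smooth only those vertical block boundaries whose luminance step is 'visible'.

    The per-boundary visibility here is just the mean absolute step across the
    boundary; a real system would weight it by human-visual-system properties.
    """
    out = img.astype(float).copy()
    for x in range(block, out.shape[1], block):
        visibility = np.mean(np.abs(out[:, x] - out[:, x - 1]))
        if visibility > threshold:                  # filter only visible edges
            mean = 0.5 * (out[:, x - 1] + out[:, x])
            out[:, x - 1] = mean
            out[:, x] = mean
    return out

# a synthetic image with one strong 8x8 block discontinuity
img = np.zeros((8, 16))
img[:, 8:] = 20.0
smoothed = deblock_vertical(img)
```

Boundaries below the threshold are left untouched, so smooth regions of the image are not degraded, mirroring the adaptive behaviour claimed in the abstract.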
5

Coquelin, Loïc. "Contribution aux traitements des incertitudes : application à la métrologie des nanoparticules en phase aérosol." Phd thesis, Supélec, 2013. http://tel.archives-ouvertes.fr/tel-01066760.

Full text
Abstract:
The objective of this thesis is to provide SMPS (Scanning Mobility Particle Sizer) users with a methodology for computing the uncertainties associated with the estimation of aerosol number size distributions. The measurement result is the count of aerosol particles as a function of time. Estimating the aerosol number size distribution from CNC measurements amounts to solving an inverse problem under uncertainty. A review of existing models is presented in the first chapter; the physical model adopted is the consensus model in this application domain. In the second chapter, a criterion for estimating the number size distribution that couples regularization techniques with a decomposition on a wavelet basis is described. The novelty of this work lies in the estimation of size distributions exhibiting both slow and fast variations. The multi-scale approach we propose for the new regularization criterion makes it possible to adjust the regularization weights on each scale of the signal. The method is then compared with classical regularization. The results show that the estimates produced by the new method are better (in the MSE sense) than the classical estimates. The last chapter deals with the propagation of uncertainty through the data-inversion model. This is a first in the application domain, since no uncertainty is currently associated with the measurement result. Unlike the classical approach, which uses a fixed inversion model and assigns the uncertainty to the inputs, we propose using a random inversion model (Monte Carlo sampling) in order to account for model errors. A mean estimate of the aerosol number size distribution and an associated uncertainty, in the form of a 95% confidence region, are finally presented for several real measurements.
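The regularized inversion at the core of this work can be illustrated with plain Tikhonov regularization. This is a sketch only: the thesis additionally adapts the regularization weights per wavelet scale, and the smoothing kernel below is a made-up operator, not an SMPS response matrix:

```python
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Regularized least squares: argmin ||A x - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# ill-conditioned Gaussian smoothing kernel: a stand-in for an instrument response
n = 50
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)

rng = np.random.default_rng(1)
x_true = np.sin(np.linspace(0, np.pi, n))       # smooth "size distribution"
b = A @ x_true + 1e-3 * rng.standard_normal(n)  # noisy indirect measurement
x_reg = tikhonov_inverse(A, b, lam=1e-2)
```

The regularization term tames the noise amplification that a naive inversion of the near-singular kernel would produce, at the cost of a small bias on the smooth solution.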
6

Cook, James Allen. "A decompositional investigation of 3D face recognition." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16653/1/James_Allen_Cook_Thesis.pdf.

Full text
Abstract:
Automated Face Recognition is the process of determining a subject's identity from digital imagery of their face without user intervention. The term in fact encompasses two distinct tasks: Face Verification is the process of verifying a subject's claimed identity, while Face Identification involves selecting the most likely identity from a database of subjects. This dissertation focuses on the task of Face Verification, which has a myriad of applications in security ranging from border control to personal banking. Recently the use of 3D facial imagery has found favour in the research community due to its inherent robustness to the pose and illumination variations which plague the 2D modality. The field of 3D face recognition is, however, yet to fully mature and there remain many unanswered research questions particular to the modality. The relative expense and specialty of 3D acquisition devices also means that the availability of databases of 3D face imagery lags significantly behind that of standard 2D face images. Human recognition of faces is rooted in an inherently 2D visual system and much is known regarding the use of 2D image information in the recognition of individuals. The corresponding knowledge of how discriminative information is distributed in the 3D modality is much less well defined. This dissertation addresses these issues through the use of decompositional techniques. Decomposition alleviates the problems associated with dimensionality explosion and the Small Sample Size (SSS) problem, and spatial decomposition is a technique which has been widely used in face recognition. The application of decomposition in the frequency domain, however, has not received the same attention in the literature. The use of decomposition techniques allows a mapping of the regions (both spatial and frequency) which contain the discriminative information that enables recognition.
In this dissertation these techniques are covered in significant detail, both in terms of practical issues in the respective domains and in terms of the underlying distributions which they expose. Significant discussion is given to the manner in which the inherent information of the human face is manifested in the 2D and 3D domains and how these two modalities inter-relate. This investigation is extended to cover the manner in which the decomposition techniques presented can be recombined into a single decision. Two new methods for learning the weighting functions for both the sum and product rules are presented, along with extensive testing against established methods. Knowledge acquired from these examinations is then used to create a combined technique termed Log-Gabor Templates. The proposed technique utilises both the spatial and frequency domains to extract performance superior to either in isolation. Experimentation demonstrates that the spatial and frequency domain decompositions are complementary and can be combined to give improved performance and robustness.
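The "Log-Gabor" component of such templates is conventionally built from the standard radial log-Gabor frequency response G(f) = exp(-ln^2(f/f0) / (2 ln^2(sigma/f0))). A small sketch (the parameter values are illustrative assumptions, not the thesis's settings):

```python
import numpy as np

def log_gabor_radial(size, f0=0.125, sigma_ratio=0.55):
    """Radial log-Gabor frequency response on a size x size DFT grid (DC forced to 0)."""
    fx = np.fft.fftfreq(size)
    fy = np.fft.fftfreq(size)
    f = np.hypot(fx[None, :], fy[:, None])
    g = np.zeros_like(f)
    nz = f > 0                                   # log-Gabor is undefined at f = 0
    g[nz] = np.exp(-np.log(f[nz] / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    return g

g = log_gabor_radial(64)
```

Unlike an ordinary Gabor filter, this response has no DC component and keeps a Gaussian shape on a logarithmic frequency axis, which is why it is popular for frequency-domain face decompositions.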
7

Cook, James Allen. "A decompositional investigation of 3D face recognition." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16653/.

Full text
8

Vezzoli, Massimiliano. "Intrinsic kinetics of titania photocatalysis : simplified models for their investigation." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/51574/1/Massimiliano_Vezzoli_Thesis.pdf.

Full text
Abstract:
Even though titanium dioxide photocatalysis has been promoted as a leading green technology for water purification, many issues have hindered its application on a large commercial scale. For the materials scientist the main issues have centred on the synthesis of more efficient materials and the investigation of degradation mechanisms; for the engineers, the main issues have been the development of appropriate models and the evaluation of intrinsic kinetics parameters that allow the scale-up or re-design of efficient large-scale photocatalytic reactors. In order to obtain intrinsic kinetics parameters the reaction must be analysed and modelled considering the influence of the radiation field, pollutant concentrations and fluid dynamics. In this way, the obtained kinetic parameters are independent of the reactor size and configuration and can subsequently be used for scale-up purposes or for the development of entirely new reactor designs. This work investigates the intrinsic kinetics of phenol degradation over titania film, chosen for the practicality of a fixed-film configuration over a slurry. A flat plate reactor was designed to allow control of reaction parameters including the UV irradiance, flow rates, pollutant concentration and temperature. Particular attention was paid to the investigation of the radiation field over the reactive surface and to the issue of mass-transfer-limited reactions. The ability of different emission models to describe the radiation field was investigated and compared to actinometric measurements. The RAD-LSI model was found to give the best predictions over the conditions tested. Mass transfer issues often limit fixed-film reactors. The influence of this phenomenon was investigated with specifically planned sets of benzoic acid experiments and with the adoption of the stagnant film model. The phenol mass transfer coefficient in the system was calculated to be k_m,phenol = 8.5815x10^-7 Re^0.65 (m s^-1).
The data obtained from a wide range of experimental conditions, together with an appropriate model of the system, enabled determination of intrinsic kinetic parameters. The experiments were performed at four different irradiance levels (70.7, 57.9, 37.1 and 20.4 W m^-2), combined with three different initial phenol concentrations (20, 40 and 80 ppm), to give a wide range of final pollutant conversions (from 22% to 85%). The simple model adopted was able to fit the wide range of conditions with only four kinetic parameters: two reaction rate constants (one for phenol and one for the family of intermediates) and their corresponding adsorption constants. The intrinsic kinetic parameter values were defined as k_ph = 0.5226 mmol m^-1 s^-1 W^-1, k_I = 0.120 mmol m^-1 s^-1 W^-1, K_ph = 8.5x10^-4 m^3 mmol^-1 and K_I = 2.2x10^-3 m^3 mmol^-1. The flat plate reactor allowed the investigation of the reaction under two different light configurations: liquid-side and substrate-side illumination. The latter is of particular interest for real-world applications, where light absorption due to turbidity and pollutants contained in the water stream to be treated could represent a significant issue. The two light configurations allowed the investigation of the effects of film thickness and the determination of the optimal catalyst thickness. The experimental investigation confirmed the predictions of a porous-medium model developed to investigate the influence of diffusion, advection and photocatalytic phenomena inside the porous titania film, with the optimal thickness identified at 5 µm. The model used the intrinsic kinetic parameters obtained from the flat plate reactor to predict the influence of thickness and transport phenomena on the final observed phenol conversion without using any correction factor; the excellent match between predictions and experimental results provided further proof of the quality of the parameters obtained with the proposed method.
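The reported mass-transfer correlation is easy to evaluate directly; a minimal sketch using only the numbers quoted in the abstract:

```python
def k_m_phenol(Re):
    """Phenol mass-transfer coefficient (m/s) from the reported correlation
    k_m = 8.5815e-7 * Re**0.65."""
    return 8.5815e-7 * Re ** 0.65

# sub-linear growth with Reynolds number: a tenfold flow increase gives
# less than a tenfold mass-transfer improvement
rates = [k_m_phenol(Re) for Re in (100.0, 1000.0, 10000.0)]
```

Such a check is useful when deciding whether a planned operating point is mass-transfer limited before fitting intrinsic kinetic parameters.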
9

Rodesten, Stephan. "Program för frekvensanalys." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-58157.

Full text
Abstract:
This report covers the work process behind creating a spectrum analyzer. The reader will learn about the chosen method as well as alternative methods. In addition, the theoretical background of each step is examined and compared with potential alternative solutions. The project was carried out on behalf of KA Automation. Its purpose was to create a base platform for analyzing sound frequencies, with the goal of identifying sound signatures, in the form of frequencies, of for example servo motors in water pumps. The idea is that, in a later development stage, the platform could identify if and when new frequencies appear in the sound profile, which may indicate that the motor needs service. The platform is built with C# and the audio-processing library NAudio. From the results it can be concluded that the program can analyze sound and display the magnitude of its frequency components, and is therefore a suitable base platform for further development.
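The core of such a spectrum analyzer is an FFT magnitude computation. The thesis uses C# with NAudio; an equivalent sketch in Python (the test tone and sample rate are illustrative):

```python
import numpy as np

def magnitude_spectrum(samples, rate):
    """Magnitudes of the positive-frequency bins of a real-valued signal."""
    spec = np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return freqs, spec

# one second of a pure 440 Hz tone should peak in the 440 Hz bin
rate = 8000
t = np.arange(rate) / rate
freqs, spec = magnitude_spectrum(np.sin(2 * np.pi * 440 * t), rate)
```

With exactly one second of data the bin spacing is 1 Hz, so the tone falls on a single bin with normalized magnitude 0.5 (half the amplitude, the other half being in the negative-frequency bin the rfft omits).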
10

Peng, Chun-cheng, and 彭俊程. "IMAGE INTERPOLATION BY APPLICATION OF DISCRETE SINE TRANSFORM." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/31767727391176020662.

Full text
Abstract:
Master's thesis
Tatung University
Department of Electrical Engineering
Academic year 101 (ROC calendar)
Interpolation is widely used in image processing, with common techniques including polynomial interpolation and zero-padded Discrete Fourier Transform interpolation. These methods often leave artifacts, however, and further refinement is needed for current visual applications. In this thesis we found that interpolation based on the Discrete Sine/Cosine Transform performs satisfactorily for one-dimensional image processing, particularly the so-called DST-I, according to our experiments. Some limitations of DST-I found in previous work have been alleviated by our team, and an improved version of DST-II has also been derived and applied to interpolation. The visual differences between images enlarged with DST-I and DST-II are displayed and compared in the later chapters. Common picture-viewer software usually produces visible defects when enlarging images, which gives an unpleasant experience in applications that require visual detail. Our team has successfully adapted the Discrete Sine Transform, previously considered inapplicable to one-dimensional image interpolation, and obtained pleasing results. The procedures and effects of the DST-I and DST-II techniques are described, and the enlarged images are displayed and compared in this work.
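DST interpolation of the kind studied here can be sketched as zero-padding in the transform domain. The following is an assumed reconstruction of the general technique, not the thesis's exact algorithm; note the rescaling required by SciPy's unnormalized DST-I:

```python
import numpy as np
from scipy.fft import dst, idst

def dst1_interpolate(x, M):
    """Upsample x (length N) to M points by zero-padding its DST-I spectrum.

    DST-I implicitly assumes the signal is zero just outside both ends,
    which is what makes it attractive for interpolation.
    """
    N = len(x)
    coeffs = dst(x, type=1)                          # unnormalized analysis
    padded = np.concatenate([coeffs, np.zeros(M - N)])
    return idst(padded, type=1) * (M + 1) / (N + 1)  # synthesis + rescale

# a pure DST-I basis function is interpolated exactly
N, M = 8, 16
x = np.sin(np.pi * (np.arange(N) + 1) / (N + 1))
y = dst1_interpolate(x, M)
assert np.allclose(y, np.sin(np.pi * (np.arange(M) + 1) / (M + 1)))
```

The implicit zeros at both ends are also the limitation the abstract mentions: signals that are not near zero at the boundaries must first be adjusted before DST-I interpolation behaves well.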
11

蔡懿飛. "Implementation of FPGA-Based 2-D Discrete Sine Transform." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/73146423204710703802.

Full text
Abstract:
Master's thesis
Minghsin University of Science and Technology
Institute of Electrical Engineering
Academic year 97 (ROC calendar)
The discrete sine transform (DST) can be applied in many fields according to its characteristics, including signal processing, digital filtering, and image coding. This thesis adopts the row-column method to implement the 2-D discrete sine transform, which reduces implementation cost. Within this structure, a recursive algorithm is used to implement the one-dimensional discrete sine transform (1-D DST): the algorithm decomposes a higher-order DST into two lower-order DSTs. To reduce circuit complexity, the algorithm uses a recursive computing structure that requires fewer multipliers and adders.
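The row-column decomposition mentioned above reduces a 2-D DST to 1-D passes along each axis; a minimal sketch (using DST-I via SciPy rather than the thesis's recursive hardware structure):

```python
import numpy as np
from scipy.fft import dst

def dst2d(block):
    """2-D DST-I via the row-column method: 1-D DSTs along rows, then columns."""
    return dst(dst(block, type=1, axis=1), type=1, axis=0)

# separability check: the 2-D transform of an outer product factorizes
a = np.arange(1.0, 9.0)
b = np.arange(2.0, 10.0)
T = dst2d(np.outer(a, b))
assert np.allclose(T, np.outer(dst(a, type=1), dst(b, type=1)))
```

This separability is exactly what makes the row-column method cheap in hardware: one 1-D transform unit can be time-shared across rows and then columns.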
12

林佑城. "A Fast Recursive Algorithm for Computing the Discrete Sine Transform." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/96571008701786920907.

Full text
Abstract:
Master's thesis
Minghsin University of Science and Technology
Institute of Electrical Engineering
Academic year 95 (ROC calendar)
The discrete sine transform (DST) is widely applied in various fields, including image data compression, digital filtering, image reconstruction, and image coding. This paper presents a recursive algorithm for the DST with a structure that allows the next higher-order DST to be generated from two identical lower-order DSTs. As a result, implementing this recursive DST requires fewer multipliers and adders than other DST algorithms.
13

Liu, Min Jea, and 劉明傑. "Studies of Discrete Sine Transform for Channel Estimation of OFDM Systems." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/78313744771085475330.

Full text
Abstract:
Master's thesis
Tatung University
Graduate Institute of Communication Engineering
Academic year 98 (ROC calendar)
Orthogonal Frequency Division Multiplexing (OFDM) is the modulation technology at the heart of today's fourth-generation mobile communication systems. Recent wireless standards such as IEEE 802.11a, WiMAX (Worldwide Interoperability for Microwave Access), and LTE (Long-Term Evolution) all use OFDM multicarrier modulation, which divides a high-bit-rate data stream into several parallel lower-bit-rate streams and modulates each stream onto a separate carrier. OFDM uses wide bandwidth efficiently, reduces noise, improves security, and mitigates multipath fading. Rayleigh fading is an important cause of degraded wireless communication quality, and pilot-aided channel estimation for OFDM has been proposed to overcome it. In IEEE 802.11a, data subcarriers lie beyond the leftmost and rightmost pilots; the commonly used DCT-based channel-estimation interpolation cannot estimate the subcarriers outside the outermost pilots, a problem that had not previously been addressed. We therefore use a DST-based method to solve it. When the up-sampling factor of the DST channel estimator is even, we propose a forward transform using DST type II and an inverse transform using DST type I (covering 0 to 180 degrees) to correct the pilot-position misalignment that an even up-sampling factor would otherwise cause. Simulations under IEEE 802.11a and Rayleigh fading show that DST-based channel-estimation interpolation outperforms the DCT-based method. Keywords: 802.11a, OFDM, DCT, DST channel estimation.
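The pilot-aided estimation setting can be sketched as follows. This toy example uses least-squares estimation at the pilots and plain linear interpolation (the pilot positions and channel below are made up, not the 802.11a layout); its flat extrapolation beyond the outermost pilots illustrates exactly the weakness that a transform-domain interpolator targets:

```python
import numpy as np

num_sc = 52                                   # subcarrier count, 802.11a-like
pilot_idx = np.array([5, 19, 32, 46])         # illustrative pilot positions
h_true = np.exp(-1j * 2 * np.pi * np.arange(num_sc) / num_sc)  # smooth toy channel

tx_pilots = np.ones(len(pilot_idx))           # known pilot symbols
rx_pilots = h_true[pilot_idx] * tx_pilots     # noiseless received pilots, for clarity
h_ls = rx_pilots / tx_pilots                  # least-squares estimates at the pilots

# interpolate real and imaginary parts separately over all subcarriers
sc = np.arange(num_sc)
h_hat = (np.interp(sc, pilot_idx, h_ls.real)
         + 1j * np.interp(sc, pilot_idx, h_ls.imag))
```

Outside the outermost pilots, np.interp simply holds the edge value, so the first and last subcarriers are badly estimated; this is the gap a DST-based interpolation is meant to close.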
14

Lin, Hong-Ren, and 林泓任. "THE INTRINSIC STUDIES OF DISCRETE COSINE AND SINE TRANSFORM AND THEIR APPLICATIONS ON DIGITAL INTERPOLATION." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/96508578209503785158.

Full text
Abstract:
Master's thesis
Tatung University
Graduate Institute of Communication Engineering
Academic year 101 (ROC calendar)
Interpolation is the process of constructing new data points within the range of a discrete set of known data. Traditional polynomial interpolation trades computational complexity for accuracy in the time domain. Another traditional approach applies zero-padding after the discrete Fourier transform (DFT), but this method may introduce edge effects that distort the estimated results. The Discrete Cosine Transform (DCT) and Discrete Sine Transform (DST) have many variant definitions, which often confuse users about the number of data points or the data intervals. This thesis develops a classification based on even/odd expansion and data-point/virtual-point symmetry, and discusses various interpolation applications under different conditions. The DCT, widely used in compression and decompression, has been considered inapplicable to interpolation in its type-II form (DCT-II). In this thesis we investigate its intrinsic properties via even symmetry and propose improved algorithms: with proper modification, DCT-II can be applied to interpolation. Meanwhile, the Discrete Sine Transform type I (DST-I) performs extraordinarily well thanks to the zero points at the front and end positions, yet this restricts the form of the data; we propose algorithms that make any type of data available for interpolation. Moreover, with a new modification, DST-II can also be applied to interpolation accurately.
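The "virtual-point" even symmetry underlying the DCT-II can be made concrete: the DCT-II equals the DFT of the input mirrored without repeating the end samples, up to a half-sample phase factor. A small numerical check (using SciPy's unnormalized DCT-II convention):

```python
import numpy as np
from scipy.fft import dct

N = 16
rng = np.random.default_rng(2)
x = rng.standard_normal(N)

# mirror without repeating the end samples: the symmetry axes fall between
# samples ("virtual points"), i.e. half-sample even symmetry
y = np.concatenate([x, x[::-1]])
Y = np.fft.fft(y)[:N]
phase = np.exp(-1j * np.pi * np.arange(N) / (2 * N))  # half-sample shift
dct_via_dft = (phase * Y).real
```

Classifying each DCT/DST type by where these symmetry axes fall (on a data point or on a virtual point, even or odd) is precisely the kind of taxonomy the thesis builds.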
15

Wu, Tung-Ming, and 吳通明. "THE APPLICATION OF IMAGE ZOOMING BY USING THE COMBINATION OF LINEAR INTERPOLATION AND DISCRETE SINE TRANSFORM." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/87z886.

Full text
Abstract:
Master's thesis
Tatung University
Graduate Institute of Communication Engineering
Academic year 102 (ROC calendar)
Interpolation has been widely used in digital image processing. Commonly used methods include interpolation via the discrete Fourier transform with zero padding and polynomial interpolation. Recently, interpolation methods based on the discrete sine and cosine transforms have performed better in the literature. In our team's experiments, the discrete sine transform type I has been shown to perform outstandingly on one-dimensional signals. When DST type I or type II is applied to image interpolation, however, artificial ripple effects appear, especially in areas with significant color shading. In this thesis we aim to suppress the ripples while preserving visual resolution. We propose applying a combination of linear interpolation and the discrete sine transform for image interpolation: at a fixed mixing ratio, the ripples can be eliminated while good resolution is maintained. Simulation results show that the proposed algorithm works well for interpolating different kinds of images.
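The proposed combination can be sketched as a convex blend of a linear and a DST-I interpolant. This is an assumed reconstruction of the general idea, not the thesis's exact algorithm; the mixing ratio alpha is an illustrative choice:

```python
import numpy as np
from scipy.fft import dst, idst

def dst1_upsample(x, M):
    """Zero-pad the DST-I spectrum, inverse-transform, and rescale."""
    N = len(x)
    padded = np.concatenate([dst(x, type=1), np.zeros(M - N)])
    return idst(padded, type=1) * (M + 1) / (N + 1)

def blended_upsample(x, M, alpha=0.5):
    """Convex blend of linear and DST-I interpolation at a fixed ratio alpha."""
    N = len(x)
    # DST-I places the N known samples at t = 1..N on an implicit 0..N+1 grid
    t_new = (np.arange(M) + 1) * (N + 1) / (M + 1)
    linear = np.interp(t_new, np.arange(1, N + 1), x)
    return alpha * linear + (1 - alpha) * dst1_upsample(x, M)

x = np.array([0.0, 3.0, 1.0, 4.0, 2.0, 5.0, 1.0, 2.0])
y = blended_upsample(x, 16)
```

The linear term is ripple-free but blurry, while the DST term is sharp but can ring near strong edges; the blend trades one artifact against the other, which is the intuition behind the proposed method.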
APA, Harvard, Vancouver, ISO, and other styles
16

Suresh, K. "MDCT Domain Enhancements For Audio Processing." Thesis, 2010. http://etd.iisc.ernet.in/handle/2005/1184.

Full text
Abstract:
Modified discrete cosine transform (MDCT), derived from the DCT IV, has emerged as the most suitable choice for transform domain audio coding applications due to its time domain alias cancellation property and de-correlation capability. In the present research work, we focus on MDCT domain analysis of audio signals for compression and other applications. We have derived algorithms for linear filtering in the DCT IV and DST IV domains for symmetric and non-symmetric filter impulse responses. These results are also extended to the MDCT and MDST domains, which have the special property of time domain alias cancellation. We also derive filtering algorithms for the DCT II and DCT III domains. Comparison with other methods in the literature shows that the new algorithm is efficient in terms of multiply-accumulate (MAC) operations. These results are useful for MDCT domain audio processing, such as reverb synthesis, without having to reconstruct the time domain signal and then perform the necessary filtering operations. In audio coding, the psychoacoustic model plays a crucial role and is used to estimate the masking thresholds for adaptive bit allocation. Transparent quality audio coding is possible if the quantization noise is kept below the masking threshold for each frame. In the existing methods, the masking threshold for MDCT domain adaptive quantization is calculated separately, using the DFT of the signal frame. We have extended the spectral integration based psychoacoustic model proposed for sinusoidal modeling of audio signals to the MDCT domain. This has been possible because of a detailed analysis of the relation between the DFT and the MDCT; we interpret the MDCT coefficients as co-sinusoids and then apply the sinusoidal masking model. The validity of the masking threshold so derived is verified through listening tests as well as objective measures. Parametric coding techniques are used for low bit rate encoding of multi-channel audio, such as 5.1-format surround audio.
In these techniques, the surround channels are synthesized at the receiver using the analysis parameters of the parametric model. We develop algorithms for MDCT domain analysis and synthesis of reverberation. Integrating these ideas, a parametric audio coder is developed in the MDCT domain. For parameter estimation, we use a novel analysis-by-synthesis scheme in the MDCT domain, which results in better modeling of the spatial audio. The resulting parametric stereo coder is able to synthesize acceptable quality stereo audio from the mono audio channel and side information of approximately 11 kbps. Further, an experimental audio coder is developed in the MDCT domain incorporating the new psychoacoustic model and the parametric model.
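The time domain alias cancellation property that makes the MDCT attractive can be demonstrated in a short sketch. This is illustrative code, not from the thesis; the normalization and the Princen-Bradley sine window are one common convention among several:

```python
import numpy as np

def mdct(frame):
    """Direct MDCT: a 2N-sample frame maps to N coefficients."""
    N = len(frame) // 2
    n, k = np.arange(2 * N), np.arange(N)
    C = np.cos(np.pi / N * np.outer(n + 0.5 + N / 2, k + 0.5))
    return frame @ C

def imdct(X):
    """Inverse MDCT (2/N normalization chosen for perfect reconstruction)."""
    N = len(X)
    n, k = np.arange(2 * N), np.arange(N)
    C = np.cos(np.pi / N * np.outer(n + 0.5 + N / 2, k + 0.5))
    return (2.0 / N) * (C @ X)

N = 8
# Princen-Bradley sine window: w[n]^2 + w[n+N]^2 = 1
w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
rng = np.random.default_rng(0)
x = rng.standard_normal(4 * N)

# three 50%-overlapping frames; window at analysis and synthesis, overlap-add
y = np.zeros(4 * N)
for i, f in enumerate([x[0:2*N], x[N:3*N], x[2*N:4*N]]):
    y[i*N:i*N + 2*N] += w * imdct(mdct(w * f))
```

Each inverse transform alone contains time domain aliasing; overlap-adding adjacent windowed frames cancels it, so the middle region covered by two frames is reconstructed exactly.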
APA, Harvard, Vancouver, ISO, and other styles
17

Chang, Chun. "A Spatially-filtered Finite-difference Time-domain Method with Controllable Stability Beyond the Courant Limit." Thesis, 2012. http://hdl.handle.net/1807/32460.

Full text
Abstract:
This thesis introduces spatial filtering, a technique that extends the time step size beyond the conventional stability limit of the Finite-Difference Time-Domain (FDTD) method, at the expense of transforming field nodes between the spatial domain and the discrete spatial-frequency domain and removing undesired spatial-frequency components at every FDTD update cycle. The spatially-filtered FDTD method is demonstrated, through theory and numerical examples, to be almost as accurate as and more efficient than the conventional FDTD method. This thesis then combines spatial filtering with an existing subgridding scheme to form the spatially-filtered subgridding scheme, which is more efficient than existing subgridding schemes because it allows the time step size used in the dense mesh to be larger than the dense-mesh CFL limit. However, trade-offs between accuracy and efficiency are required in complicated structures.
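The core idea can be illustrated with a one-dimensional toy version, our own sketch in normalized units rather than the thesis's scheme: spatial-frequency modes that would be unstable at the enlarged time step are removed by an FFT-based filter at every update, so the leapfrog scheme stays bounded at twice the Courant limit:

```python
import numpy as np

# 1D free-space FDTD in normalized units (c = 1, dx = 1), periodic grid,
# run at twice the conventional 1D Courant limit dt = dx/c.
nx = 128
dx = 1.0
cfl_factor = 2.0
dt = cfl_factor * dx

# 1D FDTD stability per mode requires (c*dt/dx)*sin(k*dx/2) <= 1,
# so keep only the spatial frequencies satisfying sin(k*dx/2) <= 1/cfl_factor.
k = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
keep = np.abs(np.sin(k * dx / 2)) <= 1.0 / cfl_factor

def spatial_filter(f):
    """Zero out the spatial-frequency components that are unstable at dt."""
    return np.real(np.fft.ifft(np.fft.fft(f) * keep))

e = np.exp(-((np.arange(nx) - nx // 2) ** 2) / 50.0)   # Gaussian pulse
h = np.zeros(nx)

for _ in range(200):
    h += dt / dx * (np.roll(e, -1) - e)    # update H from curl of E
    e += dt / dx * (h - np.roll(h, 1))     # update E from curl of H
    e, h = spatial_filter(e), spatial_filter(h)   # the spatial-filtering step
```

Without the filtering step, the highest-frequency modes grow exponentially at this time step and the fields overflow within a few dozen updates; with it, the retained modes evolve stably.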
APA, Harvard, Vancouver, ISO, and other styles
18

Chang, Chun-Hao, and 張峻豪. "Random Discrete Fractional Cosine and Sine Transforms with Applications." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/78531034712466320492.

Full text
Abstract:
Master's
Chung Yuan Christian University
Master's Program in Communication Engineering
103
In this thesis, we propose new transforms related to the random discrete fractional cosine transform (RDFRCT) and the random discrete fractional sine transform (RDFRST). These include the real random discrete fractional cosine and sine transforms of types I, IV, V and VIII, which are real transforms of the RDFRCT and RDFRST. We also propose random generalized discrete fractional Fourier transform (RGDFRFT) and random generalized discrete fractional Hartley transform (RGDFRHT) matrices with reduced computations. They admit fast algorithms that halve the computations of the RGDFRFT and RGDFRHT. Since all of these transforms are random, they can be applied to image encryption and image watermarking. In image watermarking experiments, we find that the RDFRST of type VIII has the best robustness: it can resist the largest region of cropping attack.
APA, Harvard, Vancouver, ISO, and other styles
19

Wei, Qing Huang, and 魏清煌. "A study on the implementations of discrete sine and cosine transforms." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/63549551112012669131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Zhong, Yangfan. "Joint Source-Channel Coding Reliability Function for Single and Multi-Terminal Communication Systems." Thesis, 2008. http://hdl.handle.net/1974/1207.

Full text
Abstract:
Traditionally, source coding (data compression) and channel coding (error protection) are performed separately and sequentially, resulting in what we call a tandem (separate) coding system. In practical implementations, however, tandem coding might involve a large delay and a high coding/decoding complexity, since one needs to remove the redundancy in the source coding part and then insert certain redundancy in the channel coding part. On the other hand, joint source-channel coding (JSCC), which coordinates source and channel coding or combines them into a single step, may offer substantial improvements over the tandem coding approach. This thesis deals with the fundamental Shannon-theoretic limits for a variety of communication systems via JSCC. More specifically, we investigate the JSCC reliability function (the largest rate of exponential decay of the coding probability of error with increasing blocklength) for the following discrete-time communication systems: (i) discrete memoryless systems; (ii) discrete memoryless systems with perfect channel feedback; (iii) discrete memoryless systems with source side information; (iv) discrete systems with Markovian memory; (v) continuous-valued (particularly Gaussian) memoryless systems; (vi) discrete asymmetric 2-user source-channel systems. For the above systems, we establish upper and lower bounds for the JSCC reliability function and we analytically compute these bounds. The conditions under which the upper and lower bounds coincide are also provided. We show that these conditions are satisfied for a large class of source-channel systems, and hence exactly determine the reliability function. We next provide a systematic comparison between the JSCC reliability function and the tandem coding reliability function (the reliability function resulting from separate source and channel coding).
We show that the JSCC reliability function is substantially larger than the tandem coding reliability function for most cases. In particular, the JSCC reliability function is close to twice as large as the tandem coding reliability function for many source-channel pairs. This exponent gain provides a theoretical underpinning and justification for JSCC design as opposed to the widely used tandem coding method, since JSCC will yield a faster exponential rate of decay for the system error probability and thus provides substantial reductions in complexity and coding/decoding delay for real-world communication systems.
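As background for the channel-coding side of such comparisons, Gallager's random-coding exponent for a binary symmetric channel, the classical lower bound on the channel reliability function, can be computed directly. This snippet is our own illustration and does not reproduce the thesis's joint source-channel exponents:

```python
import numpy as np

def gallager_e0(rho, p):
    """Gallager's E0(rho) for a BSC(p) with uniform input, in bits."""
    a = 1.0 / (1.0 + rho)
    return rho - (1.0 + rho) * np.log2(p ** a + (1.0 - p) ** a)

def random_coding_exponent(R, p, grid=10001):
    """E_r(R) = max over rho in [0, 1] of E0(rho) - rho*R (grid search)."""
    rho = np.linspace(0.0, 1.0, grid)
    return np.max(gallager_e0(rho, p) - rho * R)

p = 0.1
capacity = 1.0 + p * np.log2(p) + (1 - p) * np.log2(1 - p)   # about 0.531 bits
```

The exponent is strictly positive for rates below capacity and vanishes at and above capacity, which is the sense in which the error probability decays exponentially with blocklength.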
Thesis (Ph.D, Mathematics & Statistics) -- Queen's University, 2008-05-13 22:31:56.425
APA, Harvard, Vancouver, ISO, and other styles
