Dissertations / Theses on the topic 'VQ'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'VQ.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Pan, Jenq-Shyang. "Improved algorithms for VQ codeword search, codebook design and codebook index assignment." Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/15581.
Full text
Pecher, Pascal [Verfasser]. "Proteine mit VQ-Motiv : Substrate pflanzlicher MAP-Kinasen und potentielle Regulatoren der Immunabwehr / Pascal Pecher." Halle, 2017. http://d-nb.info/1155760905/34.
Full text
Weyhe, Martin [Verfasser], Dierk [Gutachter] Scheel, Ulla [Gutachter] Bonas, and Thorsten [Gutachter] Nürnberger. "Transcriptional regulation of defence gene expression by a VQ-motif containing protein / Martin Weyhe ; Gutachter: Dierk Scheel, Ulla Bonas, Thorsten Nürnberger." Halle (Saale) : Universitäts- und Landesbibliothek Sachsen-Anhalt, 2019. http://d-nb.info/121072927X/34.
Full text
Weyhe, Martin [Verfasser], Dierk [Gutachter] Scheel, Ulla [Gutachter] Bonas, and Thorsten [Gutachter] Nürnberger. "Transcriptional regulation of defence gene expression by a VQ-motif containing protein / Martin Weyhe ; Gutachter: Dierk Scheel, Ulla Bonas, Thorsten Nürnberger." Halle (Saale) : Universitäts- und Landesbibliothek Sachsen-Anhalt, 2019. http://nbn-resolving.de/urn:nbn:de:gbv:3:4-1981185920-142191.
Full text
Delgado, Júlio António Rocha. "Aplicação empírica da realized volatility ao índice PSI20." Master's thesis, Instituto Superior de Economia e Gestão, 2005. http://hdl.handle.net/10400.5/17746.
Full text
This dissertation studies a new non-parametric volatility estimation procedure recently proposed in the literature, Realized Volatility (RV), obtained by summing the cross-products of high-frequency intraday returns. The main objective is an empirical application of RV to the PSI20 index, focusing on the properties of the conditional and unconditional distributions and comparing them with results already reported in the literature. Considering two series at 5- and 30-minute frequencies, we find that both empirical distributions of RV are non-normal and strongly right-skewed, while the marginal distributions of the logarithm of RV are approximately normal, as are the distributions of the standardized returns. The logarithm of RV shows strong temporal dependence and appears to be well described by a long-memory process. These results are consistent with previous studies. We also find that the logarithm of RV exhibits the asymmetric volatility effect. Treating volatility as an observable variable, instead of latent as in ARCH (AutoRegressive Conditional Heteroskedastic) and Stochastic Volatility (SV) models, we propose modelling the dynamic characteristics of the logarithm of RV with an ARFIMA (Autoregressive Fractionally Integrated Moving Average) model.
In this dissertation, a study is made of a new (non-parametric) method for estimating volatility recently proposed in the literature, Realized Volatility (RV), which is obtained by summing the cross-products of high-frequency intraday returns. The main objective of this study is to perform an empirical application of RV to the PSI20 index, focusing on the conditional and unconditional distributional properties and comparing them with results that have appeared in the literature. Considering two series at 5- and 30-minute frequencies, we find for both that the distributions of RV are non-normal and highly right-skewed, while the distributions of the log of RV are approximately normal, as are the distributions of the standardized returns. The log of RV shows strong temporal dependence and appears to be well described by long-memory processes. Our results are consistent with the findings in the literature. We also find that the log of RV exhibits the asymmetric volatility effect. Treating volatility as an observable variable, instead of latent as in ARCH (AutoRegressive Conditional Heteroskedastic) and Stochastic Volatility (SV) models, we model the dynamic characteristics of the log-RV series using ARFIMA (Autoregressive Fractionally Integrated Moving Average) models.
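As a concrete illustration of the estimator discussed above, here is a minimal sketch (with hypothetical 5-minute prices) of realized variance as the sum of squared intraday log-returns, and of the log transform that the abstract reports to be approximately normal:

```python
import numpy as np

def realized_variance(prices):
    """Realized variance: sum of squared intraday log-returns."""
    log_returns = np.diff(np.log(prices))
    return np.sum(log_returns ** 2)

# Hypothetical 5-minute price path for one trading day.
prices = np.array([100.0, 100.4, 99.9, 100.2, 100.8, 100.5])
rv = realized_variance(prices)
log_rv = np.log(rv)  # the dissertation models log(RV), which is closer to normal
```

The ARFIMA modelling step would then be fitted on a daily series of such `log_rv` values.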
Campos, Victor de Abreu [UNESP]. "Arcabouço para reconhecimento de locutor baseado em aprendizado não supervisionado." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/151725.
Full text
Made available in DSpace on 2017-09-28. Previous issue date: 2017-08-31.
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
The staggering amount of multimedia content accumulated daily has demanded the development of effective retrieval approaches. In this context, speaker recognition tools capable of automatically identifying an individual by voice are highly relevant. This work presents a new speaker recognition approach modelled as a retrieval scenario and using recent unsupervised learning algorithms. The proposed approach considers Mel-Frequency Cepstral Coefficients (MFCCs) and Perceptual Linear Prediction coefficients (PLPs) as speaker features, in combination with multiple probabilistic modelling approaches, specifically vector quantization, Gaussian mixture models, and i-vectors, to compute distances between audio recordings. Rank-based unsupervised learning methods are then used to improve the effectiveness of the retrieval results and, with a K-nearest-neighbours classifier, a decision on the speaker's identity is taken. Experiments were conducted on three public datasets from different scenarios, carrying noise from various sources. The experimental evaluation shows that the proposed approach can achieve high effectiveness. Additionally, relative effectiveness gains of up to +318% were obtained by the unsupervised learning procedure in the speaker retrieval task, and relative accuracy gains of up to +7.05% in the identification task across recordings from different domains.
The huge amount of multimedia content accumulated daily has demanded the development of effective retrieval approaches. In this context, speaker recognition tools capable of automatically identifying a person through their voice are of great relevance. This work presents a novel speaker recognition approach modelled as a retrieval scenario and using recent unsupervised learning methods. The proposed approach considers Mel-Frequency Cepstral Coefficients (MFCCs) and Perceptual Linear Prediction Coefficients (PLPs) as features along with multiple modelling approaches, namely Vector Quantization, Gaussian Mixture Models, and i-vectors, to compute distances among audio objects. Next, rank-based unsupervised learning methods are used to improve the effectiveness of retrieval results and, based on a K-Nearest Neighbors classifier, an identity decision is taken. Several experiments were conducted considering three public datasets from different scenarios, carrying noise from various sources. Experimental results demonstrate that the proposed approach can achieve very high effectiveness. In addition, effectiveness gains of up to +318% were obtained by the unsupervised learning procedure in a speaker retrieval task. Also, accuracy gains of up to +7.05% were obtained by the unsupervised learning procedure in a speaker identification task considering recordings from different domains.
FAPESP: 2015/07934-4
Cronvall, Per. "Vektorkvantisering för kodning och brusreducering." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2377.
Full text
This thesis explores the possibilities of avoiding the issues generally associated with compression of noisy imagery through the use of vector quantization. By utilizing the learning aspects of vector quantization, image processing operations such as noise reduction can be implemented in a straightforward way. Several techniques are presented and evaluated. A direct comparison shows that for noisy imagery, vector quantization, in spite of its simplicity, has clear advantages over MPEG-4 encoding.
Hsu, Ping-Hsen, and 許平顯. "Quadtree-based BTC-VQ Compression." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/35824021316228171636.
Full text
National Chiao Tung University
Department of Computer and Information Science
83
A quadtree-based BTC-VQ coding method for image compression is proposed in this thesis. First, the image is partitioned into nonoverlapping 32*32 subimages. Each subimage is then partitioned by quadtree segmentation according to the smoothness of its gray values. To improve the efficiency of the conventional quadtree structure, we propose three new methods, called position quadtree, bitmap quadtree, and position-VQ quadtree, to encode the segmentation quadtree; the number of bits needed to encode the segmentation quadtree is reduced significantly by these three methods. When coding the segmented blocks, blocks of size 32*32, 16*16, and 8*8 are all coded by their mean value because they are identified as smooth blocks. Blocks of size 4*4 are classified into three categories: smooth blocks, texture blocks, and edge blocks. A smooth block is coded by the mean value of the block; a texture block is coded with a 2-level BTC-VQ, while each edge block is coded with a 3-level BTC with VQ to improve the perceived quality. Simulation results show that good visual quality and low MSE are obtained. In addition, the bit rate is in the range of 0.7-0.9 bpp when the proposed QTBTC-VQ is used.
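The 2-level BTC used above for texture blocks can be sketched generically; this is standard absolute-moment BTC (bitmap plus two reconstruction levels), not the thesis's exact coder:

```python
import numpy as np

def btc_encode(block):
    """Two-level BTC: keep a bitmap plus two reconstruction levels."""
    mean = block.mean()
    bitmap = block >= mean
    # Reconstruction levels: means of the pixels below / at-or-above the block mean.
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    high = block[bitmap].mean() if bitmap.any() else mean
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    return np.where(bitmap, high, low)

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 14, 199, 212],
                  [12, 10, 203, 206]], dtype=float)
bitmap, low, high = btc_encode(block)
recon = btc_decode(bitmap, low, high)
```

For a 4*4 block this costs 16 bitmap bits plus the two levels; the thesis further vector-quantizes these components (BTC-VQ).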
Xu, Ping-Xian, and 許平顯. "QUADTREE-BASED BTC-VQ COMPRESSION." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/68729136187374137773.
Full text
Narasimaham, M. V. S. Phani. "Modified VQ Coders For ECG." Thesis, 1998. http://etd.iisc.ernet.in/handle/2005/2178.
Full textLi, Kuo Yang, and 李國陽. "Context-based Coding for VQ Indices." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/05897064943677893992.
Full text
Da-Yeh University
In-service Master's Program, Department of Computer Science and Information Engineering
96
In image compression using vector quantization, the compression rate of the index file produced by encoding the image with the codebook can be improved by further lossless compression. A simple and fast lossless compression design to encode the vector quantization indices of 2-D images is proposed in this paper. Based on the connections in the index neighborhood, the context of an index is first classified into one of seven classes, and the index is coded with a context designed for that class. The index is compared with previously encoded indices in a predefined search order to check whether the current index value can be found in the neighboring region or in a memory array recording the n distinct values of previously encoded indices. If the current index satisfies these conditions, it can be encoded with fewer bits, yielding better compression; otherwise, the current index is encoded with a prefix code followed by the original index. The computational complexity of the proposed method is quite low and its memory requirement is small. Experimental results show that the proposed scheme achieves better compression efficiency than the lossless index coding scheme proposed by Chen and Yu in 2005, with a significant reduction in bit rate.
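The core idea above, re-coding an index as a short pointer when its value was seen recently, can be shown with a toy one-dimensional sketch (the actual scheme searches a 2-D neighborhood and uses seven context classes; the names and the pointer format here are purely illustrative):

```python
def soc_encode(indices, search_len=4):
    """Simplified search-order-style coding of a VQ index stream.

    Each index is coded either as a short pointer into the most recent
    distinct previously coded index values, or escaped as its raw value.
    """
    codes = []
    for i, idx in enumerate(indices):
        # Most recent distinct index values, newest first (the "search order").
        seen = []
        for prev in reversed(indices[:i]):
            if prev not in seen:
                seen.append(prev)
            if len(seen) == search_len:
                break
        if idx in seen:
            codes.append(('P', seen.index(idx)))   # short pointer code
        else:
            codes.append(('O', idx))               # original-index escape
    return codes

codes = soc_encode([5, 5, 7, 5, 9, 7])
```

A pointer code needs only log2(search_len) bits plus a flag, which is where the bit-rate reduction comes from.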
Lin, Chang Shyan, and 林昌賢. "Studies of VQ for Noisy Channel." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/29813430352382926590.
Full text
National Chung Cheng University
Institute of Electrical Engineering
82
Vector quantization (VQ) is a powerful and effective scheme that is widely used in speech and image coding applications. One basic problem associated with VQ is its sensitivity to channel errors. In this thesis, the performance of a low-complexity VQ, the tree-structured VQ (TSVQ), when used over noisy channels is first analyzed. Next, we study a class of VQ with memory known as side-match VQ (SMVQ) in the presence of channel noise; in particular, when channel noise is present, the performance of ordinary SMVQ degrades drastically. We investigate modified algorithms that take the channel noise into account for TSVQ and SMVQ, respectively. Extensive simulation results are given for the image source. Comparison with ordinary TSVQ and SMVQ designed for the noiseless channel shows substantial improvements when the channel is noisy: a gain of approximately 4 dB is obtained for our TSVQ design, and a 4.7 dB improvement is achieved by the proposed SMVQ decoding algorithm.
梁志成. "A study of steganalysis over VQ steganography." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/40029353046806466051.
Full text
Lin, Meng-Chieh, and 林孟潔. "Embedding Secret Information in VQ Index Tables." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/sfjpz4.
Full text
TSAI, SHUN-YEN, and 蔡順諺. "Performance Improvement on VQ-based Image Compression." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/75286780979764722820.
Full text
Asia University
Department of Computer Science and Information Engineering
104
The purpose of this thesis is to improve vector quantization (VQ) compression by further compressing the index table. Previously proposed methods, such as search-order coding (SOC) and the locally adaptive scheme (LAS), can further compress the index table, but each has weaknesses: LAS performs poorly on complex images, where the compressed index table can even exceed the size of the plain VQ output, while SOC fails to compress the first row of the index table and regions where the index values are scattered. We therefore propose a compression method that combines LAS and SOC: index values that SOC cannot compress are compressed with LAS instead. This further increases the compression of the VQ index table, so that the compressed file is smaller and more favorable for storage and transmission. In our experimental results, the compression rate of the proposed method is higher than that of LAS, SOC, and plain VQ compression.
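The locally adaptive scheme referred to above is essentially move-to-front coding of the index stream; a minimal sketch (illustrative only, not the thesis's exact combination with SOC):

```python
def mtf_encode(indices, alphabet_size=256):
    """Locally adaptive (move-to-front) coding: recently used indices
    get small output values, which a back-end entropy coder can shorten."""
    table = list(range(alphabet_size))
    out = []
    for idx in indices:
        pos = table.index(idx)
        out.append(pos)
        table.pop(pos)
        table.insert(0, idx)  # move the just-seen index to the front
    return out

def mtf_decode(codes, alphabet_size=256):
    table = list(range(alphabet_size))
    out = []
    for pos in codes:
        idx = table.pop(pos)
        out.append(idx)
        table.insert(0, idx)
    return out

codes = mtf_encode([3, 3, 3, 7, 7, 3])
```

Runs of the same index map to long runs of zeros, which compress well; scattered indices (the complex-image case the thesis mentions) map to large positions and compress poorly.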
Su, Wei-Kai, and 蘇瑋凱. "VQ-style Secret Image Sharing and Recovery." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/05078730844011694112.
Full text
National Chiao Tung University
Department of Computer and Information Science
93
This thesis includes two parts: image sharing and image recovery. In the first part, two methods of VQ-style secret image sharing are proposed: fault-tolerant sharing and progressive sharing. In the fault-tolerant sharing method, we achieve an (r, n) threshold scheme through mixed information of the codebooks, generated by some operations. When collecting any r shadows, the user can retrieve the codebooks and the code indices of the secret image from these r shadows and use them to reconstruct the secret image. However, no information about the secret image can be obtained when an insufficient number of shadow images is collected. In the progressive sharing method, a lossless progressive image sharing scheme is proposed: the more shadow images are obtained, the better the quality of the recovered secret image. The user does not need to care about which shadow images are obtained, only about how many; after receiving all shadows, the user can reconstruct the secret image losslessly. In the second part of this thesis, we introduce an error correction method for the secret image based on search-order coding (SOC). With an additional image called the SOC-image, we can repair a damaged image. Notably, the SOC-image alone reveals nothing about the secret image, so it is safer than duplicating the secret image directly. We also extend the SOC-image to an advanced version that can repair the damaged image in two different ways, depending on the availability of the hash table; both ways are useful for correction. Finally, we combine the SOC-image with VQ-style sharing: the SOC-image error correction technique can not only repair damaged code indices but also improve the recovery quality of progressive sharing.
Hsu, Wei-Jun, and 徐偉俊. "Adaptive Data Hiding for VQ Compressed Images." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/31063751528692493503.
Full text
I-Shou University
Department of Computer Science and Information Engineering
89
Due to the rapid growth of network communications, data hiding has become an interesting research topic in recent years. Data hiding is the process of embedding data into various forms of digital media such as text, image, audio, and video. To reduce storage requirements, compression techniques are applied to these digital media. One of the most popular compression techniques is vector quantization (VQ). Since VQ is a low-bit-rate compression scheme with simple decoding operations, it has been successfully applied to the encoding of images and speech. In this thesis, we focus on the data hiding problem for VQ-compressed images. Compared with traditional methods, the proposed algorithm achieves better results in both hiding capacity and image quality. Experimental results are presented to demonstrate these comparisons.
Chung-Ching, Yen, and 顏忠慶. "Fast Classified VQ Coding Algorithm Using Frequency Domain." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/79964668331736747897.
Full text
I-Shou University
Department of Computer Science and Information Engineering
90
In recent years, vector quantization (VQ) has been applied widely in video and image compression because of its simple coding structure and high compression ratio. The disadvantages of VQ are its encoding complexity and edge degradation. Classified VQ (CVQ) is a commonly used algorithm for solving these two problems. In this thesis, a highly efficient classification scheme is proposed. The scheme, called the 68-class concentric-circle classification algorithm, operates on three DCT coefficients in the frequency domain. The algorithm speeds up the encoder efficiently while keeping image quality almost the same as in plain VQ. In addition, dihedral group transformations are applied to reduce the effect of edge degradation.
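A generic sketch of classifying blocks from a few DCT coefficients; the thesis's 68-class concentric-circle rule is more elaborate, and this two-class version with an arbitrary threshold only illustrates the idea:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    m = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            m[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= np.sqrt(1 / n)
    m[1:] *= np.sqrt(2 / n)
    return m

def classify_block(block, edge_threshold=50.0):
    """Classify a block as 'smooth' or 'edge' from its first two AC
    coefficients (horizontal / vertical frequency content)."""
    n = block.shape[0]
    d = dct_matrix(n)
    coeffs = d @ block @ d.T  # 2-D DCT
    ac_energy = abs(coeffs[0, 1]) + abs(coeffs[1, 0])
    return 'edge' if ac_energy > edge_threshold else 'smooth'

flat = np.full((4, 4), 128.0)
edge = np.hstack([np.full((4, 2), 20.0), np.full((4, 2), 220.0)])
```

Because only a handful of coefficients are inspected, the classification cost is negligible compared with a full codebook search.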
Nguyen, Thai-Son, and 阮泰山. "Reversible Data Hiding Techniques for VQ-Compressed Domain." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/63867107570423918359.
Full text
Feng Chia University
Institute of Computer Science and Information Engineering
99
Data hiding is designed to solve the problem of secure information exchange over networks such as the Internet. Among data hiding techniques, reversible data hiding can recover the original image without any distortion after the secret data are extracted from the stego image, and it continues to attract attention from many researchers. In this thesis, three reversible data hiding schemes are proposed to embed secret data into VQ- and SMVQ-compressed images. The first is a reversible data hiding scheme for VQ indices based on locally adaptive coding, proposed to improve Chang et al.'s scheme [30] in terms of embedding capacity and compression rate. Experimental results confirm that its hiding capacity is around 1.36 bpi for most digital images, typically higher than that of Chang et al. [30]. Moreover, its average compression rate of 0.49 bpp outperforms Lin and Chang's scheme (0.50 bpp), Tsai's (0.50 bpp), Chang et al.'s (0.53 bpp), and Yang and Lin's (0.53 bpp). The second is a reversible data hiding scheme for VQ indices using an absolute difference tree, which exploits the differences between two adjacent indices to embed secret data. Experimental results show that it achieves a higher compression rate than the earlier scheme by Yang and Lin [27]: its average compression rate of 0.44 bpp outperforms Yang and Lin's average of 0.53 bpp, and its embedding capacity can reach 1.45 bpi, superior to both Yang and Lin's scheme (0.91 bpi) and Chang et al.'s scheme [26] (0.74 bpi). The third scheme performs reversible data hiding in the SMVQ compression domain; based on a combination of SMVQ and SOC, secret bits are embedded while maintaining high embedding capacity.
Experimental results show that with a state codebook of size 64, the average compression rate of our scheme is 0.39 bpp, much better than the schemes proposed by Chang et al. [51] (0.48 bpp), Chang and Wu [52] (0.49 bpp), and Jo and Kim [53] (0.5 bpp). Our scheme also offers a slightly higher embedding rate than either Jo and Kim's or Chang and Wu's scheme.
Chen, Yen-Chang, and 陳延菖. "Reversible Data Embedding Schemes for VQ-Compressed Images." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/14392753136114845476.
Full text
Feng Chia University
Department of Computer Science and Information Engineering
101
In this paper, we propose two novel reversible data hiding schemes for the index tables of vector quantization (VQ) compressed images, based on an index mapping mechanism and an index set construction strategy. The first scheme is a reversible data hiding scheme for VQ indices by index mapping. On the sender side, the VQ indices with zero occurrence counts in a given index table of an image are used to construct a series of index mappings together with some indices with the largest occurrence counts. The indices in each constructed mapping correspond to the full binary representations whose length is the mapping bit number. Through mapping optimization based on the index histogram, the optimal vector of mapping bit numbers can be obtained, which yields the highest hiding capacity. The data embedding procedure is easily achieved by simple index substitutions according to the current subset of secret bits to be hidden. The same index mappings, reconstructed on the receiver side, ensure correct extraction of the secret data and lossless recovery of the index table. The second scheme is a reversible data hiding scheme for VQ indices by index set construction. On the sender side, three index sets are constructed: the first and second sets contain the indices with greater and smaller occurrence counts in the given VQ index table, respectively. The index values in the index table belonging to the second set are given prefixes from the third set to eliminate collisions with the two mapping sets derived from the first set, and this prefix-adding operation additionally carries hidden data. The main data embedding procedure is easily achieved by mapping index values in the first set to the corresponding values in the two derived mapping sets. The same three index sets, reconstructed on the receiver side, ensure correct extraction of the secret data and lossless recovery of the index table.
The third method uses search-order coding (SOC) to compress the image and hide the secret information; after the secret information is extracted, the original image can be correctly recovered. This method records the index difference values and hides the secret information according to these differences. Experimental results demonstrate the effectiveness of the proposed schemes.
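A much-simplified sketch of the index-mapping idea described above: one frequent/unused index pair carries one bit per occurrence of the frequent index (the actual scheme builds optimized multi-index mappings; all names here are illustrative):

```python
from collections import Counter

def embed(indices, bits, codebook_size=256):
    """Reversibly embed bits by mapping a frequent index to an unused one.

    Each occurrence of the most frequent index carries one bit:
    bit 0 keeps the index, bit 1 substitutes the unused partner index.
    """
    counts = Counter(indices)
    frequent = counts.most_common(1)[0][0]
    unused = next(i for i in range(codebook_size) if counts[i] == 0)
    out, it = [], iter(bits)
    for idx in indices:
        if idx == frequent:
            b = next(it, 0)  # pad with 0 when the bits run out
            out.append(unused if b else frequent)
        else:
            out.append(idx)
    return out, frequent, unused

def extract(stego, frequent, unused):
    bits = [1 if idx == unused else 0
            for idx in stego if idx in (frequent, unused)]
    recovered = [frequent if idx == unused else idx for idx in stego]
    return bits, recovered

indices = [4, 9, 4, 4, 2, 9, 4]
stego, f, u = embed(indices, [1, 0, 1, 1])
bits, recovered = extract(stego, f, u)
```

Recovery is lossless because the unused index never occurs in the original table; the receiver does need the (frequent, unused) pair as side information, which the full scheme encodes compactly.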
Lee, Allen, and 李宗憲. "A VQ Coding Based Method for Object Detection." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/92553801409712081969.
Full text
Chang, Shun-Chieh, and 張舜傑. "The Research of VQ-Based Fast Search Algorithm." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/k7gsrf.
Full text
National Taipei University of Technology
Doctoral Program, Department of Electrical Engineering
100
This dissertation proposes a fast search algorithm for vector quantization (VQ) based on a fast locating method, and uses learning and trade-off analysis to implement the algorithm. The proposed algorithm is a binary search space VQ (BSS-VQ) that determines a search subspace by binary search in each dimension; full search VQ (FSVQ) or partial distance elimination (PDE) is subsequently used within that subspace to obtain the best-matched codeword. The trade-off analysis shows a slight loss in quantization quality in exchange for a substantial computational saving. In the learning analysis, the BSS is built by a learning process that uses FSVQ as the inferred function. The BSS-VQ algorithm is applied to the line spectral pairs (LSP) encoder of the G.729 standard, a two-stage VQ encoder with one codebook of size 128 and two smaller codebooks of size 32. In addition, a moving average (MA) filter is applied to the LSP parameters beforehand, which degrades the high correlation between consecutive speech frames. These factors make it challenging to develop fast and efficient search algorithms for VQ. In the experiments, the computational savings of DITIE, TSVQ, and BSS-VQ are 61.72%, 88.63%, and 92.36%, respectively, and the quantization accuracies of DITIE, TSVQ, and BSS-VQ are 100%, 26.07%, and 99.22%, respectively, confirming the excellent performance of the BSS-VQ algorithm. Moreover, unlike the TIE method, the BSS-VQ algorithm does not depend on the high correlation of the signal to reduce the amount of computation; it is therefore suitable for the LSP encoder of the G.729 standard.
Tzeng, Chung-Yi, and 曾崇儀. "A Study on Binary Search Algorithm for VQ." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/xq9d2e.
Full text
National Taipei University of Technology
Department of Electrical Engineering
94
VQ is a popular and efficient technique for signal processing and data compression. However, the computational requirement of the full-search algorithm is dramatically large. To overcome this issue, many approaches such as TIE, PDE, and tree-structured … are employed to reduce the computation. Binary search is a powerful algorithm that reduces the search space dramatically; however, it works only in a one-dimensional space, while the space dimension of VQ is very large for the purpose of data compression. In the past, tree-structured VQ was proposed to prune the search space; it does reduce the search space, but the quality and correction rate also drop. In this thesis, a new binary search algorithm for VQ is proposed. It greatly reduces the search space while preserving quality. Compared with traditional VQ, the average correction rate is 99.6% and 57% of the search space is eliminated, a performance superior to tree-structured VQ.
Lee, Shao-Yin, and 李韶茵. "Digital Watermarking Based on Hybrid VQ and SVD." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/95792764699204303476.
Full text
National Taiwan Ocean University
Department of Computer Science and Engineering
103
Nowadays, the rapid growth of science and technology makes the transmission of digital information more convenient. People can easily modify, transform, copy, and save digital data; hence, how to protect intellectual property rights is now an important issue. Digital watermarking is a technology for solving this problem: intellectual property rights can be identified by the watermark embedded in digital multimedia. Plenty of watermarking schemes based on vector quantization (VQ) have been proposed. In the embedding process, a codebook is used for embedding the watermark, and the same codebook is necessary for watermark extraction. JPEG, a common lossy compression method, is widely used for compressing digital images; however, it is usually difficult for VQ-based digital watermarking schemes to resist JPEG compression attacks. Furthermore, staircase effects can easily be seen in the watermarked images. To solve these problems, we propose a watermarking scheme combining VQ with SVD (singular value decomposition). In our scheme, the codebook used in the embedding process is not required for extraction. According to our experimental results, the staircase effects are unobvious in the watermarked images and our scheme is robust against JPEG compression attacks.
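A generic sketch of SVD-based embedding, additively perturbing the singular values of a toy block; the thesis's hybrid VQ/SVD scheme differs in detail, and the block, watermark, and strength `alpha` below are illustrative:

```python
import numpy as np

def svd_embed(image_block, watermark, alpha=0.05):
    """Embed a watermark by perturbing the singular values of a block."""
    u, s, vt = np.linalg.svd(image_block, full_matrices=False)
    s_marked = s + alpha * watermark       # additive embedding in S
    return u @ np.diag(s_marked) @ vt, s   # keep original S for extraction

def svd_extract(marked_block, original_s, alpha=0.05):
    _, s_marked, _ = np.linalg.svd(marked_block, full_matrices=False)
    return (s_marked - original_s) / alpha

# Toy block with well-separated singular values, so the perturbation
# cannot reorder them.
block = np.diag([100.0, 80.0, 60.0, 40.0, 20.0, 10.0, 5.0, 1.0])
wm = np.array([4.0, 3.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0])
marked, s0 = svd_embed(block, wm)
recovered = svd_extract(marked, s0)
```

Singular values are relatively stable under JPEG-style lossy compression, which is the usual motivation for embedding in S rather than in pixel values.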
Leu, Jiunn Huei, and 呂俊輝. "Studies of SBC and VQ for noisy channel." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/75210418921232111043.
Full text
National Chung Cheng University
Institute of Electrical Engineering
82
We propose an adaptive mean filter for the subband coding with vector quantization (SBC-VQ) system to protect the coding system against channel noise. The filter exploits characteristics of the high-frequency subbands in SBC-VQ to obtain edge information. According to this edge information, three different threshold values, used to determine whether a signal is corrupted by noise, are designed for three different statistical regions: uniform, edge, and texture. If a signal is determined to be noise-corrupted, it is replaced by the mean estimate inside a 3×3 window; otherwise it is left unchanged. In this way, the noise effect is reduced. The post-filter improves peak signal-to-noise ratio (PSNR) and visual quality in the SBC-VQ image coding scheme: at a bit error rate (BER) of 0.01, it provides about 5.5 dB of improvement. The proposed post-filter improves the robustness of SBC-VQ to channel noise without any additional channel coding.
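A simplified sketch of the adaptive mean filter described above, using a single threshold instead of the three region-dependent thresholds (the threshold value here is arbitrary):

```python
import numpy as np

def adaptive_mean_filter(img, threshold=40.0):
    """Replace a pixel by its 3x3 local mean only when it deviates from
    that mean by more than a threshold (i.e. it looks noise-corrupted)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            local_mean = window.mean()
            if abs(img[y, x] - local_mean) > threshold:
                out[y, x] = local_mean  # suspected channel-noise hit
    return out

img = np.full((5, 5), 100.0)
img[2, 2] = 255.0  # impulse caused by a channel bit error
clean = adaptive_mean_filter(img)
```

The conditional replacement is what makes the filter "adaptive": clean pixels (and, with region-dependent thresholds, genuine edges) pass through untouched.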
Yang, Gwo Nan, and 楊國楠. "The Research of Adaptive Coding Technique for VQ." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/19716933157920573305.
Full text
National Sun Yat-sen University
Institute of Computer Science and Information Engineering
82
With the advent of the multimedia age, the huge amount of digital image data has become a tremendous burden on storage and transmission capacity, and digital image compression is now a popular research topic. One such technique is vector quantization (VQ), which achieves a high compression ratio but has some disadvantages, such as the difficulty of obtaining an optimal codebook and a time-consuming encoding process. In this thesis, we propose several techniques to address these disadvantages. First, we briefly introduce vector quantization, including its theory, codebook generation, and encoding/decoding techniques. Then, to improve on the time-consuming LBG algorithm, we use the ART algorithm to generate the codebook. Exploiting the different degrees of representativeness of the codewords in the codebook together with the entropy concept, we propose an adaptive coding technique that reduces both the encoding time and the bit rate. Finally, we adopt a variable-dimension concept to implement variable-dimension vector quantization with a 3-level quadtree strategy to obtain a higher compression ratio.
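For reference, the baseline full-search VQ encoding that adaptive techniques like the ones above aim to improve can be sketched as (toy 2-D codebook for illustration):

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Full-search VQ: map each input vector to the index of its
    nearest codeword under squared Euclidean distance."""
    # (N, 1, K) - (1, M, K) -> (N, M) squared-distance matrix
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def vq_decode(indices, codebook):
    return codebook[indices]

codebook = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 0.0]])
vectors = np.array([[1.0, -1.0], [9.0, 11.0], [18.0, 2.0]])
idx = vq_encode(vectors, codebook)
```

The cost per input vector is O(M·K) for M codewords of dimension K, which is exactly the expense that faster codebook generation and adaptive index coding try to offset.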
XIE, WAN-MING, and 謝萬明. "Fast algorithms for VQ codebook design and search." Thesis, 1989. http://ndltd.ncl.edu.tw/handle/16068615464186125222.
Full text
Han, Ku-Wei, and 谷威涵. "A DATA HIDING SCHEME BASED ON VQ AND SOC." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/21064329963879025170.
Full text
Asia University
Master's Program, Department of Computer and Communication
95
In recent years, the computer and network communication techniques have been developed quickly, people attach great importance to secret communication. For this reason, information hiding was being proposed. As the information hiding techniques go far, it is mainly classified into two categories: copyright marking and steganography. Copyright marking is used to embed the copyright information, like a signature or a trademark, into the cover image for ensuring the copyright. The emphasis on copyright marking is that the watermark hidden in the cover image must be able to be retrieved, even if the stego-image has been applied lots manipulations such as cropping, lossy compression, resize, etc. Watermark is usually very small, and composes of visible watermarking and imperceptible watermarking. Steganography is different from watermarking. The goal of steganography is how to embed secret data into the cover image, let the interceptors neither feel something peculiar nor want to attack the stego-image. In the other word, steganography is to provide a secret communication. The key issues in steganography are how to embed more secret data and to remain the stegoimage’s good quality. The technique of data hiding in images can be classified into two categories, i.e., data hiding in spatial domain schemes and data hiding in frequency domain schemes. Neither data hiding in spatial domain schemes nor data hiding in frequency domain schemes can be used in other domain. In this thesis, we propose a new steganographic scheme based on Vector Quantization compression (VQ) and Search-Order Coding (SOC) for gray-level images. The embedded capacity of this scheme is higher than the others in which are based on VQ and the quality of stego-image is acceptive. In general, the capacity of the schemes based on VQ are usually 1-bit per block. In 2004, Yu et al proposed a scheme to increase the capacity to 3-bit per pair of blocks. 
In our scheme, we use multiple codebooks and choose among the combinations for each pair of blocks to embed the secret data. The capacity is increased to 6 bits per pair of blocks, while the image quality is the same as in Yu et al.'s scheme.
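The VQ-based schemes in this listing all start from ordinary full-search VQ encoding: each image block is replaced by the index of its nearest codeword. A minimal sketch in Python (the toy 2-D codebook and block values are illustrative, not from the thesis):

```python
def vq_encode(block, codebook):
    """Return the index of the nearest codeword (squared Euclidean distance)."""
    best_idx, best_dist = 0, float("inf")
    for i, cw in enumerate(codebook):
        d = sum((b - c) ** 2 for b, c in zip(block, cw))
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx

# Toy 2-D codebook; real schemes use 16- or 64-D blocks and 128-1024 codewords.
codebook = [(0, 0), (10, 10), (20, 20)]
print(vq_encode((9, 11), codebook))  # nearest codeword is (10, 10) -> index 1
```

The index stream produced this way is what SOC then compresses, and what the data-hiding schemes use as the carrier.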
Hsu, Chun-I, and 許竣壹. "Fast VQ Codeword Search Algorithm by Multiple Eliminating Conditions." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/64595187327595390033.
Full textDa-Yeh University
Master's Program, Department of Computer Science and Information Engineering
98
Vector quantization (VQ) is a technology widely applied over the past decade to video, speech, and images. During the encoding process of VQ-based image compression, the closest codeword in the codebook must be found; with full search, the complexity is proportional to the codeword dimension K and the codebook size N. In this thesis, to reduce the complexity of full search (FS), we employ the mean-distance-ordered partial codebook search (MPS) algorithm [1] to determine the initial codeword and the search order. In addition, we compute four projected values for the block to be encoded and for each codeword using a fast projection algorithm. Four elimination rules formulated from these projected values let us discard codewords that cannot be the closest one. For the codewords that remain, the partial distance elimination (PDE) algorithm [2] speeds up the search and reduces the execution time. In our experiments with codeword dimension K = 16, the overall average encoding times are 11.69 ms, 17.17 ms, 26.73 ms, and 43.35 ms for codebook sizes N = 128, 256, 512, and 1024, corresponding to 5.89%, 4.34%, 3.39%, and 2.75% of the full-search execution time, respectively.
With codeword dimension K = 64, the overall average times are 10.53 ms, 16.48 ms, 26.88 ms, and 44.08 ms for N = 128, 256, 512, and 1024, or 5.48%, 4.27%, 3.59%, and 3.25% of the FS execution time, respectively. These results show that the proposed method is fast and efficient: the four elimination rules avoid unnecessary Euclidean distance calculations, the projection values require only integer arithmetic, which reduces floating-point computation, and PDE terminates each distance calculation early, stopping the search as soon as the current codeword can no longer win.
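The partial distance elimination (PDE) step used above can be sketched as follows. This is a generic PDE illustration only, not the thesis's full combination with MPS ordering and the four projection rules; the codebook values are made up:

```python
def pde_search(block, codebook):
    """Nearest-codeword search with partial distance elimination:
    accumulate the squared distance dimension by dimension and abandon
    a codeword as soon as the running sum exceeds the best so far."""
    best_idx, best_dist = 0, float("inf")
    for i, cw in enumerate(codebook):
        d = 0
        for b, c in zip(block, cw):
            d += (b - c) ** 2
            if d >= best_dist:      # early exit: this codeword cannot win
                break
        else:                       # loop finished without break: new best
            best_idx, best_dist = i, d
    return best_idx

codebook = [(0, 0, 0, 0), (5, 5, 5, 5), (9, 9, 9, 9)]
print(pde_search((4, 5, 5, 6), codebook))  # -> 1
```

PDE returns exactly the same index as full search; it only skips arithmetic that cannot change the outcome, which is why the speed-ups in the abstract come with no loss of accuracy.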
Lin, Chin-Tsen, and 林璟岑. "Reversible Data Hiding Based on VQ and Halftoning Technique." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/64875663704506991234.
Full textVanung University
Graduate Institute of Information Management
99
Nowadays, the prevalence of the Internet allows digital data to be easily reproduced and circulated illegally. To prevent malicious users from stealing or tampering with content, protecting digital data has become an important topic of technological research. Several scholars have proposed the concept of data hiding, which embeds important information into a cover image; within data hiding, steganography in particular aims to give the stego-image both a large amount of hidden data and high quality. Reversible data hiding, which restores the original image without any distortion after the hidden data are extracted, has also drawn much attention among researchers in recent years. Reversible techniques fall into two rough categories, difference expansion and histogram modification, but they typically suffer from a small embedding capacity. Hence, this thesis proposes a new reversible image hiding technique based on histogram modification that hides much more data without distorting the cover image. We use reversible data hiding to embed a halftone image and a difference secret image into the cover image; the difference secret image encodes, by vector quantization (VQ), the information lost from the secret digital image during the edge-based LIH (ELIH) process. In extraction, we recover the halftone image and the difference secret image with the reversible technique and then use ELIH and VQ to restore the cover image. Our experimental results show that the proposed technique not only embeds a large amount of data but also restores the cover image exactly while maintaining the stego-image's quality.
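The histogram-modification family this thesis builds on can be illustrated with a minimal histogram-shifting embedder. Everything here is illustrative: the function names, the sample pixel values, and the assumption that value 255 is unused in the cover (a simplification standing in for the real "zero point" search):

```python
from collections import Counter

def hist_shift_embed(pixels, bits):
    """Reversible embedding by histogram shifting (simplified sketch).
    Assumes pixel value 255 never occurs, so shifting cannot overflow."""
    peak = Counter(pixels).most_common(1)[0][0]   # most frequent value
    out, it = [], iter(bits)
    for p in pixels:
        if peak < p < 255:
            out.append(p + 1)                     # shift right to make room
        elif p == peak:
            out.append(p + next(it, 0))           # embed one bit at the peak
        else:
            out.append(p)
    return out, peak

def hist_shift_extract(stego, peak):
    """Recover the bits and the exact original pixels."""
    bits, pixels = [], []
    for p in stego:
        if p == peak:
            bits.append(0); pixels.append(peak)
        elif p == peak + 1:
            bits.append(1); pixels.append(peak)
        elif peak + 1 < p <= 255:
            pixels.append(p - 1)                  # undo the shift
        else:
            pixels.append(p)
    return bits, pixels
```

The capacity equals the height of the histogram peak, which is why the abstract's complaint about "small embedding capacity" motivates combining this idea with halftoning and VQ-coded difference images.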
Li, Kuan-ting, and 李冠霆. "High capacity VQ-based hiding scheme using grouping strategies." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/88763343612485306164.
Full textChaoyang University of Technology
Master's Program, Department of Information Management
99
Due to the booming development of the Internet, technological change has extensively and intensively influenced our lives. Advances in communication and information technologies have rapidly increased transmission capacity, while users demand the immediate transfer of ever larger amounts of multimedia content. The explosive growth of e-commerce and the pervasive personal and business use of the Internet have created a growing demand for information security. Steganography and cryptography are counterparts in information security; the clear advantage of steganography over cryptography is that messages hidden in multimedia content do not draw attention to themselves. In general, data hiding enhances the security of communication over the Internet. In addition, since Internet bandwidth is limited, increasing the data transfer rate is a vital concern. The usual solutions are to increase bandwidth or to reduce the amount of data; increasing bandwidth raises costs and is not economically feasible, so data are compressed before transmission, which raises the transfer rate while lowering storage requirements. Compression methods such as vector quantization (VQ) and side-match vector quantization (SMVQ) address this problem because compressed images are greatly reduced in size and are easily stored or transmitted over the Internet without occupying huge amounts of bandwidth. To take both communication bandwidth and data hiding into consideration, this thesis makes steganographic communication more efficient and secure by compressing the confidential messages while they are hidden in the carrier, the VQ index.
VQ-based data hiding techniques are adopted to meet the following requirements: privacy, adequate transmission rate, and high embedding capacity. In this thesis, we design codeword grouping strategies that increase the number of codewords gathered in each cluster, which in turn increases the number of carriers available to hide information. Two kinds of strategies are adopted. On the one hand, we design three clustering algorithms that arrange similar codewords into clusters so that each codeword in a group can embed a sub-message while preserving high embedding capacity; the proposed algorithms also recycle isolated codewords as carriers to raise the amount of embedded data. According to our experimental results, the codeword grouping strategies provide better embedding capacity and visual quality than the method of Lin et al. On the other hand, we also propose a reversible data hiding technique for VQ encoding whose main purpose is to improve Yang et al.'s codebook grouping. We first count the VQ index values of an image and then sort the codewords in the codebook in descending order of index frequency. The rearranged codebook is evenly divided into five groups, which are used to embed secret messages into an index table, producing a complete stego index table. Experimental results show that our scheme increases the embedding capacity of secret data compared with Yang et al.'s method; furthermore, the proposed scheme completely restores the altered VQ index values to their original form after the secret message is extracted.
Lin, En-yu, and 林恩宇. "A VQ Encoding Based Motion Detection in HSV Colorspace." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/10668379936592012468.
Full textFeng Chia University
Graduate Institute of Communications Engineering
97
Motion detection is one of the important topics in vision-surveillance research; the purpose of object detection is to separate the foreground object from an image accurately. This thesis addresses two of the most important stages in background subtraction, background model construction and illumination variation, by building a codebook-based background model with a shadow-suppression algorithm in HSV colour space. In the background model, each pixel can have one or more codewords representing the background, and samples at each pixel are clustered into the set of codewords based on a colour-distortion measure and a brightness constraint. During foreground detection, if an incoming pixel matches a codeword in its set, that codeword is updated accordingly; if not, the pixel is classified as foreground.
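The match-then-update decision at the heart of the codebook model can be sketched for a single pixel. The thesis works in HSV with separate colour-distortion and brightness tests, so this grey-level version with a plain tolerance is only a simplified analogue, and the field names and threshold are assumptions:

```python
def classify_pixel(value, codewords, tol=10):
    """Codebook background subtraction for one grey-level pixel.
    codewords: list of {"mean": float, "count": int} background models."""
    for cw in codewords:
        if abs(value - cw["mean"]) <= tol:
            # background match: fold the sample into the codeword's mean
            cw["mean"] += (value - cw["mean"]) / (cw["count"] + 1)
            cw["count"] += 1
            return "background"
    return "foreground"  # no codeword explains this sample
```

A multimodal background (e.g. a flickering light) is handled naturally by keeping several codewords per pixel, which is the main advantage over a single-Gaussian model.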
Chiang, Hsin-Kuan, and 江信寬. "Designing the Nearest Neighbor Classifiers via the VQ Method." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/22392078713707735102.
Full textWu, Chung-Chi, and 吳忠智. "An Algorithmic Study on Lossless Compression of VQ Index." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/59606839825268201057.
Full textNational Cheng Kung University
Master's and Doctoral Program, Department of Computer Science and Information Engineering
91
In this thesis, our starting observation is that when an image is vector-quantized, many of the resulting index values are partially identical, because neighbouring blocks share similar characteristics and therefore yield similar indices after encoding. Many related studies have shown that further lossless compression of these indices by suitable methods greatly improves the overall compression ratio of VQ and reduces the bit rate. To further reduce the bit rate of an image's index table, we propose two intuitive and effective coding algorithms that exploit the similarity of neighbouring regions without introducing any additional image loss. In the first algorithm, we attach to each codeword a table called the index associated list. When a block is encoded, its index is compared with the indices of previously coded neighbouring blocks; if the same index value is not found directly, it may still be found through the index associated list. We also propose four effective methods for building this list; because the list can be trained before encoding, it adds no excessive burden to the encoder or decoder. Experiments confirm that all four methods clearly reduce the bit rate without adding any image loss. In the second algorithm, we adapt the coding to the actual conditions of vector quantization and propose a more effective coding structure that again reduces the bit rate markedly.
Lai, Jui-Min, and 賴睿敏. "A VQ based coding method for license plate localization." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/f38hpx.
Full textNational Sun Yat-sen University
Department of Mechanical and Electro-Mechanical Engineering
95
A complete license plate recognition system comprises three parts: license plate localization, character segmentation, and character identification. Among these, license plate localization is the most difficult and complicated; even now, distinguishing background regions from real license plates under real, unconstrained traffic conditions remains a very hard task. This study introduces a VQ-coding-based method to resolve this problem. As a preprocessing step, the method converts the image to binary form using statistics generated from a license plate image database. It then uses VQ to represent the image as a series of codewords, which are renumbered according to the probability of their use in license plate versus background images. Using neural networks to classify such images, our experimental results show that the proposed approach differentiates background and real license plate images with a very high success rate.
RUAN, SHI-ZHOU, and 阮士洲. "Fast algorithm and architecture for VQ based image video coding." Thesis, 1992. http://ndltd.ncl.edu.tw/handle/49377441274425963056.
Full textQu, Zhong-Zheng, and 瞿忠正. "VQ-based image compression using modified fuzzy C-Means method." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/01706199175015486439.
Full text陳俊甫. "Adaptive Reversible Information Hiding Methods Based on VQ Codebook Clustering." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/99424447776997039441.
Full textNational Changhua University of Education
Department of Computer Science and Information Engineering
96
With the convenience of the Internet, the transmission of digital images has gradually become widespread. When the number or size of transmitted images grows, the images are compressed before transmission to save time. It is equally important, however, to transmit data safely and to reduce the chance that interceptors can recover them. Steganography addresses this problem: it makes only slight changes to the original medium after secret information is embedded, which increases both the difficulty of detection by interceptors and the security of the transmission. Moreover, if the original image is damaged when the secret data are extracted and cannot be recovered, serious consequences follow in applications such as military and medical imaging; reversible information hiding methods are therefore also significant. In this thesis, we focus on digital images and propose a reversible information hiding method based on VQ codebook clustering. Traditional codebook clustering often divides the codebook into several clusters using thresholds. In our scheme, we analyse codeword frequencies and compute the distance between every pair of codewords. The closest pair is picked from a distance table sorted in ascending order, and no codeword is picked twice until the clustering is finished. Our strategy is to use the more frequent codeword of each pair to embed data; the experimental results reveal that when the hidden capacity is small, this strategy yields good image quality. To increase the embedding capacity while keeping good image quality, we rank the codewords by frequency and start with the most frequent one, since high-frequency codewords have an obvious influence on the encoding of the image.
In addition, a threshold is added to our scheme to set the range of codeword selection and to control the embedded image quality. The experimental results show that the embedding capacity is enhanced while good image quality is maintained. In sum, compared with other current information hiding methods, our method achieves better image quality up to the maximum hidden capacity.
Tseng, Chi-Hung, and 曾吉宏. "A VQ-Based Image Compression for Grey-Level Image Sequences." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/29017028723748615469.
Full textDa-Yeh University
Master's Program, Department of Computer Science and Information Engineering
93
A number of methods have been proposed for the compression of continuous image sequences, but they deal only with binary images, which greatly limits their applicability. In this thesis, we propose a VQ-based method for compressing continuous grey-level image sequences in which adjacent images are highly similar. Four sets of continuous image sequences, each consisting of 9 images of 256x256 pixels, were used to test the performance of the proposed method. Each image was first segmented into 3x3 or 4x4 blocks, and the LBG algorithm was then used to train a codebook of 512 codewords capable of delineating the features of the sequence. To further increase compression performance, the JPEG-LS algorithm was applied to compress the codebook and the index images of the sequence. The results show that the compression ratio achieved by the proposed method is significantly higher than that of AVI, while the quality of the reconstructed images remains at a satisfactory level. Future work will extend the method to lossless compression of medical image sequences. Keywords: vector quantization, continuous image, image compression, AVI.
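The LBG training step mentioned above alternates nearest-codeword assignment with centroid updates. A minimal sketch, with toy 2-D vectors and a naive first-k initialisation standing in for the 3x3/4x4 blocks and 512 codewords of the thesis:

```python
def lbg(vectors, k, iters=10):
    """LBG / generalized Lloyd codebook training (minimal sketch)."""
    codebook = [list(v) for v in vectors[:k]]   # naive initialisation
    for _ in range(iters):
        # assignment step: nearest codeword for every training vector
        groups = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(v, codebook[j])))
            groups[i].append(v)
        # update step: each codeword becomes its cell's centroid
        for i, g in enumerate(groups):
            if g:
                codebook[i] = [sum(c) / len(g) for c in zip(*g)]
    return codebook

cb = lbg([(0, 0), (1, 0), (10, 10), (11, 10)], 2)
print(cb)  # codewords settle on the two cluster centroids
```

Production LBG typically initialises by codeword splitting and stops when distortion stabilises; the fixed iteration count here keeps the sketch short.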
Peng, Chien-Kai, and 彭建凱. "Real-Time Video Compression Using Matching Pursuit and Adaptive VQ." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/24206981832181292020.
Full textNational Tsing Hua University
Department of Electrical Engineering
87
In recent years there have been increasing demands for audio-visual communication over low-bit-rate channels (10 to 60 kilobits per second). The development of new video compression technology in the context of the emerging ISO MPEG-4 standard has produced a large body of work on motion-compensated, prediction-based video coding. All existing video compression standards are hybrid systems: compression is achieved in two stages, motion compensation followed by encoding of the residual frame, i.e., the prediction error of motion compensation. Current standards, such as the widely known ISO MPEG-1 and MPEG-2 and the ITU-T coding standards H.261 and H.263, encode these prediction errors with the block-based discrete cosine transform (DCT). DCT-based compression is efficient, but it introduces undesirable blocking artifacts at low bit rates. Hence, in this thesis we propose a codebook adaptation algorithm, based on the Kiefer-Wolfowitz method, for low-bit-rate, real-time video compression using matching pursuit (MP). Although adaptive codebook design has been studied in the past, implementing it at bit rates suitable for MPEG-4 remains significantly challenging. In our adaptation algorithm, we use a subset of 2-D separable Gabor functions as the initial dictionary, selected the same way as Neff and Zakhor's. To speed up the convergence of the adaptation, each basis function in the dictionary is formed as the tensor product of an x-component and a y-component, so each basis consists of two code vectors, one from each direction. We start with the initial dictionary and adapt it online to suit the current sequence.
We evaluate our adaptive matching-pursuit video codec experimentally on several MPEG-4 Class A, Class B, and Class C sequences.
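The core matching-pursuit loop that such a codec builds on can be sketched as follows, using a toy unit-norm dictionary rather than the thesis's separable Gabor dictionary; the residual-frame vectors and atom count are illustrative:

```python
def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedy matching pursuit: repeatedly pick the dictionary atom with
    the largest inner product with the residual (atoms assumed unit-norm)."""
    residual = list(signal)
    atoms = []
    for _ in range(n_atoms):
        # best atom maximizes |<residual, atom>|
        coeffs = [sum(r * a for r, a in zip(residual, atom)) for atom in dictionary]
        i = max(range(len(dictionary)), key=lambda j: abs(coeffs[j]))
        atoms.append((i, coeffs[i]))
        # subtract the chosen atom's contribution from the residual
        residual = [r - coeffs[i] * a for r, a in zip(residual, dictionary[i])]
    return atoms, residual

dictionary = [(1, 0), (0, 1)]          # trivial orthonormal dictionary
print(matching_pursuit((3, 4), dictionary))
```

In the video codec, the (atom index, coefficient, position) triples are what get quantized and transmitted for each residual frame, and dictionary adaptation adjusts the atoms themselves between frames.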
Lai, Ming Young, and 賴銘勇. "Using VQ and DP in Tandem with Isolated Word Recognition." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/31218977097606597895.
Full textWang, Yu-Lun, and 王友倫. "The Study on Data Hiding Techniques in VQ and AMBTC." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/85235975684264693714.
Full textNational Chung Hsing University
Department of Information Management
105
With the rapid development of information technology and ever faster networks in recent years, more and more digital multimedia files such as images are uploaded to the Internet. The rise of social media has led people to rely on it increasingly to share their lives, and the popularity of smart mobile devices makes it easy to take and upload photos at any time. These uploaded files are easy to spread and misappropriate, and may even be used for illegal activities. Embedding a personal name, logo, or similar mark into an uploaded image file therefore allows ownership to be proven with an extraction algorithm; unlike a visible watermark, the embedding is imperceptible, so viewers cannot tell that the image has been processed. Image compression is a well-established research field: a variety of compression methods were proposed in the 20th century and continually refined to achieve better compression rates. Nowadays, to speed up uploading, social media platforms usually compress an image before storing it. Therefore, to apply data hiding to social media, the data must be hidden in the compressed image file. This thesis proposes data hiding techniques for different compression methods that satisfy criteria such as the compression rate and the amount of data hidden in the compressed image files.
Lee, Jung-Che, and 李榮哲. "On the Robustness of VQ-Based Digital Image Watermarking Algorithms." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/93981573266611795809.
Full textNational Chi Nan University
Department of Computer Science and Information Engineering
94
The robustness of watermarking algorithms is discussed in this dissertation. An invisible watermark is usually embedded into an image for copyright protection; we analyse several embedding techniques and identify the dominant factors governing their robustness. VQ (vector quantization) watermarking is a newly developed branch of digital watermarking research. Embedding can be performed in the spatial domain, the transform domain, or a mixed domain; VQ-based watermarking is a form of spatial-domain embedding. Using the codebook-generation scheme proposed in the literature, our experimental results show that the robustness of such watermarking is not strong enough when the watermarked image is corrupted by operations such as JPEG compression, blurring, cropping, and sharpening. Hence, to improve robustness, we adapt the codebook-generation process to obtain a more robust watermarking algorithm. We also propose a new codeword-editing method that improves the quality of the watermarked image, at the cost of a slight decrease in robustness.
Sung, Min-Tsang, and 宋敏菖. "Robust VQ-Based Digital Watermarking for the Memoryless Binary Symmetric Channel." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/28576552341867070317.
Full textNational Kaohsiung University of Applied Sciences
Master's Program, Graduate Institute of Electronic and Information Engineering
92
Vector quantization (VQ) is distinguished by its high compression rate in lossy data compression applications, and VQ-based watermarking is a newly developed branch of digital watermarking research. In this thesis, we propose optimized schemes for VQ-based image watermarking. We address the VQ index assignment problem with a genetic algorithm (GA), which makes the scheme suitable for transmitting watermarked images over noisy channels. Inspecting the results from several test images, our simulations show improved robustness of the watermarking algorithm against channel noise. In addition, compared with existing schemes in the literature, the watermarked image quality of our scheme is approximately the same while its robustness is better. This demonstrates the effectiveness of the proposed schemes for copyright protection with VQ-based image watermarking.
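A common device in VQ-based watermarking is to embed each watermark bit by constraining the parity of the transmitted index, pairing each codeword with a similar partner. This is a generic illustration of that idea, not the GA-optimized index assignment proposed in the thesis:

```python
def embed_bit(index, bit):
    """Embed one watermark bit by forcing the VQ index's parity.
    Assumes the codebook is arranged so that indices 2i and 2i+1
    hold similar codewords, so flipping the last bit barely changes
    the reconstructed block."""
    if index % 2 == bit:
        return index
    return index ^ 1          # switch to the partner codeword

def extract_bit(index):
    """Recover the watermark bit from the received index."""
    return index % 2
```

The index assignment problem the thesis optimizes with a GA is precisely how to order the codebook so that such single-bit index changes (whether deliberate, as here, or caused by channel noise) map to perceptually close codewords.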
Deng, Wen Rui, and 鄧雯瑞. "Clustering of weighted MRI by VQ and a browser of MRI." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/88200965830391045886.
Full textYoung, Chieh-neng, and 楊傑能. "Electrooculogram Signals for the Detection of REM Sleep Via VQ Methods." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/h372fr.
Full textNational Sun Yat-sen University
Department of Mechanical and Electro-Mechanical Engineering
95
One primary topic of sleep studies is the depth of sleep. According to the R&K rules, human sleep can be roughly divided into three stages, wakefulness, non-rapid-eye-movement (NREM) sleep, and rapid-eye-movement (REM) sleep, scored mainly from EEG signals and complemented by EOG and EMG signals. Many researchers have indicated that diseases or disorders occurring during sleep affect patients' quality of life; for example, REM-sleep-related dyssomnia is highly correlated with neurodegenerative and mental disorders such as major depression, and sleep apnea, one of the most common sleep disorders, increases the risk of mental and cardiovascular disease when left untreated. This research proposes a detection method for REM sleep. With the home-care environment in mind, we extract and analyse only EOG signals, which are more convenient to record than EEG channels. By analysing the elementary waveforms of EOG signals with a VQ method, the proposed approach achieves a classification accuracy of 67.71% in a group application, with a sensitivity of 73.38% and a specificity of 68.95%. In personalized applications, the average classification accuracy is 82.02%, with an average sensitivity of 83.05% and specificity of 81.62%. The experimental results demonstrate the feasibility of detecting REM sleep with the proposed method, especially in personalized applications, which should benefit the long-term tracking and study of personal sleep status.
Yan, Long-Jhe, and 顏龍晢. "A Novel Fast Search Algorithm for VQ-Based Speech/Image Coding." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/d3w2ym.
Full textNational Taipei University of Technology
Department of Electrical Engineering
98
This dissertation presents an efficient quasi-binary search algorithm for vector quantization (VQ). The proposed algorithm adopts a tree-structured VQ with overlapped codewords (TSOC) to reduce computational complexity and enhance quantization quality; the overlapped codewords expand the scope of the search path so that more appropriate codewords are traversed. In our speech experiment, compared with full-search VQ (FSVQ), the average computational savings for triangle inequality elimination (TIE), tree-structured VQ (TSVQ), and TSOC are 24.68%, 88.67%, and 58.08%, respectively, with average quantization accuracies of 100%, 46.49%, and 99.15%. To further evaluate the computation at each stage of the proposed algorithm, both speech and images are considered. With codebook sizes of 256, 512, and 1024, the corresponding optimal computational savings for images are 84.59%, 91.08%, and 93.51% compared with FSVQ; for speech, the optimal savings reach 59.43% with a codebook size of 128. The results indicate that the proposed algorithm saves a significant number of computations, depending on the codebook size. TSOC is a trade-off between TSVQ and TIE that provides satisfactory quality at reasonable computation. Moreover, unlike the TIE method, our algorithm does not depend on highly correlated signals to reduce computation, although TIE can be incorporated into our algorithm to reduce the computation dramatically.
Guo, Shu-wei, and 郭書瑋. "Image Hiding Based on Hybrid VQ Compression and Discrete Wavelet Transformation." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/69878854536405822343.
Full textNational Chung Hsing University
Institute of Computer Science
92
Transferring important information over computer networks today carries many risks, so many researchers focus on data hiding. The main idea of data hiding is to embed the important information into a cover image, generating a stego-image; transmitting the stego-image over the network delivers the information to the receiver. Vector quantization (VQ) compression is an effective method of compressing digital images for transmission and storage: its compression rate is high, encoding and decoding are simple, and each block is independent, so the destruction of some blocks does not corrupt the others. Discrete wavelet transformation (DWT), on the other hand, transforms an image from the spatial domain into the frequency domain, dividing the image into components of differing importance so that each kind of data can be processed differently. This thesis proposes a hybrid of VQ compression and discrete wavelet transformation for robust image hiding. The grey-level secret image is compressed with VQ before hiding, and we propose a codebook-training method that improves the quality of the secret image in encryption and decryption. The compressed data of the secret image are encrypted before being embedded into the DWT coefficients of the cover image, and the stego-image is generated by the inverse DWT. Finally, the sender only needs to transmit the stego-image, two keys, and a set of initial codewords to the receiver, who can rebuild the secret image from the received information. Even if the stego-image is damaged, the proposed method can still partially recover the secret image.
Ta-Chin, Chin, and 勤大慶. "A flexible codeword expansion method for VQ trained nearest neighbor classifiers." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/67423967607302917863.
Full textPeng, Chung-Yun, and 彭中鋆. "A Novel Harmonic Competitive Neural Network─Applied to VQ, Clustering and Classification." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/90209094200523147361.
Full textNational Taiwan Ocean University
Department of Electrical Engineering
87
This thesis presents a harmonic online learning algorithm for training self-creating, self-organizing competitive neural networks. The resulting network is called the Harmonic Competitive Neural Network (HCNN). We show that, by employing dual local resource counters to record the activity of each node during competitive learning, the equi-error and equi-probable criteria can be coherently harmonized. Training in HCNN is smooth and incremental: it achieves biologically plausible online learning while avoiding the stability-plasticity dilemma, the dead-node problem, and the deficiency of local minima. Vector quantization, clustering, and classification are essential techniques in image processing and pattern recognition, and we apply the HCNN to all three tasks. In vector quantization, the proposed HCNN is very effective for online learning; comparison studies involving stationary and non-stationary, structured and non-structured inputs demonstrate that HCNN outperforms other competitive networks in quantization error, training speed, and the harmonization of MSE and entropy. Augmented with an agglomerating algorithm, the HCNN is easily tailored to clustering tasks: unlike the k-means algorithm and the MST clustering method, the proposed HCNN-based scheme is fully autonomous in that the number of clusters need not be given in advance, and it consumes less computation time. Finally, we apply HCNN to classification; tested on the two-spiral and iris data, simulation results show that HCNN performs accurate classification.
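For contrast with the HCNN, a plain winner-take-all competitive learning baseline can be sketched as follows. The node count, learning rate, initialisation, and sample data are all illustrative, and none of HCNN's self-creating nodes or dual resource counters are included; this is the vanilla scheme that suffers from the dead-node problem HCNN avoids:

```python
def competitive_train(data, n_nodes=2, lr=0.1, epochs=20):
    """Winner-take-all competitive learning (baseline sketch)."""
    nodes = [list(v) for v in data[:n_nodes]]   # naive initialisation
    for _ in range(epochs):
        for x in data:
            # winner = node nearest to the input (squared Euclidean distance)
            w = min(range(n_nodes),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(x, nodes[j])))
            # move only the winner toward the input
            nodes[w] = [n + lr * (a - n) for n, a in zip(nodes[w], x)]
    return nodes

nodes = competitive_train([(0, 0), (0, 1), (10, 10), (10, 11)])
print(nodes)  # the two nodes drift toward the two input clusters
```

Because only the winner moves, an unluckily initialised node may never win and never learn; HCNN's resource counters are one remedy, biasing the competition so every node stays active.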