
Dissertations / Theses on the topic 'Fast search'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Fast search.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Vassef, Hooman. "Combining fast search and learning for scalable similarity search." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86566.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (leaves 38-39). By Hooman Vassef. S.B. and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
2

Schlieder, Torsten. "Fast similarity search in XML data." [S.l.] : [s.n.], 2003. http://www.diss.fu-berlin.de/2003/108/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kibriya, Ashraf Masood. "Fast Algorithms for Nearest Neighbour Search." The University of Waikato, 2007. http://hdl.handle.net/10289/2463.

Full text
Abstract:
The nearest neighbour problem is of practical significance in a number of fields. Often we are interested in finding an object near to a given query object. The problem is old, and a large number of solutions have been proposed for it in the literature. However, it remains the case that even the most popular of the techniques proposed for its solution have not been compared against each other. Also, many techniques, including the old and popular ones, can be implemented in a number of ways, and often the different implementations of a technique have not been thoroughly compared either. This research presents a detailed investigation of different implementations of two popular nearest neighbour search data structures, KDTrees and Metric Trees, and compares the different implementations of each of the two structures against each other. The best implementations of these structures are then compared against each other and against two other techniques, Annulus Method and Cover Trees. Annulus Method is an old technique that was rediscovered during the research for this thesis. Cover Trees are one of the most novel and promising data structures for nearest neighbour search that have been proposed in the literature.
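The Annulus Method mentioned above lends itself to a compact illustration: if every sample's distance to a fixed pivot is precomputed and sorted, the triangle inequality says that only samples whose pivot distance lies within the current best radius of the query's pivot distance can possibly be closer. The sketch below is a generic rendering of that idea, not the implementation studied in the thesis; all names are illustrative.

```python
import bisect
import math

def build_annulus_index(points, pivot):
    """Sort points by their distance to a fixed pivot point."""
    keyed = sorted((math.dist(p, pivot), p) for p in points)
    dists = [d for d, _ in keyed]
    pts = [p for _, p in keyed]
    return pivot, dists, pts

def annulus_nearest(index, query):
    """Nearest neighbour using triangle-inequality pruning around the pivot."""
    pivot, dists, pts = index
    dq = math.dist(query, pivot)
    # Start from the sample whose pivot distance is closest to dq, expand outwards.
    i = bisect.bisect_left(dists, dq)
    best, best_d = None, float("inf")
    lo, hi = i - 1, i
    while lo >= 0 or hi < len(pts):
        # Stop once both frontiers lie outside the annulus |d(p, pivot) - dq| < best_d.
        lo_ok = lo >= 0 and dq - dists[lo] < best_d
        hi_ok = hi < len(pts) and dists[hi] - dq < best_d
        if not (lo_ok or hi_ok):
            break
        if lo_ok:
            d = math.dist(query, pts[lo])
            if d < best_d:
                best, best_d = pts[lo], d
            lo -= 1
        if hi_ok:
            d = math.dist(query, pts[hi])
            if d < best_d:
                best, best_d = pts[hi], d
            hi += 1
    return best, best_d
```

Expanding outward from the sample whose pivot distance is closest to the query's lets the search stop as soon as both frontiers leave the shrinking annulus.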
APA, Harvard, Vancouver, ISO, and other styles
4

Chung, Hing-yip Ronald, and 鍾興業. "Fast motion estimation with search center prediction." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31220721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Soongsathitanon, Somphob. "Fast search algorithms for digital video coding." Thesis, University of Newcastle Upon Tyne, 2004. http://hdl.handle.net/10443/1003.

Full text
Abstract:
The motion estimation algorithm is one of the important issues in video coding standards such as ISO MPEG-1/2 and ITU-T H.263. These international standards regularly use a conventional Full Search (FS) algorithm to estimate the motion of pixels between pairs of image blocks. Since the FS method requires intensive computation and the distortion function needs to be evaluated many times for each target block, the process is very time-consuming. To alleviate this acute problem, new search algorithms, Orthogonal Logarithmic Search (OLS) and Diagonal Logarithmic Search (DLS), have been designed and implemented. The performance of the algorithms is evaluated using standard 176x144-pixel quarter common intermediate format (QCIF) benchmark video sequences, and the results are compared to the traditional, well-known FS algorithm and a widely used fast search algorithm called the Three Step Search (3SS). The fast search algorithms are known as sub-optimal algorithms as they test only some of the candidate blocks from the search area and choose a match from a subset of blocks. These algorithms can reduce the computational complexity as they do not examine all candidate blocks and hence are algorithmically faster. However, the quality is generally not as good as that of the FS algorithm but can be acceptable in terms of subjective quality. The important metrics, time and Peak Signal to Noise Ratio, are used to evaluate the novel algorithms. The results show that the strength of the algorithms lies in their speed of operation, as they are much faster than the FS and 3SS. The performance in speed is improved by 85.37% and 22% over the FS and 3SS respectively for the OLS. For the DLS, the speed advantages are 88.77% and 40% over the FS and 3SS. Furthermore, the accuracy of prediction of OLS and DLS is comparable to that of the 3SS.
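The FS baseline that OLS, DLS, and the 3SS are measured against is easy to state: evaluate a distortion measure, commonly the sum of absolute differences (SAD), at every displacement in the search window and keep the minimum. A minimal sketch of that baseline under those assumptions (illustrative code, not from the thesis):

```python
def sad(cur, ref, bx, by, dx, dy, n=16):
    """Sum of absolute differences between an n x n block of `cur` at (bx, by)
    and the block of `ref` displaced by (dx, dy)."""
    total = 0
    for y in range(n):
        for x in range(n):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def full_search(cur, ref, bx, by, n=16, p=7):
    """Exhaustive block matching over a (2p+1) x (2p+1) search window."""
    h, w = len(ref), len(ref[0])
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            # Skip displacements that would fall outside the reference frame.
            if not (0 <= bx + dx and bx + dx + n <= w and
                    0 <= by + dy and by + dy + n <= h):
                continue
            cost = sad(cur, ref, bx, by, dx, dy, n)
            if cost < best:
                best, best_mv = cost, (dx, dy)
    return best_mv, best
```

For a +/-7 window this evaluates up to 225 candidate blocks per macroblock, which is exactly the cost the logarithmic searches above try to avoid.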
APA, Harvard, Vancouver, ISO, and other styles
6

Nelson, Jelani (Jelani Osei). "External-memory search trees with fast insertions." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37084.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 65-68). This thesis provides both experimental and theoretical contributions regarding external-memory dynamic search trees with fast insertions. The first contribution is the implementation of the buffered repository B-tree, a data structure that provably outperforms B-trees for updates at the cost of a constant factor decrease in query performance. This thesis also describes the cache-oblivious lookahead array, which outperforms B-trees for updates at a logarithmic cost in query performance, and does so without knowing the cache parameters of the system it is being run on. The buffered repository B-tree is an external-memory search tree that can be tuned for a tradeoff between queries and updates. Specifically, for any ε ∈ [1/lg B, 1] this data structure achieves O((1/(εB^(1-ε)))(1 + log_B(N/B))) block transfers for INSERT and DELETE and O((1/ε)(1 + log_B(N/B))) block transfers for SEARCH. The update complexity is amortized and is O((1/ε)(1 + log_B(N/B))) in the worst case. Using the value ε = 1/2, I was able to achieve a 17 times increase in insertion performance at the cost of only a 3 times decrease in search performance on a database with 12-byte items on a disk with a 4-kilobyte block size. This thesis also shows how to build a cache-oblivious data structure, the cache-oblivious lookahead array, which achieves the same bounds as the buffered repository B-tree in the case where ε = 1/lg B. Specifically, it achieves an update complexity of O((1/B) log(N/B)) and a query complexity of O(log(N/B)) block transfers. This is the first data structure to achieve these bounds cache-obliviously. The research involving the cache-oblivious lookahead array represents joint work with Michael A. Bender, Jeremy Fineman, and Bradley C. Kuszmaul. By Jelani Nelson. M.Eng.
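The cache-oblivious lookahead array described above keeps logarithmically many sorted arrays of geometrically growing size; an insert goes into the smallest array, and full levels are merged downward, which is where the cheap amortized update bound comes from. The toy sketch below shows only that merge-on-overflow idea; it omits the lookahead (fractional-cascading) pointers the real structure uses to achieve the stated query bound, so here a query simply binary searches every level. Class and method names are invented for the example.

```python
import bisect
from heapq import merge

class SimpleLookaheadArray:
    """Toy cache-oblivious-style structure: level k holds either 0 or 2**k sorted keys."""

    def __init__(self):
        self.levels = []  # levels[k] is a sorted list of length 0 or 2**k

    def insert(self, key):
        carry = [key]
        k = 0
        # Merge the carry into successive levels until an empty level is found,
        # like binary addition: full levels are emptied and pushed one level down.
        while True:
            if k == len(self.levels):
                self.levels.append([])
            if not self.levels[k]:
                self.levels[k] = carry
                return
            carry = list(merge(self.levels[k], carry))
            self.levels[k] = []
            k += 1

    def contains(self, key):
        # Without lookahead pointers we simply binary search each non-empty level.
        for level in self.levels:
            i = bisect.bisect_left(level, key)
            if i < len(level) and level[i] == key:
                return True
        return False
```

Each key is merged only a logarithmic number of times in total, which is the in-memory analogue of the O((1/B) log(N/B)) block-transfer update bound quoted in the abstract.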
APA, Harvard, Vancouver, ISO, and other styles
7

Minz, Ian. "Modeling cooperative gene regulation using Fast Orthogonal Search." Thesis, Kingston, Ont. : [s.n.], 2008. http://hdl.handle.net/1974/1364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Begin, Steve. "A search for fast pulsars in globular clusters." Thesis, Link to full text, 2006. http://hdl.handle.net/2429/69.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Huan-sheng. "Fast search techniques for video motion estimation and vector quantization." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kauffman, Kyle J. "Fast target tracking technique for synthetic aperture radars." Oxford, Ohio : Miami University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=miami1250263416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Hazell, Georgina Grace Joan. "Deorphanising G protein-coupled receptors : the search for fast steroid receptors." Thesis, University of Bristol, 2011. http://hdl.handle.net/1983/12fbf473-f360-4831-8123-42698aff4950.

Full text
Abstract:
G protein coupled receptors (GPCRs) are the largest family of transmembrane receptors in the genome and are activated by a multitude of ligands including neuropeptides, hormones and sensory signals. The paraventricular nucleus (PVN) and supraoptic nucleus (SON) of the hypothalamus are important mediators in homeostatic control. Many modulators of PVN/SON activity, including neurotransmitters and hormones, act via GPCRs - in fact over 100 non-chemosensory GPCRs have been detected in either the PVN or SON. The introduction to this thesis begins with a comprehensive summary of GPCR expression within the PVN/SON, with a critique of the detection techniques used within the literature. Also discussed are some aspects of the regulation and known roles of GPCRs in the PVN/SON, as well as the possible functional significance of orphan GPCRs. Particular interest is paid to the recently 'deorphanised' G protein-coupled oestrogen (E2) receptor, GPER, which is the first receptor to be acknowledged as a steroid binding GPCR (although there are conflicting studies regarding its affinity for E2) and is expressed in the PVN and SON. Steroids are known to have fast non-genomic effects that are thought to be mediated in part by membrane-associated forms of the traditional steroid receptors (members of a family of transcription factors). However, the possible discovery of a fast E2 GPCR has raised speculation regarding the existence of other steroid binding GPCRs. Thus the experimental chapters were undertaken to explore the concept of fast steroid receptors, with particular emphasis on their possible roles in neuroendocrine systems. Firstly, the distribution of the putative E2 receptor was investigated to give further insight into its possible in vivo roles. In the rodent, high levels of GPER gene and protein expression were detected in the oxytocin and vasopressin neurones in the PVN and SON, the anterior and intermediate lobe of the pituitary, adrenal medulla and renal medulla and pelvis, suggesting roles for GPER in multiple functions including hormone release. To clarify the controversy surrounding GPER as an E2 receptor, we investigated GPER function in vitro using a series of cell signalling assays. However, E2 did not stimulate GPER-mediated signalling, suggesting that either GPER remains an orphan GPCR, or the cell lines used in this study lacked a vital component for E2 activation of GPER. As the rapid effects of glucocorticoid have been reported in numerous brain regions (including the PVN and SON), endocrine, and other tissues, the second part of this thesis focussed on the search for a possible fast glucocorticoid receptor. We compared the tissue distribution of the gene expression profiles of approximately 125 orphan GPCRs common to human and rodent with tissues that are known to exhibit fast effects of steroids (e.g., hippocampus, PVN, SON, thymus, kidney, etc.). Of the 125 orphans, 3 GPCRs (GPR108, GPR146, and TMEM87B) had distribution profiles that closely matched the regions/tissues of interest. These orphans were tested for glucocorticoid activation using a universal deorphanisation assay. However, the identity of the fast glucocorticoid receptor remains unknown, as none of the candidate orphan GPCRs responded to glucocorticoids.
APA, Harvard, Vancouver, ISO, and other styles
12

Liu, Rongzhi, Barish, Barry C., and Peck, Charles W. "A search for fast moving magnetic monopoles with the MACRO detector /." Diss., Pasadena, Calif. : California Institute of Technology, 1995. http://resolver.caltech.edu/CaltechETD:etd-10232007-094957.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Zellweger, Tobias. "The Dark Side of Fast Fashion - : In Search of Consumers’ Rationale Behind the Continued Consumption of Fast Fashion." Thesis, Stockholms universitet, Företagsekonomiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-145014.

Full text
Abstract:
This study investigates the underlying rationale of environmentally and socially conscious young Swedish consumers for their continued consumption of fast fashion. Furthermore, this study assesses influential factors that shape young Swedish consumers' attitudes and beliefs towards fast fashion. The fast fashion business model is largely based on the exploitation of poor working conditions and the lack of environmental protection laws in the production countries. However, consumers are becoming increasingly aware of this dark side of fast fashion, and the retailers are addressing their concerns with selective organic clothing collections. In order to gain an in-depth understanding of young Swedish consumers' rationalizations, I applied an inductive research approach based on the philosophy of interpretive social science. More specifically, I conducted semi-structured interviews with 12 Swedish participants between the ages of 18 and 25. The findings of this study show that the participants prioritize price, quality and how the clothes look over where they have been produced and under what circumstances. Furthermore, the interviewees indicate a high dependency on the Swedish government to punish the misconduct of fast fashion retailers. Greenwashing, the Swedish school system, and a green trend in contemporary Swedish society seem to shape young consumers' attitudes and beliefs towards fast fashion. Future research could investigate how the Swedish government and the Swedish school system can take a more pro-active role in educating their citizens and students about the actual negative impacts of the overconsumption of fast and disposable fashion on society and the environment.
APA, Harvard, Vancouver, ISO, and other styles
14

FU, Jing-wei, and 傅敬惟. "Fast Hexagon Search Algorithm." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/37928024863874686575.

Full text
Abstract:
Master's thesis, National Pingtung Institute of Commerce, Department of Computer Science and Information Engineering (formerly Department of Information Technology), academic year 97. Motion vector searching is the key issue for video compression. Many studies have been proposed to raise video quality and reduce the number of search points. This thesis starts from the observation that most motion vectors are very short and lie close to (0,0). A new algorithm based on the Hexagon-Based Search algorithm and the Diamond Search algorithm is proposed; it uses fewer search points and gives better video quality than other algorithms.
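For reference, the hexagon-based search that this thesis builds on re-centres a six-point hexagon on its best point until the centre wins, then refines with a small four-point cross; because most motion vectors are near (0,0), only a handful of cost evaluations are usually needed. A minimal sketch of that generic pattern search, assuming a caller-supplied block-distortion function cost(dx, dy) (an illustrative name, not from the thesis):

```python
HEXAGON = [(0, 0), (2, 0), (-2, 0), (1, 2), (-1, 2), (1, -2), (-1, -2)]
CROSS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def hexagon_search(cost, p=7):
    """Hexagon-based search for the displacement minimising cost(dx, dy)
    inside a +/-p search range."""
    cx, cy = 0, 0
    while True:
        candidates = [(cost(cx + dx, cy + dy), (cx + dx, cy + dy))
                      for dx, dy in HEXAGON
                      if abs(cx + dx) <= p and abs(cy + dy) <= p]
        best_cost, best_pos = min(candidates, key=lambda t: t[0])
        if best_pos == (cx, cy):
            break                      # centre wins: switch to the small cross
        cx, cy = best_pos
    candidates = [(cost(cx + dx, cy + dy), (cx + dx, cy + dy))
                  for dx, dy in CROSS
                  if abs(cx + dx) <= p and abs(cy + dy) <= p]
    best_cost, best_pos = min(candidates, key=lambda t: t[0])
    return best_pos, best_cost
```

The thesis's contribution lies in how the hexagon and diamond patterns are combined and ordered, which this generic sketch does not attempt to reproduce.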
APA, Harvard, Vancouver, ISO, and other styles
15

Chang, Alan, and 張哲維. "Fast Similarity Search in String Databases." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/98631286343899640374.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Department of Computer Science, academic year 93. Efficient similarity search in large string databases requires effective index support. Since each long string has numerous substrings of arbitrary length, effective index design is a great challenge. The existing solution, namely MRS [11], employs a low-cost lower bound function to sieve out the most similar candidates from the majority of unlikely database substrings. Therefore, only very small portions of string databases require the expensive true edit distance computation to finalize the query. A significant saving in overall query processing cost can be realized by the filtration feature of lower bound functions. In this paper, we seek to improve MRS to its full potential. Specifically, we propose a very simple method that exchanges the roles of database strings and query string in the original MRS design. Despite its simplicity, our solution can further improve query performance by 10 times in terms of disk page accesses while using only half of the original index's size. Keywords: String Index, Similarity Search, Edit Distance, Near Neighbor Query
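The "expensive true edit distance computation" that the lower-bound filter postpones is the standard dynamic program over string prefixes, quadratic in the string lengths. A textbook sketch for context (not the MRS index or the method proposed in the thesis):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum number of insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))          # distances from a[:0] to every prefix of b
    for i, ca in enumerate(a, start=1):
        cur = [i]                           # distance from a[:i] to the empty prefix
        for j, cb in enumerate(b, start=1):
            cur.append(min(
                prev[j] + 1,                # delete ca
                cur[j - 1] + 1,             # insert cb
                prev[j - 1] + (ca != cb),   # substitute (free if characters match)
            ))
        prev = cur
    return prev[-1]

assert edit_distance("kitten", "sitting") == 3
```

Because a single call costs O(|a|·|b|), filtering out most candidate substrings with a cheap lower bound before ever calling it is where the reported savings come from.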
APA, Harvard, Vancouver, ISO, and other styles
16

Cakir, Fatih. "Online hashing for fast similarity search." Thesis, 2017. https://hdl.handle.net/2144/27360.

Full text
Abstract:
In this thesis, the problem of online adaptive hashing for fast similarity search is studied. Similarity search is a central problem in many computer vision applications. The ever-growing size of available data collections and the increasing usage of high-dimensional representations in describing data have increased the computational cost of performing similarity search, requiring search strategies that can explore such collections in an efficient and effective manner. One promising family of approaches is based on hashing, in which the goal is to map the data into the Hamming space where fast search mechanisms exist, while preserving the original neighborhood structure of the data. We first present a novel online hashing algorithm in which the hash mapping is updated in an iterative manner with streaming data. Being online, our method is amenable to variations of the data. Moreover, our formulation is orders of magnitude faster to train than state-of-the-art hashing solutions. Secondly, we propose an online supervised hashing framework in which the goal is to map data associated with similar labels to nearby binary representations. For this purpose, we utilize Error Correcting Output Codes (ECOCs) and consider an online boosting formulation in learning the hash mapping. Our formulation does not require any prior assumptions on the label space and is well-suited for expanding datasets that have new label inclusions. We also introduce a flexible framework that allows us to reduce hash table entry updates. This is critical, especially when frequent updates may occur as the hash table grows larger and larger. Thirdly, we propose a novel mutual information measure to efficiently infer the quality of a hash mapping and retrieval performance. This measure has lower complexity than standard retrieval metrics. With this measure, we first address a key challenge in online hashing that has often been ignored: the binary representations of the data must be recomputed to keep pace with updates to the hash mapping. Based on our novel mutual information measure, we propose an efficient quality measure for hash functions, and use it to determine when to update the hash table. Next, we show that this mutual information criterion can be used as an objective in learning hash functions, using gradient-based optimization. Experiments on image retrieval benchmarks confirm the effectiveness of our formulation, both in reducing hash table recomputations and in learning high-quality hash functions.
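As background for the hashing pipeline the abstract describes, mapping vectors to binary codes and ranking by Hamming distance can be illustrated with plain random-hyperplane hashing. This is only a generic sketch of that pipeline in NumPy; it is not the online, supervised, or mutual-information-based method proposed in the thesis, and the function names are made up for the example.

```python
import numpy as np

def fit_hyperplanes(dim: int, n_bits: int, seed: int = 0) -> np.ndarray:
    """Draw random hyperplanes; the sign pattern of projections is the hash code."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_bits, dim))

def hash_codes(planes: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Map rows of x (n, dim) to binary codes (n, n_bits) of 0/1."""
    return (x @ planes.T > 0).astype(np.uint8)

def hamming_search(codes: np.ndarray, query_code: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k database codes closest to the query in Hamming distance."""
    dists = np.count_nonzero(codes != query_code, axis=1)
    return np.argsort(dists)[:k]

# Tiny usage example on random data.
data = np.random.default_rng(1).standard_normal((1000, 64))
planes = fit_hyperplanes(64, 32)
db = hash_codes(planes, data)
query = hash_codes(planes, data[:1])[0]
print(hamming_search(db, query, k=3))   # index 0 should rank first
```

The thesis's methods replace the fixed random projection with a hash mapping that is learned and updated online while the binary codes stay cheap to compare.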
APA, Harvard, Vancouver, ISO, and other styles
17

Wang, Po-chung, and 王柏忠. "Fast Pattern Classification through Nearest-Neighbor Search." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/46937856524916401924.

Full text
Abstract:
Master's thesis, National Kaohsiung First University of Science and Technology, Institute of Computer and Communication Engineering, academic year 98. Over recent years, support vector machines (SVMs) have been widely used for solving a variety of classification problems in the fields of pattern recognition and data mining applications. One basic principle behind SVMs is to predict the class label of a testing sample by using the optimal hyperplane determined from labeled training samples. Obviously, this principle brings SVMs a limitation: they are computationally infeasible for training on a very large-scale dataset. To overcome this drawback, an intuitive approach is to reduce the number of training samples that are unrelated to the construction of the optimal hyperplane. In this thesis, an efficient approach based on nearest neighbor search is therefore proposed to identify non-relevant samples that can subsequently be removed from a large-scale training dataset without degrading the classification accuracy. The performance of the proposed approach is assessed through the use of several publicly available datasets such as IRIS, Monks, and Forest. Experimental results demonstrate that the proposed approach is a significant improvement compared with previous attempts in terms of the reduced number of training samples, the time taken for SVM training procedures, and the classification accuracy.
APA, Harvard, Vancouver, ISO, and other styles
18

Lee, Shang-Ju, and 李尚儒. "A Novel Algorithm for Fast Codebook Search." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/47491812853646618150.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Electrical and Control Engineering, academic year 98. In this thesis, we propose an algorithm to reduce the complexity of searching for the most suitable codeword in a given codebook. It is proven in the thesis that about half of the codewords are eliminated in each iteration. In addition, we derive two lower bounds for the proposed algorithm and show that they reach the actual SNR loss for high-resolution codebooks. Furthermore, complexity analysis and simulations are given to show that the advantages of the algorithm are most evident in the scenario of a large codebook size.
APA, Harvard, Vancouver, ISO, and other styles
19

Kuo, Ching-Lin, and 郭景林. "Fast Codeword Search Techniques for Vector Quantization." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/71639511911627628887.

Full text
Abstract:
Master's thesis, National Chung Cheng University, Department of Computer Science and Information Engineering, academic year 84. In this thesis, we propose two fast codeword search techniques for vector quantization. One is the closest-paired tree-structured vector quantization (CPTSVQ), and the other is the double test equal-average nearest neighbor search (DTENNS) algorithm. The CPTSVQ is a kind of tree-structured vector quantization (TSVQ). In CPTSVQ, the closest-pair technique is used to enlarge the search range of the multipath search algorithm and improve the image quality. According to our experimental results, if the number of search paths of CPTSVQ is equal to that of TSVQ, the image quality of CPTSVQ is better than that of TSVQ, though CPTSVQ spends more time encoding images than TSVQ. Moreover, the performance of CPTSVQ lies between the results of TSVQ for different numbers of search paths. The newly proposed CPTSVQ, based on TSVQ, thus provides more choices for the trade-off between improving image quality and shortening encoding time. The DTENNS algorithm not only reduces the encoding time but also encodes the image with the same quality as the full search algorithm. Moreover, the encoding time of the DTENNS algorithm is faster than that of the equal-average nearest neighbor search (ENNS) algorithm, which was recently proposed by Guan et al.
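The full-search baseline that CPTSVQ and DTENNS accelerate compares the input vector with every codeword. Two classic rejection rules used throughout this literature are the equal-average (mean) test and partial distance elimination; the sketch below shows those generic rules only, written independently of the thesis and assuming a squared-Euclidean distortion.

```python
def nearest_codeword(x, codebook):
    """Full search with two classic speed-ups: a mean-based rejection test
    (ENNS-style) and partial distance elimination (PDE)."""
    k = len(x)
    mx = sum(x) / k
    # Pre-computing codeword means once per codebook is the usual practice.
    means = [sum(c) / k for c in codebook]

    best_i, best_d = -1, float("inf")
    for i, c in enumerate(codebook):
        # Mean test: k * (mean difference)^2 is a lower bound on the squared
        # Euclidean distance, so such codewords can be skipped safely.
        if k * (mx - means[i]) ** 2 >= best_d:
            continue
        # Partial distance elimination: stop accumulating once the partial
        # sum already exceeds the best distance found so far.
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:
                break
        else:
            best_i, best_d = i, d
    return best_i, best_d
```

Both tests are safe: k times the squared mean difference never exceeds the squared Euclidean distance, and a partial sum that already exceeds the current best can only grow.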
APA, Harvard, Vancouver, ISO, and other styles
20

Chen, Yong-Sheng, and 陳永昇. "Fast Algorithms for Block Matching, Nearest Neighbor Search, and DNA Sequence Search." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/14512640917239947049.

Full text
Abstract:
PhD dissertation, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 89. Template matching has been widely used in image and video compression, visual tracking, stereo vision, pattern classification, object recognition, and information retrieval in database systems. Among the major difficulties of template matching is its high computational cost when dealing with large amounts of data. In this thesis, we propose techniques that can greatly improve the computational efficiency of template matching while still guaranteeing the optimal search. These techniques are applied to speed up the applications of block matching, nearest neighbor search, and DNA sequence database search. The key idea of how we speed up the template matching process is the utilization of distance lower bounds. Our goal is to find in a search range the object yielding the minimum distance to the query object. Therefore, calculation of a distance can be skipped if any of its lower bounds is larger than the global minimum distance. Since the computation of the distance lower bound utilized in this work costs less than that of the distance itself, the overall process can be accelerated. Moreover, the winner-update search strategy is used to reduce the number of distance lower bounds actually calculated. Several data transformation techniques are also adopted to tighten the distance lower bounds. Thus further speedup is achieved. For the block matching application in video compression and visual tracking, we propose a new fast algorithm based on the winner-update search strategy which utilizes an ascending lower bound list of the matching error to determine the winner. At each search position, the costly computation of matching error can be avoided when there exists a lower bound larger than the global minimum matching error. The proposed algorithm can significantly speed up the computation of the block matching because (1) the computational cost of the lower bound we use is less than that of the matching error itself; (2) an element in the ascending lower bound list will be calculated only when its preceding element has already been smaller than the minimum matching error computed so far; and (3) for many search positions, only the first several lower bounds in the list need to be calculated. Our experiments have shown that, when applied to motion vector estimation for several widely-used test videos, 92% to 98% of operations can be saved. Moreover, we apply the proposed block matching algorithm to a video-based face/eye tracking system. In our experiments, the face and eye positions of the user can be obtained at the video frame rate. We also propose in this thesis a fast and versatile algorithm which can perform a variety of nearest neighbor searches very efficiently. At the preprocessing stage, the proposed algorithm constructs a lower bound tree (LB-tree) by agglomeratively clustering all the sample points. Given a query point, the lower bound of its distance to each sample point can be calculated by using the internal nodes of the LB-tree. Calculations of distances from the query point to many sample points can be avoided if their less expensive lower bounds are larger than the minimum distance. To reduce the number of lower bounds actually calculated, the winner-update search strategy is used for tree traversal. For further efficiency improvement, data transformation can be applied to the sample and query points.
In addition to finding the nearest neighbor, the proposed algorithm can also (i) provide the k-nearest neighbors progressively; (ii) find the nearest neighbors within a specified distance threshold; and (iii) identify neighbors close to the nearest neighbor. Our experiments have shown that the proposed algorithm can save substantial computation, particularly when the distance of the query point to its nearest neighbor is relatively very small compared with its distance to most other samples (which is the case for many object recognition problems). When applied to the real database used in Nayar's 100 object recognition system, the proposed algorithm is about one thousand times faster than the exhaustive search. This performance is roughly eighteen times faster than the result attained by Nene and Nayar, whose method is by far the best method we know. In the application of DNA sequence database search, our goal is to find all the sequence segments in the database that are similar enough (compared to a threshold value) to the query sequence. We propose in this thesis a string-to-signal transform technique which can transform a DNA sequence into multi-channel signals. Without considering gaps, the similar score between two DNA sequences can be calculated as the sum of absolute difference between their corresponding signals. Fast template matching techniques presented in this thesis can then be applied to greatly speed up the search process. Moreover, these techniques guarantee the optimal search. That is, all the sequence segments that are similar enough to the query sequence can be found without any miss.
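The winner-update strategy described above can be illustrated with the simplest possible ascending lower-bound list: partial sums of squared differences, which never exceed the full distance. Candidates sit in a priority queue keyed by their current bound, and only the most promising one is refined at each step, so most candidates never have their full distance computed. The sketch below uses that simplification rather than the multilevel (pyramid or transform-domain) bounds developed in the thesis.

```python
import heapq

def winner_update_nn(query, samples):
    """Nearest neighbour (squared Euclidean) via the winner-update strategy.
    Each heap entry carries a partial sum of squared differences, which is a
    lower bound on the full distance; the entry with the smallest bound is
    refined by one more dimension per step."""
    dim = len(query)
    # Heap entries: (lower_bound, dimensions_used, sample_index)
    heap = [(0.0, 0, i) for i in range(len(samples))]
    heapq.heapify(heap)
    while heap:
        bound, used, i = heapq.heappop(heap)
        if used == dim:
            # All dimensions accumulated: this bound is the exact distance and,
            # being the smallest key in the heap, no other candidate can beat it.
            return i, bound
        diff = query[used] - samples[i][used]
        heapq.heappush(heap, (bound + diff * diff, used + 1, i))
    return None, float("inf")
```

Tighter bound hierarchies plug into the same loop unchanged; the tighter the bounds, the fewer refinements reach the full dimension, which is where the reported savings come from.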
APA, Harvard, Vancouver, ISO, and other styles
21

Bégin, Steve. "A search for fast pulsars in globular clusters." Thesis, 2006. http://hdl.handle.net/2429/17874.

Full text
Abstract:
Millisecond pulsars (MSPs) are old neutron stars that have been spun up to high spin frequencies (as fast as 716 Hz) through the accretion of matter from a companion star. The extreme stellar densities in the cores of globular clusters create numerous accreting neutron star systems through exchange interactions; this leads to the formation of MSPs in larger numbers than in the galactic disk. Over the course of this project, we have collected over 17 TB of data on the 3 globular clusters M28, NGC6440 and NGC6441, plus 2 observations of NGC6522 and NGC6624, as part of the recently begun S-band survey using the Green Bank telescope. I have analyzed and conducted acceleration searches on 70% of the data and discovered 7 of the 23 new millisecond pulsars reported in this work. One year of timing observations of the pulsars in M28 and NGC6440 has led to phase-connected solutions for 12 of the 15 new pulsars in those two clusters, 7 of which are in binaries. We have measured the rate of advance of periastron for two highly eccentric binaries and, assuming this is purely due to general relativity, this leads to total system masses of (1.616 ± 0.014) M☉ and (2.2 ± 0.8) M☉ for M28C and NGC6440B respectively. The small mass function combined with this information implies that the most likely neutron star mass of NGC6440B is either very large or else there could be a significant contribution to the advance of periastron from a nonzero quadrupole moment due to tidal interaction with the companion. Measurements of the period derivatives for many of the pulsars show that they are dominated by the dynamical effect of the gravitational field of the clusters. Finally, we have discovered the potential presence of a Mars-mass planet orbiting the pulsar NGC6440C with a period of ~21 days. A dedicated timing campaign will be necessary to confirm the presence of such an object. Faculty of Science, Department of Physics and Astronomy, Graduate.
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Jun-Ting, and 陳俊廷. "A HYBRID CANDIDATE SCHEME FOR FAST ACELP SEARCH." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/24759404532951137604.

Full text
Abstract:
Master's thesis, Tatung University, Institute of Communication Engineering, academic year 93. Speech coding has developed through a series of technological advances and replacements, all aimed at the best possible compatibility or trade-off between finite resources and market requirements; the finite resource is usually closely related to system computational complexity. In an ACELP speech codec, for example, a huge amount of computation is concentrated in the codebook search. Therefore, how to perform a fast and accurate search of the algebraic codebook becomes very important, and this is the main purpose of this thesis. This thesis supplies a fast and accurate scheme for codebook search that mixes the characteristics of two fast search schemes: one from a paper by the Department of Electrical Engineering of National Cheng Kung University [6], called "Chen's scheme" in this thesis, and another from a graduation thesis by a senior classmate last year [32], called "Wang's scheme" in this thesis. We call this new search scheme the "Hybrid scheme". We adopt the ITU-T G.729 codec standard as the experimental example. The experimental results show that the Hybrid scheme indeed performs better than the two schemes it combines, Chen's scheme and Wang's scheme, in codebook search performance and in the reduction of computational complexity, while preserving a certain degree of speech quality. After a complexity comparison and a detailed analysis of the Hybrid scheme and the depth-first tree search, we show that the Hybrid scheme has value as a reference and offers a flexible choice for system developers.
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Wen-Chin, and 王文祺. "AN IMPROVED CANDIDATE SCHEME FOR FAST ACELP SEARCH." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/19280679835777081194.

Full text
Abstract:
Master's thesis, Tatung University, Institute of Electrical Engineering, academic year 91. In the era of third-generation (3G) wireless personal communications, although multimedia applications such as video and data communication have become more and more popular, speech communication is still one of the most important mobile radio services. The ACELP (algebraic code-excited linear prediction) algorithm is based on the code-excited linear prediction (CELP) coding model and has been adopted by many speech coding standards, such as ITU G.723.1 and G.729 as well as the GSM EFR standard, due to its high speech quality, low complexity, and inherent robustness to channel noise. However, the full codebook search has an operational load of 8192, so fast search techniques that retain overall high voice quality at a low average number of search loops are increasingly important. To further reduce the computational complexity, this thesis proposes an improved candidate scheme for fast ACELP search: a designed pilot function predicts the predetermined candidate pulses by selecting the positions with the maximum forward difference of the target signal. A region is defined to include one point before and three points after each selected position, and the number of regions needed is decided according to the required speech quality. The scheme is evaluated in the 7.95 and 7.4 kbit/s modes of the Adaptive Multi-Rate (AMR) codec of the 3rd Generation Partnership Project (3GPP) standard coder, where the sampling rate is 8 kHz with 16-bit resolution, the frame size is 20 ms, and each frame has a bit stream of 17 bits. Simulation results show that the computational load can be reduced by about 50-80% with almost imperceptible degradation in performance.
APA, Harvard, Vancouver, ISO, and other styles
24

Chang, Ming-Che, and 張銘哲. "Adaptive Cross Search for Fast Motion Estimation Algorithm." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/su6hj9.

Full text
Abstract:
Master's thesis, National Kaohsiung First University of Science and Technology, Institute of Computer and Communication Engineering, academic year 96. When digital video data are stored in storage devices or transmitted over a communication channel, they require huge storage space or occupy wide transmission bandwidth. This has led to the great development of and demand for video compression standards like MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and H.264. Motion estimation plays a very important role in a video data compression system. Its main goal is to find similar data between neighboring video frames so as to reduce the temporal redundancy between frames. Therefore, the accuracy of motion estimation has a big influence on the quality of the reconstructed video in a video coding system. Another important issue with motion estimation is its high computational complexity, which makes it time-consuming in a coding scheme. For these reasons, a fast and accurate motion estimation algorithm is very important in video coding. Block-matching motion estimation is often used in video coding. Among block-matching algorithms, the full-search algorithm (FS) gives the best quality of reconstructed video; however, it requires too many search points and is therefore time-consuming. Hence, many fast block-matching algorithms have been developed, such as the three-step search algorithm (TSS), new three-step search algorithm (NTSS), four-step search algorithm (4SS), block-based gradient descent search algorithm (BBGDS), diamond search algorithm (DS), and hexagon-based search algorithm (HEXBS). These fast search algorithms try to provide an acceptable quality of reconstructed video while reducing the number of search points for each block as much as possible. In this thesis, we propose a new algorithm based on a cross search pattern together with additional strategies such as prediction, thresholding, and hierarchy, which help us find the motion vector for each block quickly. Experimental results show that, compared to existing search algorithms, the proposed algorithm requires fewer search points for each block while maintaining similar performance in terms of motion compensation error and quality of the reconstructed video.
APA, Harvard, Vancouver, ISO, and other styles
25

Chang, Shun-Chieh, and 張舜傑. "The Research of VQ-Based Fast Search Algorithm." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/k7gsrf.

Full text
Abstract:
PhD dissertation, National Taipei University of Technology, Department of Electrical Engineering, academic year 100. This dissertation proposes a fast search algorithm for vector quantization (VQ) based on a fast locating method, and uses learning and trade-off analysis to implement this algorithm. The proposed algorithm is a binary search space VQ (BSS-VQ) that determines a search subspace by binary search in each dimension, after which the full search VQ (FSVQ) or partial distance elimination (PDE) is used to obtain the best-matched codeword. In the trade-off analysis, a slight loss occurred in quantization quality; however, a substantial computational saving was achieved. In the learning analysis, the BSS was built by a learning process that uses full search VQ (FSVQ) as the inferred function. The BSS-VQ algorithm is applied to the line spectral pairs (LSP) encoder of the G.729 standard, which is a two-stage VQ encoder with a codebook size of 128 and two small codebook sizes of 32. In addition, a moving average (MA) filter is applied to the LSP parameters beforehand, which degrades the high correlation between consecutive speech frames. These factors present a challenge for developing fast and efficient search algorithms for VQ. In the experiment, the computational savings of DITIE, TSVQ, and BSS-VQ are 61.72%, 88.63%, and 92.36%, respectively, and the quantization accuracy of DITIE, TSVQ, and BSS-VQ is 100%, 26.07%, and 99.22%, respectively, which confirms the excellent performance of the BSS-VQ algorithm. Moreover, unlike the TIE method, the BSS-VQ algorithm does not depend on the high correlation characteristics of signals to reduce the amount of computation; thus, it is suitable for the LSP encoder of the G.729 standard.
APA, Harvard, Vancouver, ISO, and other styles
26

Chou, Tung, and 周彤. "Fast Exhaustive Search for Polynomial Systems over F2." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/72472312199039751567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Jhao, Bin-Cheng, and 趙斌成. "Fast predictive search algorithm for video motion estimation." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/43132089826832406542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Liang, Wong Mu, and 王木良. "Fast codebook search schemes in CELP speech coder." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/33258578647155591386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Liu, Shen-hsien, and 劉昇顯. "Enhanced zero-block decision with fast motion search." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/12068245007983493901.

Full text
Abstract:
Master's thesis (in-service program), National Central University, Institute of Communication Engineering, academic year 98. H.264/AVC is the latest video coding standard. In order to achieve the highest coding efficiency, H.264 adopts complicated coding schemes, employing motion compensation with variable block-size motion estimation, multiple reference frame motion estimation, a de-blocking filter, an integer transform, etc. Unfortunately, these features incur a considerable increase in encoder complexity, mainly with regard to mode decision and motion estimation. In this thesis, we first review the characteristics of zero blocks and the zero-block mode decision (ZBD) algorithm; that algorithm works well at high QP and for slow-motion video, but at low QP and for complex moving images its performance has room for improvement. We propose an enhanced zero-block decision algorithm that uses a pattern, instead of the zero-block count, to decide the best mode, and we add a motion search speed-up to improve performance further. Experimental results show that, with almost no loss in image quality, the proposed method reduces the computational complexity of the encoder and also improves the quantitative results for low-QP and complex moving sequences.
APA, Harvard, Vancouver, ISO, and other styles
30

Hsieh, Yen-Chou, and 謝衍州. "Fast Packet Classification Based on Binary Prefix Search." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/36065667203365441311.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Computer Science and Information Engineering, academic year 92. Fast packet classification is required to handle the increasing traffic demand caused by the rapid growth of the Internet. Packet classification is often the first packet processing step in routers. Because of the complexity of the matching algorithms, packet classification is often a bottleneck in the performance of the network infrastructure, and most algorithmic solutions do not scale very well. In this thesis, we propose a novel packet classification algorithm based on binary prefix search. The data structure of a d-dimensional rule table is converted into a d-level sorted array for binary search on each level. We evaluated our scheme on a variety of filter tables and compared it with other existing schemes. Our experiments show that the proposed scheme performs better than existing schemes in terms of speed and storage requirements. Specifically, the performance improvements of the proposed scheme in classification speed over the aggregated bit vector are 29-97% and 63-75% for tables of 1K-20K 2D rules and 100-2000 5D rules, respectively.
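The primitive underneath this family of schemes is a per-field binary search: the prefixes of one dimension are converted into sorted, non-overlapping ranges, and a header value is located with a single binary search. A single-field illustration follows; the d-level structure, rule aggregation, and prefix encoding of the thesis are not reproduced here, and the rule names are hypothetical.

```python
import bisect

def build_ranges(ranges):
    """ranges: list of (lo, hi, rule_id) with non-overlapping [lo, hi] intervals.
    Returns parallel arrays sorted by lower endpoint for binary search."""
    ranges = sorted(ranges)
    lows = [lo for lo, _, _ in ranges]
    return lows, ranges

def match_field(index, value):
    """Binary-search the range containing `value`; return its rule_id or None."""
    lows, ranges = index
    i = bisect.bisect_right(lows, value) - 1
    if i >= 0:
        lo, hi, rule_id = ranges[i]
        if lo <= value <= hi:
            return rule_id
    return None

# Example: destination-port ranges of three hypothetical rules.
index = build_ranges([(0, 1023, "well_known"), (1024, 49151, "registered"),
                      (49152, 65535, "dynamic")])
assert match_field(index, 80) == "well_known"
assert match_field(index, 50000) == "dynamic"
```

A d-dimensional classifier repeats this kind of lookup once per level, which is why per-level search speed and memory layout dominate the comparisons reported above.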
APA, Harvard, Vancouver, ISO, and other styles
31

Kuo, Chien-Liang, and 郭建良. "Fast Partial Codebook Search Algorithm for Vector Quantization." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/91097170823278569147.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Computer Science and Information Engineering, academic year 91. In this thesis, we propose two methods, based on the mean-sorted approach, that filter out impossible codewords in advance, with the aim of reducing the number of Euclidean distance calculations during encoding. In these algorithms, a different projection mask is chosen for each codeword according to its pixel values and stored with the codebook; alternatively, after the codebook is produced, a unique projection mask is generated for each codeword according to the distribution of its pixel values. During compression, the stored masks are applied to the codeword and the source vector, and a distortion measure is used as a screening test; only when the test is passed is the full Euclidean distance computed. The proposed method is based on MPS and avoids unnecessary Euclidean distance calculations, because during the codebook search we do not need to compute the full squared Euclidean distance to judge whether a codeword can be the closest one. Experiments show that this method reduces the amount of computation effectively. In addition, because the method is not fixed to one particular projection mask and can choose different mask types according to the differences among vectors, it is flexible, and its performance is equally good on all types of images.
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Shu-Yen, and 王書彥. "Fast Cellular Search Algorithm for Block-Matching Estimation." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/22435183367158286304.

Full text
Abstract:
Master's thesis, Chang Gung University, Institute of Electrical Engineering, academic year 91. Multimedia data, which include images, audio and video, keep growing with the progress of digital technology and the development of the Internet. When digital video data are stored in storage devices or transmitted over a communication channel, they require huge storage space or occupy wide transmission bandwidth. Motion estimation therefore takes an important role in a video coder and has a big influence on the performance of a video coding system. In general, the motion field of the current block can be tracked from the motion fields of the neighboring blocks in the spatial and temporal directions. Among the many types of motion estimation methods, block matching algorithms are commonly used in many video compression standards. The full search block matching algorithm gives the best quality of reconstructed video, but it imposes a heavy computational load. Many fast algorithms, such as the three-step search algorithm, the diamond search algorithm, and the cellular search algorithm, have been developed to reduce the computational complexity. In this thesis, we propose a new search algorithm that improves on the cellular search: its rules adaptively decide which points really need to be checked and remove redundant checking points by exploiting the interrelation of motion vectors. In addition, we use two performance and efficiency criteria, peak signal-to-noise ratio and computational complexity, to compare the proposed motion estimation with the traditional three-step search algorithm and the cellular search algorithm. In the simulation study, the results show that the same image quality was obtained while requiring 50% less computation than the cellular search algorithm.
APA, Harvard, Vancouver, ISO, and other styles
33

HUANG, SAN-YI, and 黃三益. "Dynamic bucketing: a data structure for fast range search." Thesis, 1988. http://ndltd.ncl.edu.tw/handle/00446121402221182975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

XIE, WAN-MING, and 謝萬明. "Fast algorithms for VQ codebook design and search." Thesis, 1989. http://ndltd.ncl.edu.tw/handle/16068615464186125222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Wei-Jeng, and 王偉政. "Fast Exhaustive Search for Polynomial Systems over F3." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/49431959255690879055.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Electronics Engineering, academic year 104. Solving multivariate polynomial systems over finite fields is an important problem in cryptography. For random F2 low-degree systems with equally many variables and equations, enumeration is more efficient than advanced solvers for all practical problem sizes. Whether there are others remained an open problem. We here study and propose an exhaustive-search algorithm for low-degree systems over F3 which is suitable for parallelization. We implemented it on Graphic Processing Units (GPUs) and commodity CPUs. Its optimizations and differences from the F2 case are also analyzed. We can solve 30+ quadratic equations in 30 variables on an NVIDIA GeForce GTX 980 Ti in 14 minutes; a cubic system takes 36 minutes. This well outperforms existing solvers. Using these results, we compare Gröbner Bases vs. enumeration for polynomial systems over small fields as the sizes go up.
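The baseline these exhaustive-search theses accelerate is plain enumeration: try every point of F_3^n and keep those on which all polynomials vanish. The sketch below is only that naive baseline; the actual work gains its speed from Gray-code-style incremental evaluation and GPU parallelism, none of which is shown, and the coefficient format is assumed for the example.

```python
from itertools import product

def eval_quadratic(poly, x):
    """poly = (Q, L, c): quadratic coefficients Q[i][j] (i <= j), linear L[i],
    constant c, all in {0, 1, 2}. Evaluate modulo 3 at point x in F_3^n."""
    Q, L, c = poly
    val = c
    n = len(x)
    for i in range(n):
        val += L[i] * x[i]
        for j in range(i, n):
            val += Q[i][j] * x[i] * x[j]
    return val % 3

def solve_by_enumeration(polys, n):
    """Return all points of F_3^n on which every polynomial vanishes."""
    return [x for x in product(range(3), repeat=n)
            if all(eval_quadratic(p, x) == 0 for p in polys)]

# Tiny example: x0*x1 + x0 + 1 = 0 and x1 + 2 = 0 over F_3; the only solution is (1, 1).
p1 = ([[0, 1], [0, 0]], [1, 0], 1)
p2 = ([[0, 0], [0, 0]], [0, 1], 2)
print(solve_by_enumeration([p1, p2], 2))
```

Even this naive loop makes the 3^n scaling visible: 30 variables already mean about 2 x 10^14 points, which is why the incremental evaluation and massive parallelism reported above matter.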
APA, Harvard, Vancouver, ISO, and other styles
36

Awekar, Amit Chintamani. "Fast, incremental, and scalable all pairs similarity search." 2009. http://www.lib.ncsu.edu/theses/available/etd-12022009-094010/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Yeh, Chien-hsing, and 葉建興. "A Fast Quantum Search Algorithm and its Application." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/93712296814320678370.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Electronic Engineering, academic year 95. Quantum computation and quantum information science, which combine the exploration of quantum mechanics and new physical principles, hold promise for solving complicated problems that are not tractable by conventional computers. In this thesis, we consider quantum search algorithms for finding the minimum in an unordered database of N items. Classically, the running time required to locate the minimum is O(N) steps. To alleviate the computational load, various quantum search algorithms of complexity O(N^{1/2}) have been proposed. This thesis presents two fast Grover-based quantum search algorithms to find the minimum more efficiently. The first one uses quantum counting to estimate the number of marked states and thereby determine the number of iterations required, whereas the second one, to reduce the complexity called for, adaptively adjusts the number of iterations based on whether a lower minimum is found in the present iteration. A thorough comparison with other related quantum search algorithms is also made. To justify the validity of the new algorithm, we apply it to antenna selection and block-based motion estimation problems. As shown by the provided simulations, the new algorithm offers lower computational complexity compared with previous works in various scenarios.
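For context, the textbook Grover/amplitude-amplification facts that such minimum-finding algorithms build on can be stated compactly (a standard result, not something specific to this thesis). With N items of which M are marked, the success probability after k Grover iterations and the near-optimal iteration count are

```latex
\sin\theta = \sqrt{M/N},
\qquad
P_{\mathrm{success}}(k) = \sin^{2}\!\bigl((2k+1)\theta\bigr),
\qquad
k_{\mathrm{opt}} \approx \frac{\pi}{4}\sqrt{\frac{N}{M}},
```

so each Grover stage costs O(sqrt(N/M)) oracle queries instead of the Θ(N) classical scan, and estimating M (for example by quantum counting, as in the first algorithm above) is what fixes the iteration count in practice.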
APA, Harvard, Vancouver, ISO, and other styles
38

Yang, Kuan-Hua, and 楊冠華. "Fast No Search Fractal Coding for Color Images." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/14825267229564404808.

Full text
Abstract:
Master's thesis, Tatung University, Institute of Communication Engineering, academic year 93. Fractal image coding has been used in many image processing applications in recent years. In fractal image coding, most of the time is spent searching for the best matching domain block and the parameters of the transform function. If we want to transmit an image over the Internet or store it in some device, fast fractal encoding is desirable. In this thesis, fast no-search fractal coding methods for color images are proposed which speed up the encoding while maintaining image fidelity. Based on the iterated function system (IFS) and the recurrent iterated function system (RIFS), we propose no-search fractal coding methods for RGB images and YCbCr images. Extensive simulations show that our no-search fractal coding methods achieve faster encoding than the conventional fractal coding method and maintain high quality of the decoded images.
APA, Harvard, Vancouver, ISO, and other styles
39

Lu, Chih-Te, and 盧志德. "Multiview Encoder Parallelized Fast Search Realization on NVIDIA CUDA." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/vdd5k4.

Full text
Abstract:
Master's thesis, National Taipei University of Technology, Department of Computer Science and Information Engineering, academic year 98. Due to the rapid growth of graphics processing unit (GPU) processing capability, it has become more and more popular to use GPUs for non-graphics computation. NVIDIA announced a powerful GPU architecture called the Compute Unified Device Architecture (CUDA) in 2007, which provides massive data parallelism under the constraints of an SIMD architecture. We use an NVIDIA GTX-280 GPU system, which has 240 computing cores, as the platform to implement a very complicated video coding scheme. The Multiview Video Coding (MVC) scheme, an extension of H.264/AVC/MPEG-4 Part 10 (AVC), is being developed by the joint standardization team of the ITU-T Video Coding Experts Group and the ISO/IEC JTC 1 Moving Picture Experts Group (MPEG). It is an efficient video compression scheme; however, its computational complexity is very high. Two of its most time-consuming components are motion estimation (ME) and disparity estimation (DE). In this thesis, we propose a fast search algorithm, called multithreaded one-dimensional search (MODS), which can be used for both the ME and the DE operations. We implement the integer-pel ME and DE processes with MODS on the GTX-280 platform. The speedup over the CPU-only configuration can reach 89 times. Even when the fast search algorithm of the original JMVC is turned on, the MODS version on CUDA is still 21 times faster.
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Ching-Hsien, and 陳慶賢. "A FAST SEARCH METHOD FOR TEXT-INDEPENDENT SPEAKER IDENTIFICATION." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/67036281586403630746.

Full text
Abstract:
Master's thesis, Tatung Institute of Technology, Department of Electrical Engineering, academic year 84. The major problem in text-independent speaker identification is how to enhance the inter-speaker variance and decrease the intra-speaker variance. Once these problems are solved, we can develop a speaker identification system with a high identification rate. To achieve this goal, two things are needed: one is to find appropriate features of the speech signal, and the other is to define a good distance measure. In practice, a large amount of speech data is needed for both training and testing, especially when many speakers are enrolled, which results not only in lower accuracy but also in lower speed because of the computational complexity. Therefore, it is very important to reduce the computational complexity of a speaker identification system. This research developed a VQ-based speaker identification system and proposed some strategies to address the problems described above. First, to raise the identification rate, an appropriate distance measure is needed. We derived a new distance measure based on maximum likelihood estimation (MLE) together with the characteristics of the prewhitened cepstrum features of speech signals. This new distance measure normalizes distances as well as MLE does, but it is not as computationally complex as MLE. In the decision-making process, the accumulated distances of the enrolled speakers need to be compared on an equalized basis. To deal with that problem, we use a strategy called the Lateral Inhibition Gaussian (LIG) network, which enhances the inter-speaker variance and raises the identification rate. Secondly, to decrease the computational complexity, we build on a hierarchical identification scheme and make some improvements, so that test vectors only need to be measured against some homogeneous subset of speakers. This saves much of the computational effort of a full search over all enrolled speakers. The experimental results show that the strategies used in this research save a great deal of computation. With no influence on the identification rate, the computational complexity is reduced by about 50% compared to that of a full search. As for the identification rate, it reaches about 90% with 19 enrolled speakers.
APA, Harvard, Vancouver, ISO, and other styles
41

Lue, Chien Chih, and 呂建志. "A Fast Search Algorithm for Vector Quantization Codebook Generation." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/16664514047637526747.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Lo, Wei-En, and 羅偉恩. "Fast Binary Search Packet Classification Based On Decision Trees." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/19616735166327089505.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Computer Science and Information Engineering, academic year 96. With the rapid growth of Internet traffic and the emergence of new network applications, backbone routers nowadays need to classify packets into flows in a short time. In this thesis, we propose a new algorithm called fast binary search packet classification to improve efficiency in both search speed and memory storage. In our algorithm, we first partition the filter table with a decision tree but perform a binary search on the leaf nodes instead of the traditional tree traversal. We use a hash-based bit-selection strategy to replace the range-cutting method in order to reduce memory usage and the impact caused by an unbalanced environment. Our algorithm can be implemented in two schemes. The variable-selected-bit scheme chooses a different bit at each node and thus has better memory utilization. The fixed-selected-bit scheme chooses the same bit for every node at the same level in order to reduce the packet encoding time and thereby increase the search speed. Finally, we compare our algorithm with two well-known algorithms, HiCuts and HyperCuts. The results show that the variable-selected-bit scheme has better memory usage than the others on all kinds of filter tables, while the fixed-selected-bit scheme searches 20% faster than HiCuts on filter tables with a uniform distribution and more than 50% faster in an unbalanced environment.
APA, Harvard, Vancouver, ISO, and other styles
43

Chiang, Yen-Hwa, and 江彥樺. "Fast Search Algorithms for Motion Estimation in Video Coding." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/90762676497159516837.

Full text
Abstract:
Master's thesis, National Kaohsiung First University of Science and Technology, Institute of Computer and Communication Engineering, academic year 93. Due to the great development of and demand for video compression standards like MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and H.264, motion estimation remains an important part of video coding schemes. Its main goal is to find the motion vector and the similarity between two successive pictures so as to reduce the amount of redundant information in the temporal and spatial domains and achieve data compression. Hence, a good motion estimation algorithm strongly influences the video quality of a coding scheme. Another important issue with motion estimation is its high computational complexity, which makes it time-consuming in a coding scheme. For these reasons, a fast and efficient motion estimation algorithm is very important in video coding. Block-matching algorithms are used for motion estimation in video coding because of their speed and simplicity. Among block-matching algorithms, the full-search algorithm (FS) gives the best quality of reconstructed video; however, its high computational complexity and long running time make it unsuitable for real-time applications. Hence, many fast block-matching algorithms have been developed, such as the three-step search algorithm (TSS), new three-step search algorithm (NTSS), four-step search algorithm (4SS), block-based gradient descent search algorithm (BBGDS), diamond search algorithm (DS), hexagon-based search algorithm (HEXBS), and enhanced hexagon-based search algorithm (EHEXBS). These fast search algorithms try to provide an acceptable quality of reconstructed video while reducing the number of search points for each block as much as possible. In this thesis, we propose four new block-matching algorithms. The first method is treated as a base method; it is simple and provides very good quality of reconstructed video, even better than the diamond search algorithm recommended for MPEG-4. The other three methods are variations of the first that focus on reducing the number of search points. They improve search speed markedly by adding strategies that cut search points as much as possible; in the best case a block needs only one search point.
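The three-step search mentioned above is the simplest of these reference patterns: starting with a step of 4, evaluate the centre and its eight neighbours at that step, re-centre on the best, and halve the step until it reaches 1. A minimal sketch assuming a caller-supplied cost function cost(dx, dy) (an illustrative name); it is one of the reference algorithms listed in the abstract, not one of the thesis's proposed methods.

```python
def three_step_search(cost, p=7):
    """Classic three-step search over a +/-p window; returns ((dx, dy), cost)."""
    cx, cy = 0, 0
    best = cost(cx, cy)
    step = 4                      # halved each round: 4, 2, 1 (suits p = 7)
    while step >= 1:
        best_pos = (cx, cy)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                nx, ny = cx + dx, cy + dy
                if (nx, ny) == (cx, cy) or abs(nx) > p or abs(ny) > p:
                    continue
                c = cost(nx, ny)
                if c < best:
                    best, best_pos = c, (nx, ny)
        cx, cy = best_pos
        step //= 2
    return (cx, cy), best
```

With a SAD-style cost such as the full-search helper sketched earlier, cost = lambda dx, dy: sad(cur, ref, bx, by, dx, dy) ties the two together; the three rounds visit at most 25 positions instead of 225.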
APA, Harvard, Vancouver, ISO, and other styles
44

Chang, Ming-Ching, and 張明清. "Fast Search Algorithms for IC Printed Mark Quality Inspection." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/42532050550149009258.

Full text
Abstract:
Master's thesis, National Taiwan University, Department of Computer Science and Information Engineering, academic year 86. This thesis presents an effective, general-purpose search algorithm for alignment and applies it to IC printed-mark quality inspection. The search procedure is based on normalized cross correlation and is improved with a hierarchical resolution pyramid, dynamic programming, subpixel accuracy, multiple-target search, and automatic model selection; the resulting method can be applied to general visual inspection. An IC printed mark consists of a logo pattern and characters, and because of alignment errors in the inspection machine the mark may be rotated or translated. The main printing defects of an IC mark include distortion, missing ink, wrong position, double printing, smearing, poor contrast (over the whole mark or individual characters), misprints, and wrong orientation. Inspection accuracy, speed, reliability, and repeatability are all important industrial requirements. The inspection algorithm uses digital image processing and computer vision techniques, including image binarization, projection, image differencing, normalized cross correlation, and mathematical morphology. Teaching and inspection functions were developed, and the system was optimized and tested on an IC inspection machine. The algorithm achieves high accuracy, reliability, and repeatability at high speed, meeting industrial requirements, and works well in field tests on various IC products.
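
A minimal sketch of the normalized cross correlation that drives the alignment search is given below; the array shapes and the brute-force scan are assumptions for illustration, while a real system would add the pyramid, subpixel fitting, and other refinements described in the abstract.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide the template over the image and return the NCC score map.
    Scores lie in [-1, 1]; the peak marks the best alignment."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    scores = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            w_norm = np.sqrt((w * w).sum())
            if w_norm > 0 and t_norm > 0:
                scores[y, x] = (w * t).sum() / (w_norm * t_norm)
    return scores

# Toy usage: cut a 16 x 16 patch out of a random image and find it again.
img = np.random.rand(64, 64)
tmpl = img[20:36, 30:46].copy()
scores = normalized_cross_correlation(img, tmpl)
print(np.unravel_index(scores.argmax(), scores.shape))   # -> (20, 30)
```
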
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Ho-Shun, and 陳河順. "A Fast Search Method for Table-based Sphere Decoding." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/89535828362536667778.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electrical Engineering, academic year 101. This thesis proposes three enumeration methods based on the concentric property of QAM modulation and on tabular enumeration. The first is a complexity-reduced enumeration that separates the constellation points into several concentric circles; candidate nodes are enumerated from the inner to the outer circles according to the decision rules. The second method extends tabular enumeration by expanding the enumeration tables with candidate sets to enlarge the selection range. For the implementation, the proposed tables are further simplified, saving more than 50% of the memory units compared with direct storage. Finally, the selection approach of the first method is combined with the expanded tables to form a fast search method for the depth-first sphere decoder. Its performance is comparable to that of the ML detector, and it can be realized with the proposed low-complexity architecture.
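
The concentric-circle grouping can be sketched for 16-QAM as follows; the simple distance-based ordering inside each ring is a placeholder for the thesis's decision rules, and the candidate limit is an assumed parameter.

```python
from itertools import product

def qam16_rings():
    """Group the 16-QAM constellation into concentric circles by energy."""
    points = [complex(i, q) for i, q in product((-3, -1, 1, 3), repeat=2)]
    rings = {}
    for p in points:
        rings.setdefault(round(abs(p) ** 2), []).append(p)
    # Energies 2, 10, 18 correspond to the inner, middle, and outer rings.
    return [rings[e] for e in sorted(rings)]

def enumerate_candidates(received, max_candidates=4):
    """Enumerate candidate symbols ring by ring, nearest to 'received' first
    (a simplified stand-in for the thesis's decision rules)."""
    order = []
    for ring in qam16_rings():
        order.extend(sorted(ring, key=lambda p: abs(p - received)))
    return order[:max_candidates]

print(enumerate_candidates(1.2 - 0.7j))
```
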
APA, Harvard, Vancouver, ISO, and other styles
46

CHIEH, CHUNG MING, and 鍾明潔. "An improvement of fast search algorithm for vector quantization." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/64771750684446335732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Tsai, Chung-Wei, and 蔡鐘葳. "A Hexagon-Based Fast Search Algorithm for Motion Estimation." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/28355244015462725269.

Full text
Abstract:
Master's thesis, Southern Taiwan University of Science and Technology, Department of Electronic Engineering, academic year 98. In video compression, motion estimation plays a key role, and reducing its computational complexity is essential. Different search algorithms for motion estimation have different impacts on performance; an efficient algorithm saves both computing time and search points. This thesis proposes a rapid search algorithm for motion estimation based on the principle of the hexagon-based search algorithm (HEXBS), called the hexagon-based fast search algorithm (HEXFS). The proposed algorithm combines the hexagon search with a two-step search that uses two cross search patterns to reduce the heavy computational load. Experimental results show that the proposed algorithm reduces the number of search points by a higher ratio than other conventional algorithms that have been proposed.
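
For reference, a minimal sketch of the standard hexagon-based search pattern on which HEXFS builds (not the modified algorithm itself) is shown below; the cost callback stands in for a block-distortion measure such as SAD.

```python
# Large hexagon and small refinement patterns of the standard hexagon-based search.
LARGE_HEX = [(2, 0), (-2, 0), (1, 2), (-1, 2), (1, -2), (-1, -2)]
SMALL_PAT = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def hexagon_search(cost, start=(0, 0)):
    """Generic hexagon-based search; 'cost' maps a candidate motion vector
    (dx, dy) to its block distortion (e.g. SAD against the reference frame)."""
    cx, cy = start
    best = cost((cx, cy))
    while True:
        # Evaluate the six large-hexagon points around the current centre.
        c, (nx, ny) = min((cost((cx + dx, cy + dy)), (cx + dx, cy + dy))
                          for dx, dy in LARGE_HEX)
        if c < best:
            best, cx, cy = c, nx, ny       # re-centre the hexagon and repeat
        else:
            break
    # Final refinement with the small cross pattern.
    c, (nx, ny) = min((cost((cx + dx, cy + dy)), (cx + dx, cy + dy))
                      for dx, dy in SMALL_PAT)
    if c < best:
        best, cx, cy = c, nx, ny
    return (cx, cy), best

# Toy example with a convex cost whose minimum lies at (3, -2).
print(hexagon_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2))
```
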
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Che-Wei, and 李哲瑋. "Double-layered Initial Search Pattern for Fast Motion Estimation." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/94919139335258025914.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Electrical Engineering, academic year 93. Multimedia communication relies on data compression to reduce the amount of transmitted data and increase transmission speed. Motion estimation is vital to many motion-compensated video coding techniques and standards, such as ISO MPEG-1/2/4 and ITU-T H.261/262/263/264. In block motion estimation, the shape and size of the search pattern have a major impact on performance, namely the speed of finding motion vectors and the visual quality of the predicted results. In recent years many computationally efficient fast search algorithms have been developed, typified by the three-step search (3SS) in 1994, the diamond search (DS) in 2000, the hexagon-based search (HEXBS) in 2002, and the efficient three-step search (E3SS) in 2004. This thesis proposes a pair of simple, robust, and efficient fast block-matching motion estimation algorithms called double-layered initial search patterns (DLISP). Simulation experiments demonstrate that the proposed DLISP algorithm greatly outperforms the well-known HEXBS algorithm and achieves MSE performance similar to that of E3SS while reducing its computation by up to approximately 22%. Compared with other recently proposed block-matching algorithms, the DLISP algorithms perform better on average in terms of MSE, reconstructed image quality, and average number of search points.
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Wei Yi, and 王偉一. "A Fast Local Search Algorithm for Virtual Network Embedding." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/66124207234591887052.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Institute of Communications Engineering, academic year 104. Network virtualization is a popular approach to providing next-generation Internet services. It virtualizes the resources managed by the Infrastructure Provider (InP) and the demands made by the Service Provider (SP), making resource allocation and user isolation clearer concepts. Inspired by the pricing problem, the price of virtual requests is placed in the objective function, and the focus is on an algorithm that solves virtual network embedding (VNE) much faster than exact solutions. The thesis proposes the Permutation Swap Method (PSM) to find a locally optimal solution in reasonable computation time (a few seconds). PSM represents a network mapping as a node permutation and iteratively swaps two positions in the permutation to obtain a lower objective value until it reaches a local minimum. Four different algorithms are applied within PSM: Best Fit with Greedy Selection (BF-GS), Best Fit with Random Selection (BF-RS), Mixed Random Fit with Greedy Selection (MRF-GS), and Mixed Random Fit with Random Selection (MRF-RS). Experiments compare their performance and efficiency on three data-center networks (Fat-Tree, BCube, and VL2) and one inter-data-center network (Cogent). The results show that the algorithm without a random factor performs worst, and that the performance gain from greedy selection is smaller than that from the mixed random fit solution. Taking both performance and efficiency into account, the MRF-RS method is the best algorithm.
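
The swap-based local search skeleton of PSM can be sketched as follows; the cost function, the exhaustive pair scan, and the toy price matrix are illustrative assumptions rather than the thesis's BF/MRF selection rules.

```python
import random

def permutation_swap_search(cost, n, max_rounds=1000, seed=0):
    """Generic swap-based local search over a node permutation.
    'cost' maps a permutation (tuple of substrate-node indices) to an
    objective value; lower is better."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)                      # random initial mapping
    best = cost(tuple(perm))
    for _ in range(max_rounds):
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]
                c = cost(tuple(perm))
                if c < best:
                    best, improved = c, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]   # undo the swap
        if not improved:
            break                           # local minimum reached
    return perm, best

# Toy example: map 5 virtual nodes onto 5 substrate nodes so that the
# assumed per-assignment price is minimised.
price = [[4, 2, 7, 1, 9], [3, 8, 2, 6, 5], [9, 1, 4, 3, 2],
         [5, 6, 1, 8, 4], [2, 7, 3, 5, 6]]
perm, best = permutation_swap_search(
    lambda p: sum(price[v][p[v]] for v in range(5)), 5)
print(perm, best)
```
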
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Ying-Chih, and 王穎智. "Fast Fractional Pixel Search Algorithm in H.264/AVC." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/35548902198281173606.

Full text
Abstract:
Master's thesis, National Central University, Institute of Communication Engineering, academic year 95. H.264/AVC is a video compression standard in which quarter-pixel motion compensation achieves a more accurate description of motion, but this accuracy requires more time to find the best match. Reducing the computational complexity of the fractional-pixel search is therefore necessary and significant. This thesis proposes a fast fractional-pixel search algorithm that exploits the symmetric, convex (cup-shaped) error surface at half- and quarter-pixel resolution. In both cases the algorithm reduces the computational complexity by roughly 65%-78% compared with the reference software. Experimental results show that the proposed algorithm preserves almost the same rate-distortion curve as the original full search (FS).
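
One way to exploit a symmetric, convex error surface is to walk downhill at each fractional precision instead of testing every candidate, as in the minimal sketch below; this is an assumed simplification, not the thesis's algorithm, and the cost callback is a placeholder for interpolation plus SAD in a real encoder.

```python
def fractional_refine(cost, start=(0.0, 0.0)):
    """Refine a motion vector to quarter-pixel accuracy assuming the error
    surface is convex (cup shaped): at each precision, move to the best of
    the four axis neighbours as long as it improves the cost."""
    mx, my = start
    best = cost((mx, my))
    for step in (0.5, 0.25):               # half-pel, then quarter-pel
        while True:
            moves = [(mx + step, my), (mx - step, my),
                     (mx, my + step), (mx, my - step)]
            c, m = min((cost(m), m) for m in moves)
            if c < best:                    # convexity: keep moving downhill
                best, (mx, my) = c, m
            else:
                break
    return (mx, my), best

# Toy convex cost with minimum near (0.75, -0.25).
print(fractional_refine(lambda v: (v[0] - 0.75) ** 2 + (v[1] + 0.25) ** 2))
```
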
APA, Harvard, Vancouver, ISO, and other styles