Academic literature on the topic 'Prefix search'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Prefix search.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Prefix search"

1

Pelc, Andrzej. "Prefix search with a lie." Journal of Combinatorial Theory, Series A 48, no. 2 (1988): 165–73. http://dx.doi.org/10.1016/0097-3165(88)90003-9.

2

Ferragina, Paolo. "On the weak prefix-search problem." Theoretical Computer Science 483 (April 2013): 75–84. http://dx.doi.org/10.1016/j.tcs.2012.06.011.

3

KIM, KUN SUK, and SARTAJ SAHNI. "IP LOOKUP BY BINARY SEARCH ON PREFIX LENGTH." Journal of Interconnection Networks 03, no. 03n04 (2002): 105–28. http://dx.doi.org/10.1142/s0219265902000586.

Abstract:
Waldvogel et al. [9] have proposed a collection of hash tables (CHT) organization for an IP router table. Each hash table in the CHT contains prefixes of the same length together with markers for longer-length prefixes. IP lookup can be done with O(log l_dist) hash-table searches, where l_dist is the number of distinct prefix lengths (also equal to the number of hash tables in the CHT). Srinivasan and Varghese [8] have proposed the use of controlled prefix-expansion to reduce the value of l_dist. The details of their algorithm to reduce the number of lengths are given in [7]. The complexity of this algorithm is O(nW²), where n is the number of prefixes, and W is the length of the longest prefix. The algorithm of [7] does not minimize the storage required by the prefixes and markers for the resulting set of prefixes. We develop an algorithm that minimizes storage requirement but takes O(nW³ + kW⁴) time, where k is the desired number of distinct lengths. Also, we propose improvements to the heuristic of [7].
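To make the baseline scheme concrete, here is a minimal Python sketch of binary search over distinct prefix lengths with one hash table per length and markers on the search path, in the spirit of the Waldvogel-style organization described above. It is illustrative only and is not the paper's storage-minimizing prefix-expansion algorithm; rule tables and hop names are made up.

```python
# One hash table per distinct prefix length; markers steer the binary search
# towards longer lengths and carry the best matching shorter prefix (BMP).

def build_tables(prefixes):
    """prefixes: dict mapping a bit-string prefix (e.g. '1011') to a next hop."""
    lengths = sorted({len(p) for p in prefixes})
    tables = {l: {} for l in lengths}
    for p, hop in prefixes.items():
        tables[len(p)][p] = ("prefix", hop)

    def best_match(s):
        # Longest real prefix of s in the rule set (the marker's BMP).
        for l in range(len(s), 0, -1):
            if s[:l] in prefixes:
                return prefixes[s[:l]]
        return None

    # Insert markers along the binary-search path that leads to each prefix.
    for p in prefixes:
        lo, hi = 0, len(lengths) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            l = lengths[mid]
            if l < len(p):
                tables[l].setdefault(p[:l], ("marker", best_match(p[:l])))
                lo = mid + 1
            elif l > len(p):
                hi = mid - 1
            else:
                break
    return lengths, tables

def lookup(addr_bits, lengths, tables):
    """Longest-prefix match with O(log l_dist) hash-table probes."""
    best, lo, hi = None, 0, len(lengths) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        l = lengths[mid]
        entry = tables[l].get(addr_bits[:l])
        if entry is None:
            hi = mid - 1                 # miss: only shorter lengths can match
        else:
            kind, hop = entry
            if hop is not None:
                best = hop               # real prefix, or a marker's best match
            lo = mid + 1                 # hit: a longer prefix may still exist
    return best

if __name__ == "__main__":
    rules = {"1": "A", "000": "B", "11111": "C"}
    lengths, tables = build_tables(rules)
    print(lookup("11100000", lengths, tables))   # -> "A"
    print(lookup("11111000", lengths, tables))   # -> "C"
```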
4

SHEERAN, MARY. "Functional and dynamic programming in the design of parallel prefix networks." Journal of Functional Programming 21, no. 1 (2010): 59–114. http://dx.doi.org/10.1017/s0956796810000304.

Abstract:
A parallel prefix network of width n takes n inputs, a_1, a_2, …, a_n, and computes each y_i = a_1 ○ a_2 ○ ⋯ ○ a_i for 1 ≤ i ≤ n, for an associative operator ○. This is one of the fundamental problems in computer science, because it gives insight into how parallel computation can be used to solve an apparently sequential problem. As parallel programming becomes the dominant programming paradigm, parallel prefix or scan is proving to be a very important building block of parallel algorithms and applications. There are many different parallel prefix networks, with different properties such as number of operators, depth and allowed fanout from the operators. In this paper, ideas from functional programming are combined with search to enable a deep exploration of parallel prefix network design. Networks that improve on the best known previous results are generated. It is argued that precise modelling in a functional programming language, together with simple visualization of the networks, gives a new, more experimental, approach to parallel prefix network design, improving on the manual techniques typically employed in the literature. The programming idiom that marries search with higher order functions may well have wider application than the network generation described here.
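For readers unfamiliar with the prefix computation itself, the following small Python sketch shows the sequential definition of y_i and a simple divide-and-conquer (Sklansky-style) parallel prefix network. It is a software illustration under assumed inputs, not the paper's search-generated networks.

```python
# Prefix computation y_i = a_1 ∘ a_2 ∘ ... ∘ a_i for an associative operator:
# a straightforward sequential scan, and a Sklansky-style recursive network.

def scan_sequential(xs, op):
    out, acc = [], None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out

def scan_sklansky(xs, op):
    """Split, recurse on both halves, then combine the whole left total into
    every element of the right half (minimum depth, fanout-heavy)."""
    n = len(xs)
    if n <= 1:
        return list(xs)
    mid = n // 2
    left = scan_sklansky(xs[:mid], op)
    right = scan_sklansky(xs[mid:], op)
    total = left[-1]
    return left + [op(total, r) for r in right]

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5, 9, 2, 6]
    add = lambda a, b: a + b
    assert scan_sklansky(data, add) == scan_sequential(data, add)
    print(scan_sklansky(data, add))   # running prefix sums
```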
5

Ju Hyoung Mun, Hyesook Lim, and Changhoon Yim. "Binary search on prefix lengths for ip address lookup." IEEE Communications Letters 10, no. 6 (2006): 492–94. http://dx.doi.org/10.1109/lcomm.2006.1638626.

6

Wuu, Lih-Chyau, Tzong-Jye Liu, and Kuo-Ming Chen. "A longest prefix first search tree for IP lookup." Computer Networks 51, no. 12 (2007): 3354–67. http://dx.doi.org/10.1016/j.comnet.2007.01.023.

7

Lakshmanna, Kuruva, and Neelu Khare. "Mining DNA Sequence Patterns with Constraints Using Hybridization of Firefly and Group Search Optimization." Journal of Intelligent Systems 27, no. 3 (2018): 349–62. http://dx.doi.org/10.1515/jisys-2016-0111.

Abstract:
DNA sequence mining is essential in the study of the structure and function of the DNA sequence. A few studies in the literature have addressed sequence mining as a data mining task. In our earlier work, effective sequence mining was performed on a DNA database using constraint measures and group search optimization (GSO); the GSO algorithm was used to optimize the sequence extraction process from a given DNA database. However, such a randomized search does not always reach the optimal solution in the available time. To overcome this problem, this work proposes multiple constraints with a hybrid firefly and GSO (HFGSO) algorithm. The complete DNA sequence mining process comprises three modules: (i) applying the prefix span algorithm; (ii) calculating the length, width, and regular expression (RE) constraints; and (iii) optimal mining via HFGSO. First, we apply the concept of prefix span, which detects frequent DNA sequence patterns using a prefix tree. Based on this prefix tree, length, width, and RE constraints are applied to handle restrictions. Finally, we adopt the HFGSO algorithm for the completeness of the mining result. The experiments are carried out on a standard DNA sequence dataset, and the results show that our approach outperforms the existing approach.
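As a minimal illustration of the prefix-tree structure such constraints are applied to, here is a small Python sketch that counts DNA k-mer prefixes in a trie and reports those meeting a minimum-support constraint. It does not implement PrefixSpan or the HFGSO optimizer from the paper; the sequences and parameters are invented.

```python
# A tiny prefix tree (trie) over DNA k-mers; length/support constraints can
# then be applied on top of it.  Sketch only, not the paper's pipeline.

from collections import defaultdict

def make_node():
    return {"count": 0, "children": defaultdict(make_node)}

def build_prefix_tree(sequences, k):
    root = make_node()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            node = root
            for base in seq[i:i + k]:        # walk/extend the path for this k-mer
                node = node["children"][base]
                node["count"] += 1           # every prefix of the k-mer is counted
    return root

def frequent_prefixes(node, min_count, path=""):
    """Yield all prefixes whose support meets the minimum-count constraint."""
    for base, child in node["children"].items():
        if child["count"] >= min_count:
            yield path + base, child["count"]
            yield from frequent_prefixes(child, min_count, path + base)

if __name__ == "__main__":
    seqs = ["ACGTACGT", "ACGGACGT", "TTACGTAA"]
    tree = build_prefix_tree(seqs, k=4)
    print(sorted(frequent_prefixes(tree, min_count=3)))
```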
8

Manikandan, P., Bjørn B. Larsen, and Einar J. Aas. "Design of embedded TCAM based longest prefix match search engine." Microprocessors and Microsystems 35, no. 8 (2011): 659–67. http://dx.doi.org/10.1016/j.micpro.2011.08.002.

9

ZAITSU, Kazuya, Koji YAMAMOTO, Yasuto KURODA, Kazunari INOUE, Shingo ATA, and Ikuo OKA. "FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine." IEICE Transactions on Communications E95.B, no. 7 (2012): 2306–14. http://dx.doi.org/10.1587/transcom.e95.b.2306.

10

Nishi, Manziba Akanda, and Kostadin Damevski. "Scalable code clone detection and search based on adaptive prefix filtering." Journal of Systems and Software 137 (March 2018): 130–42. http://dx.doi.org/10.1016/j.jss.2017.11.039.


Dissertations / Theses on the topic "Prefix search"

1

Sedlář, František. "Algoritmy pro vyhledání nejdelšího shodného prefixu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236363.

Abstract:
This master's thesis explains basics of the longest prefix match (LPM) problem. It analyzes and describes chosen LPM algorithms considering their speed, memory requirements and an ability to implement them in hardware. On the basis of former findings it proposes a new algorithm Generic Hash Tree Bitmap. It is much faster than many other approaches, while its memory requirements are even lower. An implementation of the proposed algorithm has become a part of the Netbench library.
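For context, this is the problem the thesis accelerates: longest prefix match (LPM). The sketch below shows the straightforward binary-trie baseline in Python, not the thesis's Generic Hash Tree Bitmap; prefixes and next hops are made up.

```python
# Baseline longest-prefix match (LPM) with a binary trie over bit strings.

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]   # 0-bit and 1-bit branches
        self.next_hop = None           # set if a prefix ends here

def insert(root, prefix_bits, next_hop):
    node = root
    for b in prefix_bits:
        i = int(b)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.next_hop = next_hop

def longest_prefix_match(root, addr_bits):
    node, best = root, None
    for b in addr_bits:
        node = node.children[int(b)]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop       # remember the deepest matching prefix
    return best

if __name__ == "__main__":
    root = TrieNode()
    insert(root, "10", "hop-A")
    insert(root, "1011", "hop-B")
    print(longest_prefix_match(root, "10110000"))   # -> hop-B
    print(longest_prefix_match(root, "10000000"))   # -> hop-A
```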
2

Duda, Robson Fernando. "APLICAÇÃO DE HEURÍSTICAS E META-HEURÍSTICAS NO DESENVOLVIMENTO DE UM SISTEMA DE APOIO A DECISÃO PARA RESOLUÇÃO DE PROBLEMAS DE ROTEAMENTO DE VEÍCULOS APLICADOS À AGRICULTURA." UNIVERSIDADE ESTADUAL DE PONTA GROSSA, 2014. http://tede2.uepg.br/jspui/handle/prefix/169.

Abstract:
This work presents a solution to the vehicle routing problem with a homogeneous fleet. Heuristic and metaheuristic algorithms were developed and applied in a decision support system with a georeferenced interface. The algorithms were based on constructive and two-phase heuristic methods, in addition to a metaheuristic. The visualization layer of the interface is based on cartographic data indicating the location of the points to be served and the roads connecting them, forming a road network rendered with the Google Maps® API. The algorithms were validated on instances from the literature, yielding satisfactory optimization results and showing that the developed system can be used for the distribution of agricultural products.
3

Maršálek, Tomáš. "Návrh vyhledávacího systému pro moderní potřeby." Master's thesis, Vysoká škola ekonomická v Praze, 2016. http://www.nusl.cz/ntk/nusl-262227.

Abstract:
In this work I argue that the field of text search has focused mostly on long documents, while there is a growing need for efficient short-text search, which comes with different user expectations. Because the data sets are smaller, different algorithmic techniques become computationally affordable. The focus of this work is on approximate and prefix search and on purely text-based ranking methods, which are needed because text statistics are less reliable on short texts. A basic prototype search engine has been created using the researched techniques. Its capabilities were demonstrated on example search scenarios, and the implementation was compared to two other open-source systems representing currently recommended approaches to the short-text search problem. The results show the feasibility of the implemented prototype with respect to both user expectations and performance. Several options for the future direction of the system are proposed.
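A minimal sketch of the core prefix-query primitive a short-text engine needs: all terms sharing a prefix occupy one contiguous slice of a sorted vocabulary, so two binary searches suffice. This is an illustration under assumed data, not the thesis prototype.

```python
# Prefix search over a sorted vocabulary via binary search (bisect).

import bisect

def prefix_range(sorted_terms, prefix):
    lo = bisect.bisect_left(sorted_terms, prefix)
    # "\uffff" sorts after any ordinary character used in these terms, so it
    # marks the end of the block of terms sharing the prefix.
    hi = bisect.bisect_right(sorted_terms, prefix + "\uffff")
    return sorted_terms[lo:hi]

if __name__ == "__main__":
    terms = sorted(["prefix", "prefetch", "preference", "search", "prefix tree"])
    print(prefix_range(terms, "pref"))
    # -> ['prefetch', 'preference', 'prefix', 'prefix tree']
```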
4

Ngom, Bassirou. "FreeCore : un système d'indexation de résumés de document sur une Table de Hachage Distribuée (DHT)." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS180/document.

Abstract:
This thesis examines the problem of indexing and searching in Distributed Hash Tables (DHTs). It provides a distributed system for storing document summaries based on their content. Concretely, the thesis uses Bloom filters (BFs) to represent document summaries and proposes an efficient method for inserting and retrieving documents represented by BFs in an index distributed over a DHT. Content-based storage has a dual advantage: it groups similar documents together so they can be found and retrieved more quickly, and it allows keyword searches expressed as Bloom filters. However, processing a keyword query represented by a Bloom filter is a difficult operation and requires a mechanism to locate the Bloom filters that represent documents stored in the DHT.
The thesis therefore proposes two Bloom filter index schemes distributed over DHTs. The first index system combines the principles of content-based indexing and inverted lists and addresses the large amount of data stored by content-based indexes: by using long Bloom filters, this solution stores documents on a larger number of servers and indexes them using less space. The second index system efficiently supports superset (keyword) query processing using a prefix tree. This solution exploits the distribution of the data and proposes a configurable distribution function that indexes documents in a balanced binary tree, so that documents are spread efficiently over the indexing servers. As a third solution, the thesis proposes an efficient method for locating documents that contain a given set of keywords. Compared to solutions in the same category, it performs superset searches at lower cost and constitutes a solid foundation for superset query processing on DHT-based index systems. Finally, the thesis presents a prototype of a peer-to-peer system for content indexing and keyword search. This prototype, ready to be deployed in a real environment, was evaluated in the PeerSim simulation environment, which made it possible to measure the theoretical performance of the algorithms developed throughout the thesis.
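To make the document-summary idea concrete, here is a minimal Bloom filter in Python and a superset test expressed as bitwise containment. It is only a sketch of the general technique, not the FreeCore index; sizes, hash choices and keywords are assumptions.

```python
# Minimal Bloom filter: a document's keyword set becomes a fixed-length bit
# array; a query filter matches a document filter if all query bits are set
# in the document's bits (false positives possible, false negatives not).

import hashlib

def bloom(keywords, m=256, k=3):
    bits = 0
    for word in keywords:
        for i in range(k):
            h = hashlib.sha256(f"{i}:{word}".encode()).hexdigest()
            bits |= 1 << (int(h, 16) % m)
    return bits

def may_contain_all(doc_bits, query_bits):
    """Superset test: every bit of the query is present in the document."""
    return query_bits & ~doc_bits == 0

if __name__ == "__main__":
    doc = bloom({"prefix", "search", "dht", "bloom"})
    print(may_contain_all(doc, bloom({"prefix", "dht"})))   # True
    print(may_contain_all(doc, bloom({"suffix"})))          # almost surely False
```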
5

Machado, Lennon de Almeida. "Busca indexada de padrões em textos comprimidos." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-09062010-222653/.

Abstract:
Pattern matching over a large document collection is a very common problem nowadays, as the growing use of search engines reveals. For searches to run in time independent of the collection size, the collection must be indexed only once. The index size is typically linear in the size of the document collection. Data compression is another powerful tool for managing the ever-growing size of the document collection. The objective of this work is to combine indexed search with data compression, examining alternatives to current solutions and seeking improvements in search time and memory usage. The analysis of index structures and compression algorithms indicates that combining block inverted files with word-based Huffman compression is an attractive solution because it provides random access and compressed search. New prefix-free codes are proposed in this work to improve compression and to generate self-synchronized codes, i.e., with truly viable random access. The advantage of these new codes is that the proposed mappings eliminate the need to build the Huffman code tree, which translates into memory savings, more compact encoding and shorter processing time. The results show reductions of 7% and 9% in compressed file size, with better compression and decompression times and lower memory consumption.
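For readers unfamiliar with prefix-free codes, the sketch below builds a classic word-based Huffman code in Python and checks the prefix-free property (no codeword is a prefix of another, so decoding needs no separators). This is the standard baseline construction, not the thesis's new codings; the sample text is arbitrary.

```python
# Word-based Huffman coding: merge the two least frequent groups repeatedly,
# prepending a '0' bit to one group's codes and a '1' bit to the other's.

import heapq
from collections import Counter

def huffman_code(text):
    freq = Counter(text.split())
    if len(freq) == 1:                        # degenerate single-word case
        return {next(iter(freq)): "0"}
    # (count, tie-breaker, partial code table); the tie-breaker keeps tuple
    # comparison away from the dicts.
    heap = [(n, i, {w: ""}) for i, (w, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {w: "0" + code for w, code in c1.items()}
        merged.update({w: "1" + code for w, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

if __name__ == "__main__":
    code = huffman_code("to be or not to be that is the question")
    assert all(not a.startswith(b) for a in code.values()
               for b in code.values() if a != b)   # prefix-free property
    print(code)
```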
6

Campolmi, Alessia. "Essays on open economic, inflation and labour markets." Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7367.

Abstract:
In recent years a growing literature has developed DSGE open economy models with market imperfections and nominal rigidities, the so-called "New Open Economy Macroeconomics". Within this class of models, the first chapter analyses whether the monetary authority should target Consumer Price Index (CPI) inflation or domestic inflation. It is shown that introducing monopolistic competition in the labour market and nominal wage rigidities rationalises CPI inflation targeting. In the second chapter we introduce matching and search frictions in the labour market and relate different labour market structures across European countries to differences in the volatility of inflation across those countries. In the last chapter we use a two-country model with oil in the production function and price and wage rigidities to relate movements in wage and price inflation, real wages and GDP growth to oil price changes.
7

Raciborski, Rafal. "Topics in macroeconomics and finance." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209211.

Abstract:
The thesis consists of four chapters. The introductory chapter clarifies different notions of rationality used by economists and gives a summary of the remainder of the thesis. Chapter 2 proposes an explanation for the common empirical observation of the coexistence of infrequently-changing regular price ceilings and promotion-like price patterns. The results derive from enriching an otherwise standard, albeit stylized, general equilibrium model with two elements. First, the consumer-producer interaction is modeled in the spirit of the price dispersion literature, by introducing oligopolistic markets, consumer search costs and heterogeneity. Second, consumers are assumed to be boundedly-rational: In order to incorporate new information about the general price level, they have to incur a small cognitive cost. The decision whether to re-optimize or act according to the obsolete knowledge about prices is itself a result of optimization. It is shown that in this economy, individual retail prices are capped below the monopoly price, but are otherwise flexible. Moreover, they have the following three properties: 1) An individual price has a positive probability of being equal to the ceiling. 2) Prices have a tendency to fall below the ceiling and then be reset back to the cap value. 3) The ceiling remains constant for extended time intervals even when the mean rate of inflation is positive. Properties 1) and 2) can be associated with promotions and properties 1) and 3) imply the emergence of nominal price rigidity. The results do not rely on any type of direct costs of price adjustment. Instead, price stickiness derives from frictions on the consumers' side of the market, in line with the results of several managerial surveys. It is shown that the developed theory, compared to the classic menu costs-based approach, does better in matching the stylized facts about the reaction of individual prices to inflation. In terms of quantitative assessment, the model, when calibrated to realistic parameter values, produces median price ceiling durations that match values reported in empirical studies.

The starting point of the essay in Chapter 3 is the observation that the baseline New-Keynesian model, which relies solely on the notion of infrequent price adjustment, cannot account for the observed degree of inflation sluggishness. Therefore, it is a common practice among macro-modelers to introduce an ad hoc additional source of persistence to their models, by assuming that price setters, when adjusting a price of their product, do not set it equal to its unobserved individual optimal level, but instead catch up with the optimal price only gradually. In the paper, a model of incomplete adjustment is built which allows for explicitly testing whether price-setters adjust to the shocks to the unobserved optimal price only gradually and, if so, measure the speed of the catching up process. According to the author, a similar test has not been performed before. It is found that new prices do not generally match their estimated optimal level. However, only in some sectors, e.g. for some industrial goods and services, prices adjust to this level gradually, which should add to the aggregate inflation sluggishness. In other sectors, particularly food, price-setters seem to overreact to shocks, with new prices overshooting the optimal level. These sectors are likely to contribute to decreasing the aggregate inflation sluggishness. Overall, these findings are consistent with the view that price-setters are boundedly-rational. However, they do not provide clear-cut support for the existence of an additional source of inflation persistence due to gradual individual price adjustment. Instead, they suggest that general equilibrium macroeconomic models may need to include at least two types of production sectors, characterized by a contrasting behavior of price-setters. An additional finding stemming from this work is that the idiosyncratic component of the optimal individual price is well approximated by a random walk. This is in line with the assumptions maintained in most of the theoretical literature.

Chapter 4 of the thesis has been co-authored by Julia Lendvai. In this paper a full-fledged production economy model with Kahneman and Tversky's Prospect Theory features is constructed. The agents' objective function is assumed to be a weighted sum of the usual utility over consumption and leisure and the utility over relative changes of agents' wealth. It is also assumed that agents are loss-averse: They are more sensitive to wealth losses than to gains. Apart from the changes in the utility, the model is set-up in a standard Real Business Cycle framework. The authors study prices of stocks and risk-free bonds in this economy. Their work shows that under plausible parameterizations of the objective function, the model is able to explain a wide set of unconditional asset return moments, including the mean return on risk-free bonds, equity premium and the Sharpe Ratio. When the degree of loss aversion in the model is additionally assumed to be state-dependent, the model also produces countercyclical risk premia. This helps it match an array of conditional moments and in particular the predictability pattern of stock returns.
8

Wang, Chih-Hsun, and 王之洵. "Multi-layered Binary Prefix Search for Packet Classification." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/zv3g3r.

Abstract:
Master's thesis, National Cheng Kung University, Department of Computer Science and Information Engineering. Packet classification is a key function of Internet routers for many network applications, such as Quality of Service (QoS), security, monitoring, analysis, and network intrusion detection (NIDS). The existing well-known decision-tree-based schemes, such as HiCuts, HyperCuts and EffiCuts, suffer from a memory explosion problem caused by rule duplication, and their number of memory accesses is tied to the height of the decision tree they build. In this thesis, we propose a scheme called binary search on buckets (BSOB) for the multi-dimensional packet classification problem. BSOB performs binary searches on the IDs (prefixes) of the ordered leaf nodes of the decision tree. Each tree node is given a pointer to its corresponding ancestor node based on their prefixes, so that when the search reaches a certain node, this fast link can be used to jump directly to the related ancestor node. Hence, BSOB improves on the existing well-known decision-tree-based schemes, which must traverse the decision tree from the root to a leaf. To solve the memory explosion problem of decision-tree-based schemes, we divide the original rule table into multiple groups and build a BSOB structure for each group. Moreover, we use a node-retention technique that keeps a rule causing duplication in an internal node, which reduces the rule duplication problem. For 12 types of classifiers with 100,000 rules generated by ClassBench, our experimental results in a PC software environment show that BSOB uses 2.5-2.8 MB of memory and needs 8-15 memory accesses in the average case and 21-51 in the worst case, which is much better than the existing well-known decision-tree-based schemes. We also report update results for BSOB: insertion takes about 1-3 times as long as search, and deletion is close to search speed. Thanks to its low memory usage and high classification speed, BSOB can address packet classification in SDN environments, which involve more header fields and larger rule sets.
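The core idea of binary-searching ordered leaf buckets can be illustrated without the thesis's prefix IDs and fast links. The Python sketch below uses a one-dimensional key, sorted bucket boundaries and a small per-bucket rule scan; it is a simplified illustration under assumed rules, not BSOB itself.

```python
# Decision-tree leaves partition the key space into disjoint, sorted intervals,
# so the right bucket can be found by binary search instead of a root-to-leaf walk.

import bisect

class BucketIndex:
    def __init__(self, boundaries, buckets):
        # boundaries[i] is the smallest key handled by buckets[i]; both sorted together.
        self.boundaries = boundaries
        self.buckets = buckets

    def classify(self, key):
        i = bisect.bisect_right(self.boundaries, key) - 1
        if i < 0:
            return None
        for lo, hi, action in self.buckets[i]:   # small linear scan inside one bucket
            if lo <= key <= hi:
                return action
        return None

if __name__ == "__main__":
    idx = BucketIndex(
        boundaries=[0, 64, 128, 192],
        buckets=[[(0, 63, "drop")],
                 [(64, 95, "permit"), (96, 127, "log")],
                 [(128, 191, "permit")],
                 [(192, 255, "drop")]],
    )
    print(idx.classify(100))   # -> "log"
    print(idx.classify(200))   # -> "drop"
```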
9

Hsieh, Yen-Chou, and 謝衍州. "Fast Packet Classification Based on Binary Prefix Search." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/36065667203365441311.

Abstract:
Master's thesis, National Cheng Kung University, Department of Computer Science and Information Engineering. Fast packet classification is required to meet the increasing traffic demand caused by the rapid growth of the Internet. Packet classification is often the first packet-processing step in routers, and because of the complexity of the matching algorithms it is often a bottleneck in the performance of the network infrastructure; most algorithmic solutions do not scale very well. In this thesis, we propose a novel packet classification algorithm based on binary prefix search. The data structure of a d-dimensional rule table is converted into a d-level sorted array that supports binary search on each level. We evaluated our scheme on a variety of filter tables and compared it with other existing schemes. Our experiments show that the proposed scheme performs better than existing schemes in terms of speed and storage requirements. Specifically, the improvements in classification speed over the aggregated bit vector are 29-97% and 63-75% for tables of 1K-20K 2D rules and 100-2000 5D rules, respectively.
10

Kang, Chih-Ho, and 干志豪. "Approaching proximate prefix search and efficient load balancing via locality in P2P networks." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/68058819792678473319.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Information Management. According to Moore's law and the fast evolution of PC components, capability and capacity double about every two years, so people have more resources than they regularly use. To put this potential to better use, Napster lets its members share storage and search a joint jukebox, while projects such as SETI tackle hard problems with help from Internet users. A scalable and efficient search structure is necessary to organize such resources in Internet-like environments. Distributed hash tables meet these requirements but support only exact matching, since object names are sacrificed during search. For wider applications built on overlay networks, we want broader abilities to search for correlated objects given one key. We propose a list-based overlay network, called Treevial, with tree-like search capabilities (prefix/range queries) while keeping the storage load fair through two schemes. First, a simple but powerful distributed storage scheme gives nodes equal shares by transferring load between neighbors in the underlying list. Second, a search structure based on a concept from skip graphs (membership vectors) retrieves any target with a logarithmic number of messages and fetches successive objects in the original namespace at constant cost. Our design provides efficient search operations but depends heavily on locality. One way to obtain good locality is to introduce landmarks during the node join process. We propose a mechanism that uses available nodes in the system rather than specific sites known to every node, since a self-organized structure adapts better to the Internet. This makes nodes that are close in the underlying network also near each other in the overlay, and improves the efficiency of maintenance and search in the original design. Simulation results show that both the cost of storage balancing and the routing stretch are reduced by more than half once locality awareness is added. This work is suited to range-query-intensive environments and provides efficient search operations and good maintenance costs through locality.

Books on the topic "Prefix search"

1

Boido, Claudio. Asset Allocation Strategies and Commodities. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190656010.003.0021.

Abstract:
As a result of the financial crisis of 2007–2008 and subsequent central banking decisions, the asset management industry changed its asset allocation choices. Asset managers are focusing their attention on the search for new asset classes by taking advantage of the new opportunities to capture risk premia with the aim of exceeding the returns given by traditional investments, including traded equities, fixed income securities, and cash. By doing so, they are trying to improve the selection of alternative assets, such as commodities that sometimes have relatively low correlations with traditional assets. The chapter begins by describing the principles of asset allocation, distinguishing between passive and active asset allocation, also focusing on beta and alternative beta. It then concentrates on how investors can gain exposure to commodities through different investment vehicles and strategies.

Book chapters on the topic "Prefix search"

1

Ferragina, Paolo. "On the Weak Prefix-Search Problem." In Combinatorial Pattern Matching. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21458-5_23.

2

Külekci, M. Oğuzhan, Ismail Habib, and Amir Aghabaiglou. "Privacy–Preserving Text Similarity via Non-Prefix-Free Codes." In Similarity Search and Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32047-8_9.

3

Belazzougui, Djamal, Paolo Boldi, Rasmus Pagh, and Sebastiano Vigna. "Fast Prefix Search in Little Space, with Applications." In Algorithms – ESA 2010. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15775-2_37.

4

Dedecker, Ruben, Harm Delva, Pieter Colpaert, and Ruben Verborgh. "A File-Based Linked Data Fragments Approach to Prefix Search." In Lecture Notes in Computer Science. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74296-6_5.

5

Rheinländer, Astrid, Martin Knobloch, Nicky Hochmuth, and Ulf Leser. "Prefix Tree Indexing for Similarity Search and Similarity Joins on Genomic Data." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13818-8_36.

6

I, Tomohiro, Robert W. Irving, Dominik Köppl, and Lorna Love. "Extracting the Sparse Longest Common Prefix Array from the Suffix Binary Search Tree." In String Processing and Information Retrieval. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86692-1_12.

7

Quicke, Donald L. J., Buntika A. Butcher, and Rachel A. Kruft Welton. "More on manipulating text." In Practical R for biologists: an introduction. CABI, 2021. http://dx.doi.org/10.1079/9781789245349.0257.

Abstract:
This chapter provides more information on manipulating text, presenting two examples. Example 1 focuses on standardizing names in a phylogenetic tree description, using R to reformat taxon names, create lists, sort data and use wildcards for cases where the items of interest do not all have the same length. The example tree description concerns parasitoids of caterpillars at a study site that have been DNA barcoded and their possible taxonomic identities added automatically. Example 2 deals with substrings of unknown length: it searches for a numeric substring of unknown length but with a standard prefix, using data from DNA sequences of a set of Aleiodes wasps. The trimming of white space and/or tabs, the use of wildcards to locate internal letter strings, the finding of suffixes and prefixes, the specification of letters, numbers and punctuation, the manipulation and ignoring of character case, and the specification of particular and modifiable character classes are briefly described.
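The chapter's second example, extracting a numeric substring of unknown length that follows a standard prefix, can be illustrated with a short regular-expression sketch. The chapter itself works in R; the Python version below is only an analogy, and the "AL" accession prefix and labels are hypothetical.

```python
# Pull out digits of any length that follow a fixed prefix in sequence labels.

import re

labels = ["Aleiodes_AL1234_Thailand", "Aleiodes_AL98_Laos", "outgroup_no_code"]

for label in labels:
    m = re.search(r"AL(\d+)", label)   # \d+ matches a digit run of any length
    print(label, "->", m.group(1) if m else "no accession number")
```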
8

Quicke, Donald L. J., Buntika A. Butcher, and Rachel A. Kruft Welton. "More on manipulating text." In Practical R for biologists: an introduction. CABI, 2021. http://dx.doi.org/10.1079/9781789245349.0022.

Abstract:
This chapter provides more information on manipulating text, presenting two examples. Example 1 focuses on standardizing names in a phylogenetic tree description, using R to reformat taxon names, create lists, sort data and use wildcards for cases where the items of interest do not all have the same length. The example tree description concerns parasitoids of caterpillars at a study site that have been DNA barcoded and their possible taxonomic identities added automatically. Example 2 deals with substrings of unknown length: it searches for a numeric substring of unknown length but with a standard prefix, using data from DNA sequences of a set of Aleiodes wasps. The trimming of white space and/or tabs, the use of wildcards to locate internal letter strings, the finding of suffixes and prefixes, the specification of letters, numbers and punctuation, the manipulation and ignoring of character case, and the specification of particular and modifiable character classes are briefly described.
9

Ali, Shaukat, Paolo Arcaini, and Tao Yue. "Do Quality Indicators Prefer Particular Multi-objective Search Algorithms in Search-Based Software Engineering?" In Search-Based Software Engineering. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59762-7_3.

10

Vasavi, S., Mallela Padma Priya, and Anu A. Gokhale. "Framework for GeoSpatial Query Processing by Integrating Cassandra With Hadoop." In Geospatial Intelligence. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8054-6.ch017.

Abstract:
We are moving towards digitization, connecting all our devices, such as sensors and cameras, to the internet and producing big data. This big data comes in many varieties and has paved the way for the emergence of NoSQL databases, like Cassandra, that achieve scalability and availability, while the Hadoop framework has been developed for storing and processing distributed data. In this chapter, the authors investigate the storage and retrieval of geospatial data by integrating Hadoop and Cassandra, using both prefix-based partitioning and Cassandra's default partitioning algorithm (the Murmur3Partitioner). A geohash value is generated, which acts as the partition key and also supports effective search, so the time taken to retrieve data is optimized. When users issue spatial queries, such as finding the nearest locations, the Cassandra database is searched using both partitioning techniques, and query response times are compared to determine which method is more effective. Results show that the prefix-based partitioning technique is more efficient than the Murmur3 partitioning technique.
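The reason a geohash prefix makes a good partition key is that nearby points share a long common prefix. The self-contained Python sketch below encodes coordinates as geohashes and truncates them to a coarse partition key; it illustrates the general technique only and is not the chapter's Hadoop/Cassandra implementation (the sample points are invented).

```python
# Standard geohash encoding: interleave longitude/latitude bits, bisecting the
# range at each step, and map every 5 bits to a base-32 character.

_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=6):
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits, out, use_lon = [], [], True   # longitude bit comes first
    while len(out) < precision:
        rng, val = (lon_rng, lon) if use_lon else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        use_lon = not use_lon
        if len(bits) == 5:
            out.append(_BASE32[int("".join(map(str, bits)), 2)])
            bits = []
    return "".join(out)

if __name__ == "__main__":
    # Nearby points share a long geohash prefix, so a short prefix of the
    # geohash can serve as a coarse partition key that co-locates them.
    points = {"harbour": (57.6491, 10.4074), "lighthouse": (57.6502, 10.4101),
              "airport": (55.6180, 12.6560)}
    for name, (lat, lon) in points.items():
        gh = geohash(lat, lon)
        print(name, gh, "partition key:", gh[:4])
```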

Conference papers on the topic "Prefix search"

1

Gollapudi, Sreenivas, and Rina Panigrahy. "A dictionary for approximate string search and longest prefix search." In the 15th ACM international conference. ACM Press, 2006. http://dx.doi.org/10.1145/1183614.1183723.

2

Awerbuch, Baruch, and Christian Scheideler. "Peer-to-peer systems for prefix search." In the twenty-second annual symposium. ACM Press, 2003. http://dx.doi.org/10.1145/872035.872053.

3

Youngin Bae, Jaehoon Kim, Myeong-Wuk Jang, and Byoung-Joon Lee. "A prefix-based smart search in Content-Centric Networking." In 2012 IEEE International Conference on Consumer Electronics (ICCE). IEEE, 2012. http://dx.doi.org/10.1109/icce.2012.6161761.

4

Zhu, Guosheng, Shaohua Yu, and Jinyou Dai. "Binary Search on Prefix Covered Levels for IP Address Lookup." In 2009 5th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM). IEEE, 2009. http://dx.doi.org/10.1109/wicom.2009.5303224.

5

Deng, Dong, Guoliang Li, and Jianhua Feng. "A pivotal prefix based filtering algorithm for string similarity search." In SIGMOD/PODS'14: International Conference on Management of Data. ACM, 2014. http://dx.doi.org/10.1145/2588555.2593675.

6

Anastasiu, David C., and George Karypis. "L2AP: Fast cosine similarity search with prefix L-2 norm bounds." In 2014 IEEE 30th International Conference on Data Engineering (ICDE). IEEE, 2014. http://dx.doi.org/10.1109/icde.2014.6816700.

7

Behdadfar, M., H. Saidi, H. Alaei, and B. Samari. "Scalar Prefix Search: A New Route Lookup Algorithm for Next Generation Internet." In 2009 Proceedings IEEE INFOCOM. IEEE, 2009. http://dx.doi.org/10.1109/infcom.2009.5062179.

8

Agrawal, Manu, Kartik Manchanda, Ribhav Soni, Anurag Lal, and Ravindranath Chowdary. "Parallel Implementation of Local Similarity Search for Unstructured Text Using Prefix Filtering." In 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT). IEEE, 2017. http://dx.doi.org/10.1109/pdcat.2017.00025.

9

Alshamrani, Hussain, Bogdan Ghita, and David Lancaster. "Detecting IP prefix hijacking using data reduction-based and Binary Search Algorithm." In 2015 Internet Technologies and Applications (ITA). IEEE, 2015. http://dx.doi.org/10.1109/itecha.2015.7317374.

10

Lam, Hoang Thanh, Dinh Viet Dung, Raffaele Perego, and Fabrizio Silvestri. "An Incremental Prefix Filtering Approach for the All Pairs Similarity Search Problem." In 2010 12th Asia Pacific Web Conference (APWEB). IEEE, 2010. http://dx.doi.org/10.1109/apweb.2010.30.


Reports on the topic "Prefix search"

1

Dy, Sydney M., Julie M. Waldfogel, Danetta H. Sloan, et al. Integrating Palliative Care in Ambulatory Care of Noncancer Serious Chronic Illness: A Systematic Review. Agency for Healthcare Research and Quality (AHRQ), 2020. http://dx.doi.org/10.23970/ahrqepccer237.

Abstract:
Objectives. To evaluate availability, effectiveness, and implementation of interventions for integrating palliative care into ambulatory care for U.S.-based adults with serious life-threatening chronic illness or conditions other than cancer and their caregivers We evaluated interventions addressing identification of patients, patient and caregiver education, shared decision-making tools, clinician education, and models of care. Data sources. We searched key U.S. national websites (March 2020) and PubMed®, CINAHL®, and the Cochrane Central Register of Controlled Trials (through May 2020). We also engaged Key Informants. Review methods. We completed a mixed-methods review; we sought, synthesized, and integrated Web resources; quantitative, qualitative and mixed-methods studies; and input from patient/caregiver and clinician/stakeholder Key Informants. Two reviewers screened websites and search results, abstracted data, assessed risk of bias or study quality, and graded strength of evidence (SOE) for key outcomes: health-related quality of life, patient overall symptom burden, patient depressive symptom scores, patient and caregiver satisfaction, and advance directive documentation. We performed meta-analyses when appropriate. Results. We included 46 Web resources, 20 quantitative effectiveness studies, and 16 qualitative implementation studies across primary care and specialty populations. Various prediction models, tools, and triggers to identify patients are available, but none were evaluated for effectiveness or implementation. Numerous patient and caregiver education tools are available, but none were evaluated for effectiveness or implementation. All of the shared decision-making tools addressed advance care planning; these tools may increase patient satisfaction and advance directive documentation compared with usual care (SOE: low). Patients and caregivers prefer advance care planning discussions grounded in patient and caregiver experiences with individualized timing. Although numerous education and training resources for nonpalliative care clinicians are available, we were unable to draw conclusions about implementation, and none have been evaluated for effectiveness. The models evaluated for integrating palliative care were not more effective than usual care for improving health-related quality of life or patient depressive symptom scores (SOE: moderate) and may have little to no effect on increasing patient satisfaction or decreasing overall symptom burden (SOE: low), but models for integrating palliative care were effective for increasing advance directive documentation (SOE: moderate). Multimodal interventions may have little to no effect on increasing advance directive documentation (SOE: low) and other graded outcomes were not assessed. For utilization, models for integrating palliative care were not found to be more effective than usual care for decreasing hospitalizations; we were unable to draw conclusions about most other aspects of utilization or cost and resource use. We were unable to draw conclusions about caregiver satisfaction or specific characteristics of models for integrating palliative care. Patient preferences for appropriate timing of palliative care varied; costs, additional visits, and travel were seen as barriers to implementation. Conclusions. 
For integrating palliative care into ambulatory care for serious illness and conditions other than cancer, advance care planning shared decision-making tools and palliative care models were the most widely evaluated interventions and may be effective for improving only a few outcomes. More research is needed, particularly on identification of patients for these interventions; education for patients, caregivers, and clinicians; shared decision-making tools beyond advance care planning and advance directive completion; and specific components, characteristics, and implementation factors in models for integrating palliative care into ambulatory care.