
Dissertations / Theses on the topic 'A* search'


Consult the top 50 dissertations / theses for your research on the topic 'A* search.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Shen, Yipeng. "Meta-search and distributed search systems /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20SHEN.

Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2002. Includes bibliographical references (leaves 138-144). Also available in electronic version. Access restricted to campus users.
2

Bdeir, Ayah. "&lt;random&gt; search." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/36152.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. Includes bibliographical references (p. 77-82).
In the past three decades, especially in the aftermath of September 11th, significant effort has been focused on developing technologies for aviation security. Security inspectors have considerable latitude to wave passengers into additional screening, and pat-downs are extensive and thorough. Immigrants, individuals from minority groups, and persons from specific ethnicities are targeted more often, and accuse authorities of racial profiling and discrimination in both the "random" selection and the actual pat-down procedure, but are often reluctant to resist or file official complaints. Expensive, intrusive technologies at security officials' disposal reinforce an inherent power imbalance between authorities and passengers, and set the space for abuse of power. To date, the only tool at a target's disposal is a verbal or written account of their experience that may or may not be taken seriously. Moreover, existing airport security legislation is flawed and open to interpretation, and official standards used to define a breach are absent or lax. <random> search is an instrument, a neutral, quantifiable witness to the screening process. Undetectable, wearable pressure sensors, implemented with Quantum Tunneling Composites (QTC), are distributed across the undergarment in order to monitor and record inappropriate or unjustified searches. By allowing the traveler to log and share the experience s/he is going through, the 'smart' body suit attempts to quantify the search using a common platform and standardized measurements. The digital record is repeatable and legible enough to be used as evidence to hold security officials accountable for their actions. <random> search is a personal, voluntary technology that does not impose a course of action on the wearer, but rather offers him/her a record to analyze, incriminate, share, perform, or simply keep.
by Ayah Bdeir. S.M.
3

Roscheck, Michael Thomas. "Detection Likelihood Maps for Wilderness Search and Rescue: Assisting Search by Utilizing Searcher GPS Track Logs." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3312.

Abstract:
Every year there are numerous cases of individuals becoming lost in remote wilderness environments. Principles of search theory have become a foundation for developing more efficient and successful search and rescue methods. Measurements can be taken that describe how easy a search object is to detect. These estimates allow the calculation of the probability of detection—the probability that an object would have been detected if it were in the area. This value only provides information about the search area as a whole; it does not provide details about which portions were searched more thoroughly than others. Ground searchers often carry portable GPS devices, and their resulting GPS track logs have recently been used to fill in part of this knowledge gap. We created a system that provides a detection likelihood map, estimating the probability that each point in a search area was seen well enough to detect the search object if it was there. This map will be used to aid ground searchers as they search an assigned area, providing real-time feedback of what has been "seen." The maps will also assist incident commanders as they assess previous searches and plan future ones by providing more detail than is available from viewing GPS track logs alone.
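The core idea, combining independent detection chances from each GPS fix into a per-cell likelihood, can be sketched as follows (the sweep probability and linear distance falloff are illustrative assumptions, not the thesis's actual detection model):

```python
import math

def detection_map(track, grid_size, cell=1.0, sweep_p=0.8, radius=2.0):
    """Estimate, per grid cell, the probability that the search object
    would have been detected, treating each GPS fix as an independent
    glimpse: P(cell) = 1 - prod_i (1 - p_i), with p_i decaying linearly
    with distance from the fix out to a detection radius."""
    nx, ny = grid_size
    pmap = [[0.0] * ny for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            cx, cy = (i + 0.5) * cell, (j + 0.5) * cell  # cell centre
            miss = 1.0  # probability the object was missed by every fix
            for tx, ty in track:
                d = math.hypot(cx - tx, cy - ty)
                if d <= radius:
                    # linear falloff: full sweep_p at the fix, 0 at radius
                    miss *= 1.0 - sweep_p * (1.0 - d / radius)
            pmap[i][j] = 1.0 - miss
    return pmap
```

A searcher's track log then produces a map where cells walked directly over approach the sweep probability, while cells beyond the detection radius stay at zero.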
4

Hurlock, Jonathan. "Twitter search : building a useful search engine." Thesis, Swansea University, 2015. https://cronfa.swan.ac.uk/Record/cronfa43037.

Abstract:
Millions of digital communications are posted over social media every day. Whilst some state that a large proportion of these posts are considered to be babble, we know that some of them actually contain useful information. In this thesis we specifically look at how we can identify what makes some of these communications useful or not useful to someone searching for information over social media. In particular, we look at what makes messages (tweets) from the social network Twitter useful or not useful to users performing search over a corpus of tweets. We identify 16 features that help a tweet be deemed useful, and 17 features as to why a tweet may be deemed not useful to someone performing a search task. From these findings we describe a distributed architecture we have assembled to process large datasets and allow us to perform search over a corpus of tweets. Utilizing this architecture, we index tweets based on our findings and describe a crowdsourcing study we ran to optimize weightings for these features via learning to rank, which quantifies how important each feature is in understanding what makes tweets useful or not for common search tasks performed over Twitter. We release a corpus of tweets for the purpose of evaluating other usefulness systems.
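The learning-to-rank step, adjusting per-feature weights so that tweets judged useful outscore those judged not useful, can be sketched with a simple pairwise update rule (a perceptron-style stand-in; the thesis's actual learning-to-rank method and its 16+17 features are not reproduced here):

```python
def rank_weights(pairs, n_features, epochs=100, lr=0.1):
    """Learn linear feature weights from pairwise preferences.

    Each pair is (better, worse): two feature vectors for tweets where
    crowd judges preferred the first. Whenever the 'worse' tweet scores
    at least as high as the 'better' one, nudge the weights toward the
    preferred tweet's features."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            s_better = sum(wi * b for wi, b in zip(w, better))
            s_worse = sum(wi * c for wi, c in zip(w, worse))
            if s_better <= s_worse:
                w = [wi + lr * (b - c)
                     for wi, b, c in zip(w, better, worse)]
    return w
```

After training, scoring a new tweet is just the dot product of its feature vector with the learned weights.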
5

Nguyen, Huy M. Eng (Huy Le) Massachusetts Institute of Technology. "Improving search quality of the Google search appliance." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53174.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 69-73).
In this thesis, we describe various experiments on the ranking function of the Google Search Appliance to improve search quality. An evolutionary computation framework is implemented and applied to optimize various parameter settings of the ranking function. We evaluate the importance of IDF in the ranking function and achieve small improvements in performance. We also examine many ways of combining the query-independent and query-dependent scores. Lastly, we perform various experiments with signals based on the positions of the query terms in the document.
by Huy Nguyen. M.Eng.
6

Hogan, Seamus D. "Equilibrium search." Thesis, University of Canterbury. Economics, 1989. http://hdl.handle.net/10092/4364.

Abstract:
This thesis is concerned with the problem of incorporating consumer search into equilibrium models, in particular with the relationship between oligopoly models where consumers search and those which assume perfect information. Two chapters consider duality in search. The conventional way of expressing a consumer's search strategy is to describe his decision variable as a function of the search cost. The dual approach is to find the search cost that leaves the consumer just indifferent between two decisions. Since cost is continuous, this search-cost function is differentiable in more parameters than its inverse and so more readily yields comparative static results. Using this function it is possible to express the demand facing firms as explicit functions of the distribution of consumers' search costs. The belief is commonly expressed in the literature that adaptive search models add little insight compared to the analytically simpler
7

Hrnčířová, Jana. "Executive search." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-149840.

Abstract:
The theoretical part of the thesis focuses on the general definition of recruitment and selection, mainly the methods that are used. Furthermore, it describes work in public administration with a focus on employment, and attempts to determine the difference between acquiring employees in the public and private sectors. The work then examines recruitment agencies and HR consulting companies, their history, their legislative definition, and the question of whether it is advantageous to work with recruitment agencies. Afterwards, it turns to the executive search method, its history, its use, and especially how recruitment by this method proceeds. The practical part contains the specific case of an HR consulting company that uses the executive search method. It analyzes the company's recruitment procedure under the executive search method and the hiring success the company achieved in 2011. This part of the thesis is based on long-term observations by the author, who was an employee of the staffing and consulting company and could therefore observe the whole executive search process in practice. Using a questionnaire survey, the work investigates how the executive search method is used in the private and public sectors and what respondents see as the main advantages and disadvantages of the method. The section closes with recommendations on how to use the executive search method in the public sector, and defines the problems that explain why the method is little used in that sector.
8

Vassef, Hooman. "Combining fast search and learning for scalable similarity search." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86566.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (leaves 38-39).
by Hooman Vassef. S.B. and M.Eng.
9

Diriye, A. M. "Search interfaces for known-item and exploratory search tasks." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1343928/.

Abstract:
People’s online search tasks vary considerably from simple known-item search tasks to complex and exploratory ones. Designing user search interfaces that effectively support this range of search tasks is a difficult and challenging problem, made so by the variety of search goals, information needs and search strategies. Over the last few years, this topic has gained more attention from several research communities, but designing more effective search interfaces requires us to understand how they are used during search tasks, and the role they play in people’s information seeking. The aim of the research reported here was to understand how search interfaces support known-item and exploratory search tasks, and how we can leverage this to design better Information Retrieval systems that improve user experience and performance. We begin this thesis by reporting on an initial exploratory user study that investigates the relationship between richer search interfaces and search tasks. We find, through qualitative data analysis, that richer search interfaces that provide more sophisticated search strategies better support exploratory search tasks than simple search interfaces, which were shown to be more effective for known-item search tasks. This analysis revealed several ways search interface features affect information seeking (impede, distract, facilitate, augment, etc.). A follow-up study further developed and validated these findings by analyzing their impact in terms of task completion time, interactive precision and user preference. To expand our knowledge of search tasks, a definition synthesizing the constituent elements from the literature is proposed. Using this definition, our final study builds on our earlier work, and identifies differences in how people interact and use search interfaces for different search tasks. 
We conclude the thesis by discussing the implications of our user studies, and our novel search interfaces, for the design of future user search interfaces. The contributions of this thesis are a demonstration of the impact of search interfaces on information seeking; an analysis and synthesis of the constituent elements of search tasks based on the research in the Information Science community; and a series of novel search interfaces that address existing shortcomings, and support more complex and exploratory search tasks.
10

Bian, Jiang. "Contextualized web search: query-dependent ranking and social media search." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37246.

Abstract:
Due to the information explosion on the Internet, effective information search techniques are required to retrieve the desired information from the Web. Based on extensive analysis of users' search intentions and the variant forms of Web content, we find that both the query and the indexed web content are often associated with various context information, which can provide much essential information to indicate ranking relevance in Web search. This dissertation seeks to develop new search algorithms and techniques that take advantage of rich context information to improve search quality, and consists of two major parts. In the first part, we study the context of the query in terms of the various ranking objectives of different queries. In order to improve ranking relevance, we propose to incorporate such query context information into the ranking model. Two general approaches are introduced in this dissertation. The first incorporates query difference into ranking by introducing query-dependent loss functions, by optimizing which we can obtain a better ranking model. We then investigate another approach, which applies a divide-and-conquer framework for ranking specialization. The second part of this dissertation investigates how to extract the context of specific Web content and exploit it to build more effective search systems. This study is based on newly emerging social media content. Unlike traditional Web content, social media content is inherently associated with much new context information, including content semantics and quality, user reputation, and user interactions, all of which provide useful information for acquiring knowledge from social media. In this dissertation, we seek to develop algorithms and techniques for effective knowledge acquisition from collaborative social media environments by using this dynamic context information.
We first propose a new general framework for searching social media content, which integrates both the content features and the user interactions. Then, a semi-supervised framework is proposed to explicitly compute content quality and user reputation, which are incorporated into the search framework to improve the search quality. Furthermore, this dissertation also investigates techniques for extracting the structured semantics of social media content as new context information, which is essential for content retrieval and organization.
11

Soylemez, Emrah. "Gis-based Search Theory Application For Search And Rescue Planning." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608362/index.pdf.

Abstract:
Search and Rescue (SAR) operations aim at finding missing objects in minimum time within a determined area. There are fundamentally two problems in these operations. The first problem is producing highly reliable probability distribution maps, and the second is determining the search pattern that sweeps the area from the air as fast as possible. In this study, geographic information systems (GIS) and multi-criteria decision analysis (MCDA) are integrated, and a new model is developed based upon Search Theory in order to find the position of the missing object as quickly as possible with optimum resource allocation. The developed model is implemented as a search planning tool for the use of search and rescue planners. Inputs of the model are the last known position of the missing object and related clues about its probable position. In the developed model, the relevant data layers are first arranged according to their priorities based on subjective expert opinion. Then a multi-criteria decision method is selected, and each data layer is multiplied by a weight corresponding to the search expert's ranking. A probability map is then established from the result of the MCDA methods. In the second phase, the most suitable search patterns used in the literature are applied based on the established probability map. The developed model is a new approach to shortening the time in SAR operations and finding the suitable search pattern for data from different crashes.
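The weighted-overlay step, each data layer multiplied by an expert-assigned weight and summed into a probability surface, can be sketched as follows (the layer names and weights here are hypothetical illustrations, not the thesis's actual layers or MCDA method):

```python
def probability_map(layers, weights):
    """Combine raster layers by a weighted sum (simple MCDA overlay),
    then normalize so all cells sum to 1, yielding a probability map."""
    nx, ny = len(layers[0]), len(layers[0][0])
    combined = [[sum(w * layer[i][j] for w, layer in zip(weights, layers))
                 for j in range(ny)] for i in range(nx)]
    total = sum(sum(row) for row in combined)
    return ([[v / total for v in row] for row in combined]
            if total else combined)

# Hypothetical example: a terrain-slope layer and a distance-to-last-known-
# position layer, with the expert weighting the distance layer more heavily.
slope = [[0.2, 0.4], [0.6, 0.8]]
distance = [[0.9, 0.5], [0.3, 0.1]]
pmap = probability_map([slope, distance], weights=[0.3, 0.7])
```

Search patterns can then be planned over the cells with the highest probability mass first.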
12

Dennis, Johansson. "Search Engine Optimization and the Long Tail of Web Search." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-296388.

Abstract:
In the subject of search engine optimization, many methods exist and many aspects are important to keep in mind. This thesis studies the relation between keywords and website ranking in Google Search, and how one can create the biggest positive impact. Keywords with smaller search volume are called "long tail" keywords, and they bear the potential to expand visibility of the website to a larger crowd by increasing the rank of the website for the large fraction of keywords that might not be as common on their own, but together make up for a large amount of the total web searches. This thesis will analyze where on the web page these keywords should be placed, and a case study will be performed in which the goal is to increase the rank of a website with knowledge from previous tests in mind.
13

Burlington, Michael Scott. "Search & exploration, efficient planar search for automated robotic discovery." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0031/MQ64328.pdf.

14

Zhao, Jing. "Full-text keyword search in meta-search and P2P networks /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20ZHAOJ.

15

Chanzy, Philippe. "Range search and nearest neighbor search in k-d trees." Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=68164.

Abstract:
This thesis presents an analysis of the expected complexity of range searching and nearest neighbor searching in random 2-d trees. We show that range searching in a random rectangle $\Delta_x \times \Delta_y$ can be done in $O[\Delta_x \Delta_y\, n + (\Delta_x + \Delta_y)\, n^{\alpha} + \ln n]$ expected time. A matching lower bound is also obtained. We also show that nearest neighbor searching in random 2-d trees by any algorithm must take time bounded by $\Omega[n^{\alpha - 1/2} / (\ln n)^{\alpha}]$, where $\alpha = (\sqrt{17} - 3)/2$. This disproves a conjecture by Bentley that nearest neighbor search in random 2-d trees can be done in $O(1)$ expected time.
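For readers unfamiliar with the data structure, a minimal 2-d tree with rectangular range search can be sketched as follows (a plain recursive implementation for illustration; the thesis analyses the randomized behaviour of such trees, and this code is not drawn from it):

```python
def build_2d_tree(points, depth=0):
    """Recursively build a 2-d tree, alternating the split axis by depth."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2  # median point becomes the splitting node
    return {
        "point": points[mid],
        "left": build_2d_tree(points[:mid], depth + 1),
        "right": build_2d_tree(points[mid + 1:], depth + 1),
    }

def range_search(node, rect, depth=0, found=None):
    """Collect all points inside rect = ((xmin, xmax), (ymin, ymax))."""
    if found is None:
        found = []
    if node is None:
        return found
    (xmin, xmax), (ymin, ymax) = rect
    px, py = node["point"]
    if xmin <= px <= xmax and ymin <= py <= ymax:
        found.append(node["point"])
    axis = depth % 2
    lo, hi = rect[axis]
    # Descend only into subtrees whose coordinate range overlaps the query.
    if lo <= node["point"][axis]:
        range_search(node["left"], rect, depth + 1, found)
    if node["point"][axis] <= hi:
        range_search(node["right"], rect, depth + 1, found)
    return found
```

The pruning in the last two conditionals is what keeps the expected cost below a full scan of the point set.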
16

Burlington, Scott M. Sc. "Search & exploration : efficient planar search for automated robotic discovery." Thesis, McGill University, 1999. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=30352.

Abstract:
An agent is placed in an unknown environment and charged with the task of locating a lost object. What can the agent use as an efficient technique to find the object?
We propose a new algorithm for planar search. The algorithm stems from theoretical work on search games, in particular provably optimal search techniques on restricted domains. This thesis addresses the problem of efficiency in robotic search: having a mobile robot find a target object in an unknown environment with obstacles in an efficient manner. As a side-effect, the robot explores the environment.
Based on previous results, a formal description of the problem is presented along with an algorithm to solve it. This algorithm has good worst-case performance, in terms of its competitive ratio. We show experimental data validating the feasibility of our approach and typical results. Quantitative results are demonstrated showing the advantage of modified spiral search versus traditional approaches.
17

Wendell, Todd. "On pilgrimage : a search for place, a search for self." Virtual Press, 2001. http://liblink.bsu.edu/uhtbin/catkey/1204484.

Abstract:
This research investigates the phenomenon of pilgrimage, seeking to better understand the dimensions of space and the power of place as they pertain to the individual pilgrim's relationship to a foreign environment, while emphasizing the humanistic, experiential and physical aspects embedded within the process of pilgrimage. An examination of the concept of pilgrimage through the experience of an architect, pilgrimage as a vehicle for finding self, exploration of the phenomenology of place, and investigation of the fundamentals or anthropology of experience will also be included. For an architect this unique and relatively untouched area of research has great importance. Architects are constantly searching for an understanding of the relationship between environments and people. The profession, as a whole, is trained to be especially sensitive to aesthetic and cultural aspects of the built environment. Furthermore, the study of pilgrimage, to date, lacks scholarly research conducted by architects, whose unique perception of three-dimensional space and knowledge of the language necessary to build unique places could potentially add insight into many aspects of the pilgrimage phenomenon. These aspects emphasize the role environment plays in pilgrimage and the spatial behavior of the pilgrim's relationship to environment.
Department of Architecture
18

Blaauw, Pieter. "Search engine poisoning and its prevalence in modern search engines." Thesis, Rhodes University, 2013. http://hdl.handle.net/10962/d1002037.

Abstract:
The prevalence of Search Engine Poisoning in trending topics and popular search terms on the web within search engines is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research done between February and August 2012, using both manual and automated techniques, shows us how easily the criminal element manages to insert malicious content into web pages related to popular search terms within search engines. In order to provide the reader with a clear overview and understanding of the motives and methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is done, along with a look at the motives for running these campaigns. Three high-profile case studies are examined, and the various Search Engine Poisoning campaigns associated with these case studies are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google's search engine along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results had been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are then discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high-profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack and the parties behind it.
19

Katila, Riitta. "In search of innovation : search determinants of new product introductions /." Digital version:, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p9992831.

20

Rodic, Daniel. "A Hybrid search heuristic-exhaustive search approach for rule extraction." Pretoria : [s.n.], 2000. http://upetd.up.ac.za/thesis/available/etd-05292006-110006/.

21

Arbelaez, Rodriguez Alejandro. "Learning during search." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00600523.

Abstract:
Autonomous search is a new area of interest in constraint programming, motivated by the recognized importance of using machine learning for the algorithm selection problem, i.e., choosing the most appropriate algorithm for a given instance, with a variety of applications, for example planning and timetabling. In general, autonomous search aims to develop automatic tools to improve the performance of search algorithms, e.g., finding the best parameter configuration for a combinatorial problem solver. This thesis studies three perspectives on automating the resolution of combinatorial problems, in particular constraint satisfaction problems, combinatorial optimization problems, and satisfiability (SAT) problems. First, we present domFD, a new variable-ordering heuristic whose objective is to compute a simplified form of functional dependency, called relaxed dependency. These relaxed dependencies are used to guide the search algorithm at each decision point. Next, we revisit the traditional method of building an algorithm portfolio for the protein structure prediction problem. We propose a new continuous-search paradigm whose objective is to let the user obtain the best performance from their constraint solver. Continuous search operates in two modes: the exploitation mode uses the current model to solve the user's instances; the exploration mode reuses these instances to train and improve the quality of a heuristics model through machine learning. This second phase is executed when the computing unit is idle.
Finally, the last part of this thesis considers adding cooperation during the execution of parallel local search algorithms. We show that sharing the best configuration of each algorithm in a parallel portfolio can considerably improve overall performance.
22

Sahin, Mehmet Ozgur. "Search For Z′." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614289/index.pdf.

Abstract:
In this thesis, an analysis of the forward-backward asymmetry of high-energy electron pairs at the CMS - LHC with a centre-of-mass energy of 7 TeV is presented, and the possibility of searching for a new neutral weak boson Z′ via measuring the forward-backward asymmetry AFB of high-energy electron pairs is discussed. The forward-backward asymmetry is a natural result of the interference between the neutral current mediators: the photon and the Z boson. A new neutral gauge boson would also interfere with these mediators, and this new interference would either enhance the forward-backward asymmetry at high energies or suppress it. In this analysis, 4.67 fb⁻¹ of data collected at the CMS experiment in 2011 is used.
23

Sinozic, Tanja. "SEARCH Project Delphi." WU Vienna University of Economics and Business, 2012. http://epub.wu.ac.at/3750/1/sre%2Ddisc%2D2012_09.pdf.

Abstract:
This paper describes a plan to implement the Delphi method to obtain consensus of expert opinions on policy statements derived from research evidence. The evidence is based on a three-year large-scale European Union (EU) research project ("SEARCH"). The SEARCH project focuses on trade, migration, innovation and institutional issues in the relationships between the European Union (EU) and its neighbouring countries (NCs). The main objective of the use of Delphi in this context is to obtain as many high-quality responses and opinions as possible on the policy implications of SEARCH project results. The SEARCH Project Delphi aims to inform policy formulation at the EU level, specifically European Neighbourhood Policy (ENP). (author's abstract)
Series: SRE - Discussion Papers
24

Hein, Birgit. "Quantum search algorithms." Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11512/.

Abstract:
In this thesis two quantum search algorithms on two different graphs, a hypercube and a d-dimensional square lattice, are analysed, and some applications of the lattice search are discussed. The approach in this thesis generalises a picture drawn by Shenvi, Kempe and Whaley, which was later adapted by Ambainis, Kempe and Rivosh. It defines a one-parameter family of unitary operators U_λ with parameter λ. It will be shown that two eigenvalues of U_λ form an avoided crossing at the λ-value where U_λ is equal to the old search operator. This generalised picture opens the way for a construction of two approximate eigenvectors at the crossing and gives rise to a 2×2 model Hamiltonian that is used to approximate the operator U_λ near the crossing. The Hamiltonian thus defined can be used to calculate the leading order of the search time and success probability for the search. To the best of my knowledge only the scaling of these quantities has been known. For the algorithm searching the regular lattice, a generalisation of the model Hamiltonian for m target vertices is constructed. This algorithm can be used to send a signal from one vertex of the graph to a set of vertices. The signal is transmitted between these vertices exclusively and is localised only on the sender and the receiving vertices, while the probability to measure the signal at one of the remaining vertices is significantly smaller. This effect can be used to introduce an additional sender to search settings and send a continuous signal to all target vertices, where the signal will localise. This is an improvement compared to the original search algorithm, as it does not need to know the number of target vertices.
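The avoided-crossing picture reduces, in the generic two-level case, to a Hamiltonian of the following form (a standard textbook sketch under that genericity assumption, not the specific operators constructed in the thesis):

```latex
% Generic two-level avoided crossing near the critical parameter value
H(\lambda) =
\begin{pmatrix}
  \epsilon(\lambda) & \Delta \\
  \Delta            & -\epsilon(\lambda)
\end{pmatrix},
\qquad
E_{\pm}(\lambda) = \pm\sqrt{\epsilon(\lambda)^2 + \Delta^2}.
```

The eigenvalue gap $2\sqrt{\epsilon(\lambda)^2 + \Delta^2}$ reaches its minimum $2\Delta$ where $\epsilon(\lambda) = 0$, i.e., at the crossing; rotating between the two approximate eigenvectors then takes time of order $1/\Delta$, which is why such a model yields the leading order of the search time rather than only its scaling.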
APA, Harvard, Vancouver, ISO, and other styles
25

Whittley, Ian Murray. "Tabu search-revisited." Thesis, University of East Anglia, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Walcott, P. A. "Colour object search." Thesis, City University London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Bota, Horatiu S. "Composite web search." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/38925/.

Full text
Abstract:
The figure above shows Google’s results page for the query “taylor swift”, captured in March 2016. Assembled around the long-established list of search results is content extracted from various sources — news items and tweets merged within the results ranking, images, songs and social media profiles displayed to the right of the ranking, in an interface element that is known as an entity card. Indeed, the entire page seems more like an assembly of content extracted from various sources, rather than just a ranked list of blue links. Search engine result pages have become increasingly diverse over the past few years, with most commercial web search providers responding to user queries with different types of results, merged within a unified page. The primary reason for this diversity on the results page is that the web itself has become more diverse, given the ease with which creating and hosting different types of content on the web is possible today. This thesis investigates the aggregation of web search results retrieved from various document sources (e.g., images, tweets, Wiki pages) within information “objects” to be integrated in the results page assembled in response to user queries. We use the terms “composite objects” or “composite results” to refer to such objects, and throughout this thesis use the terminology of Composite Web Search (e.g., result composition) to distinguish our approach from other methods of aggregating diverse content within a unified results page (e.g., Aggregated Search). In our definition, the aspects that differentiate composite information objects from aggregated search blocks are that composite objects (i) contain results from multiple sources of information, (ii) are specific to a common topic or facet of a topic rather than a grouping of results of the same type, and (iii) are not a uniform ranking of results ordered only by their topical relevance to a query.
The most widely used type of composite result in web search today is the entity card. Entity cards have become extremely popular over the past few years, with some informal studies suggesting that entity cards are now shown on the majority of result pages generated by Google. As composite results are used more and more by commercial search engines to address information needs directly on the results page, understanding the properties of such objects and their influence on searchers is an essential aspect of modern web search science. The work presented throughout this thesis attempts the task of studying composite objects by exploring users’ perspectives on accessing and aggregating diverse content manually, by analysing the effect composite objects have on search behaviour and perceived workload, and by investigating different approaches to constructing such objects from diverse results. Overall, our experimental findings suggest that items which play a central role within composite objects are decisive in determining their usefulness, and that the overall properties of composite objects (i.e., relevance, diversity and coherence) play a combined role in mediating object usefulness.
APA, Harvard, Vancouver, ISO, and other styles
28

Khoo, Wei Ming. "Decompilation as search." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Mudaranthakam, Dinesh pal Indrapal. "International faculty search." Kansas State University, 2011. http://hdl.handle.net/2097/8759.

Full text
Abstract:
Master of Science<br>Department of Computing and Information Sciences<br>Daniel A. Andresen<br>This application enables users to search the database for international faculty members currently working in the veterinary department. It also helps users learn more about the faculty members in detail: their specialization, area of expertise, origin, languages spoken and teaching experience. The main objective of this project is to develop an online application in which faculty members can be searched by three major criteria: the department to which the faculty member belongs, the faculty member's area of expertise, or the country. The application is designed so that a combination of these three drop-down lists also returns results if any such combination exists. The major attraction of this application is that the faculty members are plotted on a world map using the Bing API. A red dot is placed on each country from which faculty members hail; hovering the mouse pointer over a dot pops up the names of the faculty who hail from that country. These names are hyperlinks that, when clicked, direct the user to the respective faculty member's profile. The project is implemented in C#.NET on Microsoft Visual Studio 2008, along with XML parsing techniques and XML files that store the faculty members' profiles. My primary focus is to get familiar with the .NET framework and to be able to code in C#.NET, and also to learn to use MS Access as the database for storing and retrieving data.
APA, Harvard, Vancouver, ISO, and other styles
30

Speicher, Maximilian. "Search Interaction Optimization." Doctoral thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-208102.

Full text
Abstract:
Over the past 25 years, search engines have become one of the most important entry points to the World Wide Web, if not the most important. This development has been primarily due to the continuously increasing number of available documents, which are highly unstructured. Moreover, the general trend is towards classifying search results into categories and presenting them in terms of semantic information that answers users' queries without having to leave the search engine. With the growing number of documents and technological enhancements, the needs of users as well as search engines are continuously evolving. Users want to be presented with increasingly sophisticated results and interfaces while companies have to place advertisements and make revenue to be able to offer their services for free. To address the above needs, it is more and more important to provide highly usable and optimized search engine results pages (SERPs). Yet, existing approaches to usability evaluation are often costly or time-consuming and mostly rely on explicit feedback. They are either not efficient or not effective, while SERP interfaces are commonly optimized primarily from a company's point of view. Moreover, existing approaches to predicting search result relevance, which are mostly based on clicks, are not tailored to the evolving kinds of SERPs. For instance, they fail if queries are answered directly on a SERP and no clicks need to happen. Applying Human-Centered Design principles, we propose a solution to the above in terms of a holistic approach that intends to satisfy both searchers and developers. It provides novel means to counteract exclusively company-centric design and to make use of implicit user feedback for efficient and effective evaluation and optimization of usability and, in particular, relevance. We define personas and scenarios from which we infer unsolved problems and a set of well-defined requirements. 
Based on these requirements, we design and develop the Search Interaction Optimization toolkit. Using a bottom-up approach, we moreover define an eponymous, higher-level methodology. The Search Interaction Optimization toolkit comprises a total of six components. We start with INUIT [1], which is a novel minimal usability instrument specifically aiming at meaningful correlations with implicit user feedback in terms of client-side interactions. Hence, it serves as a basis for deriving usability scores directly from user behavior. INUIT has been designed based on reviews of established usability standards and guidelines as well as interviews with nine dedicated usability experts. Its feasibility and effectiveness have been investigated in a user study. Also, a confirmatory factor analysis shows that the instrument can reasonably well describe real-world perceptions of usability. Subsequently, we introduce WaPPU [2], which is a context-aware A/B testing tool based on INUIT. WaPPU implements the novel concept of Usability-based Split Testing and enables automatic usability evaluation of arbitrary SERP interfaces based on a quantitative score that is derived directly from user interactions. For this, usability models are automatically trained and applied based on machine learning techniques. In particular, the tool is not restricted to evaluating SERPs, but can be used with any web interface. Building on the above, we introduce S.O.S., the SERP Optimization Suite [3], which comprises WaPPU as well as a catalog of best practices [4]. Once it has been detected that an investigated SERP's usability is suboptimal based on scores delivered by WaPPU, corresponding optimizations are automatically proposed based on the catalog of best practices. This catalog has been compiled in a three-step process involving reviews of existing SERP interfaces and contributions by 20 dedicated usability experts. 
While the above focus on the general usability of SERPs, presenting the most relevant results is specifically important for search engines. Hence, our toolkit contains TellMyRelevance! (TMR) [5] — the first end-to-end pipeline for predicting search result relevance based on users’ interactions beyond clicks. TMR is a fully automatic approach that collects necessary information on the client, processes it on the server side and trains corresponding relevance models based on machine learning techniques. Predictions made by these models can then be fed back into the ranking process of the search engine, which improves result quality and hence also usability. StreamMyRelevance! (SMR) [6] takes the concept of TMR one step further by providing a streaming-based version. That is, SMR collects and processes interaction data and trains relevance models in near real-time. Based on a user study and large-scale log analysis involving real-world search engines, we have evaluated the components of the Search Interaction Optimization toolkit as a whole—also to demonstrate the interplay of the different components. S.O.S., WaPPU and INUIT have been engaged in the evaluation and optimization of a real-world SERP interface. Results show that our tools are able to correctly identify even subtle differences in usability. Moreover, optimizations proposed by S.O.S. significantly improved the usability of the investigated and redesigned SERP. TMR and SMR have been evaluated in a GB-scale interaction log analysis as well using data from real-world search engines. Our findings indicate that they are able to yield predictions that are better than those of competing state-of-the-art systems considering clicks only. Also, a comparison of SMR to existing solutions shows its superiority in terms of efficiency, robustness and scalability. 
The thesis concludes with a discussion of the potential and limitations of the above contributions and provides an overview of potential future work.<br>Over the past 25 years, search engines have developed into one of the most important, if not the most important, entry points to the World Wide Web (WWW). This development results above all from the continuously growing number of documents which are available in the WWW but organised in a highly unstructured manner. Moreover, search results are increasingly classified into categories and provided in the form of semantic information that can be consumed directly within the search engine, reflecting a general trend. With the growing number of documents and technological innovations, the needs of both users and search engines are constantly changing. Users want to be supplied with ever better search results and interfaces, while search engine companies have to place advertising and generate profit in order to offer their services free of charge. This entails the necessity of providing highly usable and optimized search engine results pages (SERPs) for users. However, common methods for evaluating and optimizing usability are mostly costly or time-consuming and usually rely on explicit feedback. They are thus either not efficient or not effective, which is why optimizations of search engine interfaces are often carried out primarily from the company's point of view. Furthermore, existing methods for predicting the relevance of search results, which are largely based on evaluating clicks, are not tailored to novel kinds of SERPs. For example, they fail when search queries are answered directly on the results page and the user does not need to click. 
Based on the principles of human-centered design, we develop a solution to the problems described above in the form of a holistic approach oriented towards both users and developers. Our solution provides automatic methods to counteract company-centric design and to exploit implicit user feedback for the efficient and effective evaluation and optimization of usability and, in particular, result relevance. We define personas and scenarios from which we derive unsolved problems and concrete requirements. Based on these requirements, we develop a corresponding toolkit, the Search Interaction Optimization toolkit. Using a bottom-up approach, we also define an eponymous methodology at a higher level of abstraction. The Search Interaction Optimization toolkit consists of six components in total. First, we present INUIT [1], a novel, minimal instrument for measuring usability that specifically targets meaningful correlations with implicit user feedback in the form of client-side interactions. For this reason, it serves as a basis for deriving quantitative usability scores directly from user behavior. The instrument was designed based on reviews of established usability standards and guidelines as well as expert interviews. The feasibility and effectiveness of INUIT were examined in a user study and furthermore confirmed by a confirmatory factor analysis. Subsequently, we describe WaPPU [2], a context-aware A/B testing tool based on INUIT. 
It implements the novel concept of Usability-based Split Testing and enables the automatic usability evaluation of arbitrary SERPs based on the previously mentioned quantitative scores, which are derived directly from user interactions. To this end, machine learning techniques are applied to automatically generate and apply corresponding usability models. In particular, WaPPU is not restricted to evaluating search engine results pages, but can be applied to any web interface in the form of a web page. Building on this, we describe S.O.S., the SERP Optimization Suite [3], which comprises the WaPPU tool as well as a novel catalog of best practices [4]. As soon as a suboptimal usability score measured by WaPPU is detected, corresponding countermeasures and optimizations for the investigated results page are automatically proposed based on the catalog of best practices. The catalog was compiled in a three-step process involving reviews of existing search engine results pages as well as adaptation and verification by 20 usability experts. The tools discussed so far focus on the general usability of SERPs, but presenting the results most relevant to the user is eminently important for a search engine. Since relevance is a subset of usability, our toolkit therefore contains the tool TellMyRelevance! (TMR) [5], the first end-to-end solution for predicting search result relevance based on client-side user interactions. TMR is a fully automatic approach that captures the required data on the client, processes it on the server side and provides corresponding relevance models. 
The predictions made by these models can in turn be fed back into the search engine's ranking process, which ultimately leads to an improvement in usability. StreamMyRelevance! (SMR) [6] extends the concept of TMR by providing a streaming-based approach, in which the collection and processing of the data as well as the provision of the relevance models happen in near real-time. Based on extensive user studies with real search engines, we evaluated the developed toolkit as a whole, also in order to demonstrate the interplay of the individual components. S.O.S., WaPPU and INUIT were employed for the evaluation and optimization of a real-world search engine results page. The results show that our tools are able to correctly identify even small deviations in usability. Moreover, the optimizations proposed by S.O.S. led to a significant improvement in the usability of the investigated and redesigned results page. TMR and SMR were evaluated on data volumes in the double-digit gigabyte range originating from two real hotel booking portals. Both show the potential to deliver better predictions than competing systems that only consider clicks on results. Compared to all other systems examined, SMR moreover shows clear advantages in efficiency, robustness and scalability. The dissertation concludes with a discussion of the potential and limitations of the research contributions developed and gives an overview of potential further and future research work.
APA, Harvard, Vancouver, ISO, and other styles
31

Sowmya, Mathukumalli. "Job search portal." Kansas State University, 2016. http://hdl.handle.net/2097/34518.

Full text
Abstract:
Master of Science<br>Department of Computer Science<br>Mitchell L. Neilsen<br>Finding jobs that best suit one's interests and skill set is quite a challenging task for job seekers. The difficulties arise from not having proper knowledge of an organization's objectives, its work culture and its current job openings. In addition, finding the right candidate with the desired qualifications to fill current job openings is an important task for the recruiters of any organization. Online job search portals have certainly made job seeking convenient on both sides. A job portal is the place where the recruiter and the job seeker meet, each aiming to fulfill their individual requirements. Portals are the cheapest as well as the fastest source of communication, reaching a wide audience with a single click irrespective of geographical distance. The web application "Job Search Portal" provides an easy and convenient search application for job seekers to find their desired jobs and for recruiters to find the right candidates. Job seekers from any background can search for current job openings. Job seekers can register with the application and update their details and skill set. They can search for available jobs and apply to their desired positions. Android, being open source, has already made its mark in mobile application development. To make things handy, the user functionalities are developed as an Android application. Employers can register with the application and post their current openings. They can view the job applicants and screen them according to the best fit. Users can provide a review of an organization and share their interview experience, which can be viewed by employers.
APA, Harvard, Vancouver, ISO, and other styles
32

Deva, Swetha. "Online job search." Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/1111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Cayton, Lawrence. "Bregman proximity search." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3355478.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.<br>Title from first page of PDF file (viewed June 18, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 82-87).
APA, Harvard, Vancouver, ISO, and other styles
34

Bedrax-Weiss, Tania. "Optimal search protocols /." view abstract or download file of text, 1999. http://wwwlib.umi.com/cr/uoregon/fullcit?p9948016.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 1999.<br>Typescript. Includes vita and abstract. Includes bibliographical references (leaves 206-211). Also available for download via the World Wide Web; free to University of Oregon users. Address: http://wwwlib.umi.com/cr/uoregon/fullcit?p9948016.
APA, Harvard, Vancouver, ISO, and other styles
35

Sawant, Anup Satish. "Semantic web search." Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1263410119/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wandrag, Daniel Barend Rudolp. "The bioavailability of amino acids and minerals in commercial dog food." Pretoria : [s.n.], 1999. http://explore.up.ac.za/search/.b?SEARCH=b1426767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Nash, Wyatt J. "Calculation of barrier search probability of detection for arbitrary search tracks." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA378067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

INAGAKI, Yasuyoshi, Tomio HIRATA, and Xuehou TAN. "Designing Efficient Geometric Search Algorithms Using Persistent Binary-Binary Search Trees." Institute of Electronics, Information and Communication Engineers, 1994. http://hdl.handle.net/2237/15061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Waldorf, Mahlon R. "An analysis of speculative search for asynchronous parallel alpha-beta search." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0033/MQ64473.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Elbassuoni, Shady. "Adaptive personalization of web search : task sensitive approach to search personalization /." Saarbrücken : VDM Verlag Dr. Müller, 2008. http://d-nb.info/988664186/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Скиданенко, Максим Сергійович, Максим Сергеевич Скиданенко, Maksym Serhiiovych Skydanenko, and A. S. Skidanenko. "Visual search engines as a search tool in the learning process." Thesis, Сумський державний університет, 2012. http://essuir.sumdu.edu.ua/handle/123456789/29424.

Full text
Abstract:
The main objective of the University is the promotion of successful professionals who have practical skills, can predict, model, process information, and integrate the knowledge obtained in a higher educational establishment. When you are citing the document, use the following link http://essuir.sumdu.edu.ua/handle/123456789/29424
APA, Harvard, Vancouver, ISO, and other styles
42

Furcy, David Andre. "Speeding Up the Convergence of Online Heuristic Search and Scaling Up Offline Heuristic Search." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4855.

Full text
Abstract:
The most popular methods for solving the shortest-path problem in Artificial Intelligence are heuristic search algorithms. The main contributions of this research are new heuristic search algorithms that are either faster or scale up to larger problems than existing algorithms. Our contributions apply to both online and offline tasks. For online tasks, existing real-time heuristic search algorithms learn better informed heuristic values and in some cases eventually converge to a shortest path by repeatedly executing the action leading to a successor state with a minimum cost-to-goal estimate. In contrast, we claim that real-time heuristic search converges faster to a shortest path when it always selects an action leading to a state with a minimum f-value, where the f-value of a state is an estimate of the cost of a shortest path from start to goal via the state, just like in the offline A* search algorithm. We support this claim by implementing this new non-trivial action-selection rule in FALCONS and by showing empirically that FALCONS significantly reduces the number of actions to convergence of a state-of-the-art real-time search algorithm. For offline tasks, we improve on two existing ways of scaling up best-first search to larger problems. First, it is known that the WA* algorithm (a greedy variant of A*) solves larger problems when it is either diversified (i.e., when it performs expansions in parallel) or committed (i.e., when it chooses the state to expand next among a fixed-size subset of the set of generated but unexpanded states). We claim that WA* solves even larger problems when it is enhanced with both diversity and commitment. We support this claim with our MSC-KWA* algorithm. Second, it is known that breadth-first search solves larger problems when it prunes unpromising states, resulting in the beam search algorithm. 
We claim that beam search quickly solves even larger problems when it is enhanced with backtracking based on limited discrepancy search. We support this claim with our BULB algorithm. We show that both MSC-KWA* and BULB scale up to larger problems than several state-of-the-art offline search algorithms in three standard benchmark domains. Finally, we present an anytime variant of BULB and apply it to the multiple sequence alignment problem in biology.
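The f-value-based action-selection rule described in the abstract above can be illustrated with a minimal sketch (names and data are hypothetical toy values; the actual FALCONS algorithm also updates its learned g- and h-estimates after every move):

```python
# Schematic real-time action selection: instead of moving to the successor
# with the minimum cost-to-goal estimate h(s), move to the successor with
# the minimum f-value f(s) = g(s) + h(s), as in offline A*, where g(s)
# estimates the cost of a shortest path from the start to s.

def select_action(successors, g, h):
    """Return the successor state with the minimum f-value g(s) + h(s)."""
    return min(successors, key=lambda s: g[s] + h[s])

# Toy example: h alone would prefer state "b" (h=2 < h=3), but the f-value
# rule prefers "a", whose estimated start-to-goal path is shorter overall.
g = {"a": 1, "b": 4}
h = {"a": 3, "b": 2}
print(select_action(["a", "b"], g, h))  # prints: a  (f(a)=4 < f(b)=6)
```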
APA, Harvard, Vancouver, ISO, and other styles
43

Chiravirakul, Pawitra. "Search satisfaction : choice overload, variety seeking and serendipity in search engine use." Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.665389.

Full text
Abstract:
Users of current web search engines are often presented with a large number of returns after submitting a search term, and choosing from the list might lead to them suffering from the effect of “choice overload”, as reported in earlier work. However, these search results are typically presented in an ordered list so as to simplify the search process, which may influence search behaviour and moderate the effect of the number of choices. In this thesis, the effects of the number of search returns and their ordering on user behaviour and satisfaction are explored. A mixed methods approach combining multiple data collection and analysis techniques is employed in order to investigate these effects in terms of three specific issues, namely, choice overload in search engine use, variety seeking behaviour in a situation where multiple aspects of search results are required, and the chance of encountering serendipity. The participants were given search tasks and asked to choose from the sets of returns under experimental conditions. The results from the first three experiments revealed that large numbers of search results returned from a search engine tended to be associated with more satisfaction with the selected options when the decision was made without a time limit. In addition, when time was more strongly constrained, choices from a small number of returns led to relatively higher satisfaction than for a large number. Moreover, users’ behaviour was strongly influenced by the ordering of options in that they often looked at and selected options presented near the top of the result lists when they perceived that the ranking was reliable. The next experiment further investigated the ranking reliance behaviour when potentially useful search results were presented in supplementary lists. 
The findings showed that when users required a variety of options, they relied less on the ordering and tended to adapt their search strategies to seek variety by browsing more returns through the list, selecting options located further down, and/or choosing the supplementary web pages provided. Finally, with the aim of illustrating how chance encountering can be supported, a model of an automated synonym-enhanced search was developed and employed in a real-world literature search. The results showed that the synonym search was occasionally useful for providing a variety of search results, which in turn increased users’ opportunity to come across serendipitous experiences.
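The synonym-enhanced search mentioned in the abstract above can be sketched as simple query expansion (the synonym table and function names here are illustrative assumptions; the thesis's actual model is not specified in this listing):

```python
# Hypothetical sketch of synonym-enhanced query expansion: each query term
# is looked up in a synonym table, and the query is expanded with any
# synonyms found before it is submitted to the search engine, so that
# results the user did not explicitly ask for can surface serendipitously.
SYNONYMS = {
    "search": ["retrieval", "lookup"],
    "result": ["hit"],
}

def expand_query(terms):
    """Expand each term with its synonyms, preserving query order."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

print(expand_query(["search", "page"]))
# prints: ['search', 'retrieval', 'lookup', 'page']
```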
APA, Harvard, Vancouver, ISO, and other styles
44

Moghaddam, Mehdi Minachi. "Internet search techniques : using word count, links and directory structure as Internet search tools." Thesis, University of Bedfordshire, 2005. http://hdl.handle.net/10547/314080.

Full text
Abstract:
As the Web grows in size, it becomes increasingly important that ways are developed to maximise the efficiency of the search process and index its contents with minimal human intervention. An evaluation is undertaken of current popular search engines which use a centralised index approach. Using a number of search terms and metrics that measure similarity between sets of results, it was found that there is very little commonality between the outcomes of the same search performed using different search engines. A semi-automated system for searching the web is presented, the Internet Search Agent (ISA), which employs a method for indexing based upon the idea of "fingerprint types". These fingerprint types are based upon the text and links contained in the web pages being indexed. Three examples of fingerprint type are developed: the first concentrates upon the textual content of the indexed files, while the other two augment this with the use of links to and from these files. By looking at the results returned as a search progresses, in terms of the numbers and measures of content of results for effort expended, comparisons can be made between the three fingerprint types. The ISA model allows the searcher to be presented with results in context and potentially allows for distributed searching to be implemented.
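The purely textual fingerprint type described above can be sketched as a bag of word counts compared with cosine similarity (an illustrative assumption, not the ISA implementation; the other two fingerprint types additionally use links to and from the page):

```python
# A page's fingerprint is its word-count vector; two fingerprints are
# compared with cosine similarity (1.0 = identical word distribution,
# 0.0 = no words in common).
from collections import Counter
import math

def fingerprint(text):
    """Build a word-count fingerprint from a page's text."""
    return Counter(text.lower().split())

def similarity(fp_a, fp_b):
    """Cosine similarity between two word-count fingerprints."""
    dot = sum(fp_a[w] * fp_b[w] for w in set(fp_a) & set(fp_b))
    norm = (math.sqrt(sum(c * c for c in fp_a.values()))
            * math.sqrt(sum(c * c for c in fp_b.values())))
    return dot / norm if norm else 0.0
```

For example, `similarity(fingerprint("web search agent"), fingerprint("web search engine"))` is between 0 and 1, since the pages share two of three words.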
APA, Harvard, Vancouver, ISO, and other styles
45

Nilsson, Rebecca, and Christa Alanko. "STREAMLINE THE SEARCH ENGINE MARKETING STRATEGY : Generational Driven Search Behavior on Google." Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-70149.

Full text
Abstract:
The expanded internet usage has resulted in an increased activity at web-based search engines. Companies are therefore devoting a large portion of their online marketing budget on Search Engine Marketing (abbreviated SEM) in order to reach potential online consumers searching for products. SEM comprises Search Engine Advertising (SEA) and Search Engine Optimization (SEO) which are two dissimilar marketing tools companies can invest in to reach the desired customer segments. It is therefore of great interest for companies in different product markets to have knowledge of which SEM strategy to utilize. The statement leads to the purpose of the thesis which is to investigate which SEM strategy is the most suitable for companies in different markets, SEA or SEO?. The purpose of the thesis is derived to the research problem: How does the search behavior of consumers differ between the two SEM tools, SEO and SEA?. Initially, in order to answer the research problem, a theoretical framework was conducted consisting of theories from previous research. To collect primary data observations of 60 test subjects was performed in accordance with the Experimental Vignette Methodology. The analysis consists of a comparison between the collected data and the theories included in the frame of reference, to identify similarities and differences. The SPSS analysis of the result revealed numerous findings such as the two-way interactions of the factors degree of involvement and the click rate of SEM, as well as the choice of either a head or a tail keyword and the degree of involvement. The analysis further revealed a three-way interaction which suggests that the degree of involvement, and the use of either a head or tail keyword affects the choice of SEM. Additionally, the result shows that customers using brands as keywords are more likely to click on an organic link rather than on a paid ad. However, when adding the factor age to the analysis the results turn insignificant. 
As the search behavior of customers using search engines is a relatively unexplored area of research, the thesis has contributed knowledge useful for companies and marketing agencies, among others. However, due to the ongoing expansion of search engine usage, further research in the area is of great interest and may reveal additional findings.
APA, Harvard, Vancouver, ISO, and other styles
46

Henriksson, Adam. "Alternative Search : From efficiency to experience." Thesis, Umeå universitet, Institutionen Designhögskolan, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-97836.

Full text
Abstract:
Search engines of today focus on efficiently and accurately generating search results. Yet there is much to be explored in the way people interact with these applications and relate to their content. Individuals are unique, with complex preferences, motives and expectations. It is important not only to be sensitive to these differences but also to accommodate the extremes. Enhancing a search engine relies not only on technological development but also on exploring potential user experiences from broader perspectives, ones which not only gratify needs for information but support a diversity of journeys. The aim of the project is to develop an alternative search engine with different functionality, based on new values that reflect contemporary needs. The result, Exposeek, is an experiential prototype supporting exploratory browsing based on principles of distributed infrastructure, transparent computation and serendipitous information. Suggestive queries, legible algorithms and augmented results provide additional insights and present an alternative way to seek and peruse the Web.<br>Search Engines, Interaction Design
APA, Harvard, Vancouver, ISO, and other styles
47

Bjørklund, Truls A. "Column Stores versus Search Engines and Applications to Search in Social Networks." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-14782.

Full text
Abstract:
Search engines and database systems both play important roles as we store and organize ever increasing amounts of information and still require the information to be easily accessible. Research on these two types of systems has traditionally been partitioned into two fields, information retrieval and databases, and the integration of these two fields has been a popular research topic. Rather than attempting to integrate the two fields, this thesis begins with a comparison of the technical similarities between search engines and a specific type of database system often used in decision support systems: column stores. Based on an initial assessment of the technical similarities, which includes an evaluation of the feasibility of creating a hybrid system that supports both workloads, the papers in this thesis investigate how the identified similarities can be used as a basis for improving the efficiency of the different systems. To improve the efficiency of processing decision support workloads, the use of inverted indexes as an alternative to bitmap indexes is evaluated. We develop a query processing framework for compressed inverted indexes in decision support workloads and find that it outperforms state-of-the-art compressed bitmap indexes by being significantly more compact, and also improves the query processing efficiency for most queries. Keyword search in social networks with access control is also addressed in this thesis, and a space of solutions is developed along two axes. One of the axes defines the set of inverted indexes that are used in the solution, and the other defines the meta-data used to filter out inaccessible results. With a flexible and efficient search system based on a column-oriented storage system, we conduct a thorough set of experiments that illuminate the trade-offs between different extremes in the solution space. We also develop a hybrid scheme in between two of the best extremes.
The hybrid approach uses cost models to find the most efficient solution for a particular workload. Together with an efficient query processing framework based on our novel HeapUnion operator, this results in a system that is efficient for a wide range of workloads that consist of updates and searches with access control in a social network.
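As a rough illustration of the general technique this abstract builds on (a minimal sketch, not the thesis's actual system or its HeapUnion operator), an inverted index maps each term to a postings list of document ids, and keyword search with access control can post-filter the intersected postings against the searcher's permissions:

```python
from collections import defaultdict

class AccessControlledIndex:
    """Minimal inverted index: term -> set of doc ids.
    Search intersects the postings of all query terms, then filters
    out documents the searching user is not allowed to see."""

    def __init__(self):
        self.postings = defaultdict(set)   # term -> {doc_id, ...}
        self.acl = {}                      # doc_id -> {user, ...}

    def add_document(self, doc_id, text, allowed_users):
        self.acl[doc_id] = set(allowed_users)
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query, user):
        terms = query.lower().split()
        if not terms:
            return []
        # Conjunctive keyword search: intersect the postings lists.
        result = set.intersection(*(self.postings[t] for t in terms))
        # Access control as a post-filter on the candidate documents.
        return sorted(d for d in result if user in self.acl[d])

idx = AccessControlledIndex()
idx.add_document(1, "column stores versus search engines", {"alice", "bob"})
idx.add_document(2, "search in social networks", {"alice"})
print(idx.search("search", "bob"))   # prints [1]: bob cannot see document 2
```

The thesis explores the design space more finely (e.g. encoding access meta-data in the indexes themselves rather than post-filtering); the sketch only shows the simplest extreme of that space.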
APA, Harvard, Vancouver, ISO, and other styles
48

Yating, Zhang. "A Study on Object Search and Relationship Search from Text Archive Data." 京都大学 (Kyoto University), 2016. http://hdl.handle.net/2433/217201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Blundon, Elizabeth Gwynne. "Search through time is like search through space : behavioural and electrophysiological evidence." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54313.

Full text
Abstract:
We conducted four experiments comprising sequential auditory and visual searches in order to further explore the generalizability of the search asymmetry phenomenon to different sensory modalities, and to the sequential presentation of items in search arrays. It has been shown that search time to identify targets that contain a feature that distractors lack (feature-present targets) is faster than search time to identify targets that are missing a feature that distractors have (feature-absent targets). In Experiment 1, participants listened to auditory oddball sequences consisting of two types of five-tone runs: the flat run, which consisted of five pure tones of the same frequency (the feature-absent target), and the change run, which consisted of four pure tones of the same frequency followed by a fifth tone of a different frequency (the feature-present target). In some sequences the change runs were common and the flat runs were rare (the feature-present condition), while in other sequences these roles were reversed (the feature-absent condition). Experiments 2, 3 and 4 used the same protocol, except that the visual stimuli consisted of rings (annuli) differing by some feature (colour in Experiment 2, contrast in Experiment 3, and shade in Experiment 4). In all four experiments, participants' reaction times (RT) and electrophysiological (P300) responses to rare target patterns were recorded. In Experiments 1, 2 and 3, the reaction time and P300 latencies for feature-present targets were significantly shorter than those for feature-absent targets, suggesting strong similarities between simultaneous visual search and sequential auditory and visual search. Furthermore, P300 responses to feature-present targets exhibited strong characteristics of both the P3a and P3b subcomponents, while feature-absent responses resembled only the P3b. By contrast, the results of the fourth experiment were inconclusive.
In Experiment 4 the saliency of the feature difference in the change runs was significantly reduced compared to that of the first three experiments, yielding longer reaction times and weaker P300 responses. Implications for the current understanding of search strategies associated with easy (feature-present) and difficult (feature-absent) searches, as well as the locus of the search asymmetry phenomenon, are discussed.<br>Arts, Faculty of<br>Psychology, Department of<br>Graduate
APA, Harvard, Vancouver, ISO, and other styles
50

Martin, Carlstedt. "Using NLP and context for improved search result in specialized search engines." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35181.

Full text
APA, Harvard, Vancouver, ISO, and other styles