
Dissertations / Theses on the topic 'Search tools'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Search tools.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Saunders, Tana. "Evaluation of Internet search tools instrument design." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/49957.

Full text
Abstract:
Thesis (MPhil)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: This study investigated Internet search tools / engines to identify desirable features that can be used as a benchmark or standard to evaluate web search engines. In the past, the Internet was thought of as a big spider's web, ultimately connecting all the bits of information. It has now become clear that this is not the case, and that the bow tie analogy is more accurate. This analogy suggests that there is a central core of well-connected pages, with links IN and OUT to other pages, tendrils and orphan pages. This emphasizes the importance of selecting a search tool that is well connected and linked to the central core. Searchers must take into account that not all search tools search the Invisible Web, and this should be reflected in the choice of search tool. Not all information found on the Web and Internet is reliable, current and accurate, and Web information must be evaluated in terms of authority, currency, bias, purpose of the Web site, etc. Different kinds of search tools are available on the Internet, such as search engines, directories, library gateways, portals, intelligent agents, etc. These search tools were studied and explored. A new categorization for online search tools consisting of Intelligent Agents, Search Engines, Directories and Portals / Hubs is suggested. This categorization distinguishes the major differences between the 21 kinds of search tools studied. Search tools / engines consist of spiders, crawlers, robots, indexes and search tool software. These search tools can be further distinguished by their scope, internal or external searches and whether they search Web pages or Web sites. Most search tools operate within a relationship with other search tools, and they often share results, spiders and databases. This relationship is very dynamic. The major international search engines have identifiable search features. The features of Google, Yahoo, Lycos and Excite were studied in detail. Search engines search for information in different ways, and present their results differently. These characteristics are critical to the recall/precision ratio. A well-planned search strategy will improve the precision/recall ratio and take the web user's capabilities and needs into account. Internet search tools / engines are not a panacea for all information needs, and they have pros and cons. The Internet search tool evaluation instrument was developed based on desirable features of the major search tools, and is considered a benchmark or standard for Internet search tools. This instrument, applied to three South African search tools, provided insight into the capabilities of the local search tools compared to the benchmark suggested in this study. The study concludes that the local search engines compare favorably with the major ones, but not enough so to use them exclusively. Further research into this aspect is needed. Intelligent agents are likely to become more popular, but the only certainty in the future of Internet search tools is change, change, and change.
AFRIKAANSE OPSOMMING: Hierdie studie het Internetsoekinstrumente/-enjins ondersoek met die doel om gewenste eienskappe te identifiseer wat as 'n standaard kan dien om soekenjins te evalueer. In die verlede is die Internet gesien as 'n groot spinnerak, wat uiteindelik al die inligtingsdeeltjies verbind. Dit het egter nou duidelik geword dat dit glad nie die geval is nie, en dat die strikdas analogie meer akkuraat is. Hierdie analogie stel voor dat daar 'n sentrale kern van goed gekonnekteerde bladsye is, met skakels IN en UIT na ander bladsye, tentakels en weesbladsye. Dit beklemtoon die belangrikheid om die regte soekinstrument te kies, naamlik een wat goed gekonnekteer is, en geskakel is met die sentrale kern van dokumente. Soekers moet in gedagte hou dat nie alle soekenjins in die Onsigbare Web soek nie, en dit behoort weerspieël te word in die keuse van die soekinstrument. Nie alle inligting wat op die Web en Internet gevind word is betroubaar, op datum en akkuraat nie, en Web-inligting moet geëvalueer word in terme van outoriteit, tydigheid, vooroordeel, doel van die Webruimte, ens. Verskillende soorte soekinstrumente is op die Internet beskikbaar, soos soekenjins, gidse, biblioteekpoorte, portale, intelligente agente, ens. Hierdie soekinstrumente is bestudeer en verken. 'n Nuwe kategorisering vir aanlyn soekinstrumente bestaande uit Intelligente Agente, Soekinstrumente, Gidse en Portale/Middelpunte word voorgestel. Hierdie kategorisering onderskei die hoofverskille tussen die 21 soorte soekinstrumente wat bestudeer is. Soekinstrumente/-enjins bestaan uit spinnekoppe, kruipers, robotte, indekse en soekinstrument sagteware. Hierdie soekinstrumente kan verder onderskei word deur hulle omvang, interne of eksterne soektogte en of hulle op Webbladsye of Webruimtes soek. Die meeste soekinstrumente werk in verhouding met ander soekinstrumente, en hulle deel dikwels resultate, spinnekoppe en databasisse. Hierdie verhouding is baie dinamies. Die hoof internasionale soekenjins het soekeienskappe wat identifiseerbaar is. Die eienskappe van Google, Yahoo en Excite is in besonderhede bestudeer. Soekenjins soek op verskillende maniere na inligting, en lê hulle resultate verskillend voor. Hierdie karaktereienskappe is krities vir die Herwinning/Presisie verhouding. 'n Goedbeplande soekstrategie sal die Herwinning/Presisie verhouding verbeter. Internet soekinstrumente/-enjins is nie die wondermiddel vir alle inligtingsbehoeftes nie, en het voor- en nadele. Die Internet soekinstrument evalueringsmeganisme se ontwikkeling is gebaseer op gewenste eienskappe van die hoof soekinstrumente, en word beskou as 'n standaard vir Internet soekinstrumente. Hierdie instrument, toegepas op drie Suid-Afrikaanse soekenjins, het insae verskaf in die doeltreffendheid van die plaaslike soekinstrumente soos vergelyk met die standaard wat in hierdie studie voorgestel word. In die studie word tot die slotsom gekom dat die plaaslike soekenjins gunstig vergelyk met die hoof soekenjins, maar nie genoegsaam sodat hulle eksklusief gebruik kan word nie. Verdere navorsing oor hierdie aspek is nodig. Intelligente Agente sal waarskynlik meer gewild word, maar die enigste sekerheid vir die toekoms van Internet soekinstrumente is verandering, verandering en nogmaals verandering.
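The recall/precision terminology that the abstract above relies on refers to two standard information-retrieval measures. As a minimal illustrative sketch (not taken from the thesis itself), both can be computed for a single query from the set of results a search tool returns and the set of documents known to be relevant:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query.

    retrieved: identifiers of the documents a search tool returned
    relevant:  identifiers of the documents judged relevant to the query
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)                        # relevant documents actually returned
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 3 of the 5 returned pages are relevant, out of 4 relevant pages overall.
print(precision_recall(["d1", "d2", "d3", "d4", "d5"], ["d1", "d3", "d5", "d9"]))  # (0.6, 0.75)
```

A well-planned search strategy, in the abstract's terms, is one that raises both numbers at once.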
2

Moghaddam, Mehdi Minachi. "Internet search techniques : using word count, links and directory structure as Internet search tools." Thesis, University of Bedfordshire, 2005. http://hdl.handle.net/10547/314080.

Full text
Abstract:
As the Web grows in size, it becomes increasingly important that ways are developed to maximise the efficiency of the search process and to index its contents with minimal human intervention. An evaluation is undertaken of current popular search engines, which use a centralised index approach. Using a number of search terms and metrics that measure similarity between sets of results, it was found that there is very little commonality between the outcomes of the same search performed using different search engines. A semi-automated system for searching the web is presented, the Internet Search Agent (ISA), which employs a method for indexing based upon the idea of "fingerprint types". These fingerprint types are based upon the text and links contained in the web pages being indexed. Three examples of fingerprint type are developed: the first concentrates upon the textual content of the indexed files, while the other two augment this with the use of links to and from these files. By looking at the results returned as a search progresses, in terms of the number of results and measures of their content for the effort expended, comparisons can be made between the three fingerprint types. The ISA model allows the searcher to be presented with results in context and potentially allows distributed searching to be implemented.
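The abstract does not name the similarity metrics used to compare result sets from different engines; one common choice for this kind of comparison is the Jaccard overlap between the top-N URL sets, sketched below with hypothetical engine results (the URLs are placeholders, not data from the thesis):

```python
def jaccard(results_a, results_b):
    """Jaccard similarity between two sets of result URLs (1.0 = identical, 0.0 = disjoint)."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical top-5 results for the same query from two different engines.
engine_1 = ["example.org/a", "example.org/b", "example.org/c", "example.org/d", "example.org/e"]
engine_2 = ["example.org/c", "example.org/f", "example.org/g", "example.org/a", "example.org/h"]

print(jaccard(engine_1, engine_2))  # 2 shared URLs out of 8 distinct -> 0.25
```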
3

Manicassamy, Jayanthi, and P. Dhavachelvan. "VirusPKT: A Search Tool For Assimilating Assorted Acquaintance For Viruses." Engg Journals Publications, 2009. http://hdl.handle.net/10150/212814.

Full text
Abstract:
Viruses utilize various means to circumvent immune detection in biological systems. Several mathematical models have been investigated to describe viral dynamics in the biological systems of humans and various other species. One common strategy for the evasion and recognition of viruses is to build acquaintance with them through search engines. From this perspective, a search tool has been developed to provide a wider comprehension of the structure and other details of viruses, as described in this paper. The tool provides adequate knowledge of the evolution and construction of viruses and of their functions through information extraction from various websites. Apart from this, the tool aims to automate the activities associated with it in a self-maintainable, self-sustainable, proactive manner, which has been evaluated through the analysis discussed in this paper.
4

Bani-Ahmad, Sulieman Ahmad. "RESEARCH-PYRAMID BASED SEARCH TOOLS FOR ONLINE DIGITAL LIBRARIES." Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1207228115.

Full text
5

Klein, Abigail (Abigail B. ). "Search tools for scaling expert code review to the global classroom." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/105991.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 57-58).
This thesis aims to answer the question "How can teachers of online classrooms give more qualitative feedback to students?" We narrow the scope of this question to an online software engineering class in which a major component is code review. We built two search tools that give teachers better coverage of student code. The first tool, Comment Search, allows students and staff to reuse any comment they previously wrote when reviewing another student's code. Staff can reuse any comment written by any staff member as well. After deploying Comment Search in a classroom for a full semester, we found that students and staff used this tool to write higher quality comments. We also found that many reused comments were about similar patterns in code. This inspired the second tool, Code Search, which allows teachers to search for sections of student code that contain a desired pattern. Preliminary results of Code Search are promising: for the queries that Code Search is built for, Code Search returns nearly all relevant results. Together, Comment Search and Code Search offer teachers the ability to give meaningful comments to many more students than otherwise possible.
by Abigail Klein.
M. Eng.
6

O'loughlin, Benjamin. "Evaluation of Search and Rescue Planning Tools on the West Florida Shelf." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6557.

Full text
Abstract:
The Coast Guard conducts over 20,000 search and rescue cases a year with approximately 5% of them occurring within the coastal waters of the West Florida Shelf (WFS). Each search effort is planned using the Coast Guard’s Search and Rescue Optimal Planning System (SAROPS) which uses model inputs to create composite probability distributions based on the results of Monte Carlo projections of thousands of particle trajectories. However, SAROPS is limited by the quality of model inputs and their associated errors. This study utilizes observations from three surface drifter deployments on the WFS to evaluate the effectiveness of available surface current models, including one model not currently in use by the Coast Guard. Additionally, the performance of high-frequency (HF) Radar observations is evaluated against the models. The HF Radar root-mean-square errors (RMSE) were found to be on the order of 10 cm/s, and a model created with objectively mapped HF Radar data was found to out-perform all available models. Additionally, a comparison of model skills (using a normalized Lagrangian separation method) showed the West Florida Coastal Ocean Model (WFCOM) to have better skill on both the inner and outer shelf regions of the WFS when compared to other models.
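The root-mean-square errors quoted above compare modelled surface currents with observations. A minimal sketch of how an RMSE in cm/s would be computed for one velocity component is given below; the arrays are hypothetical placeholders, not data from the study:

```python
import numpy as np

def rmse(predicted, observed):
    """Root-mean-square error between two velocity series (same units, e.g. cm/s)."""
    predicted, observed = np.asarray(predicted, float), np.asarray(observed, float)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Hypothetical eastward velocity component (cm/s) from a model and from HF Radar or drifters.
u_model    = [12.0, 15.5, 9.8, 20.1, 17.3]
u_observed = [10.5, 18.0, 8.0, 25.0, 15.0]

print(f"RMSE = {rmse(u_model, u_observed):.1f} cm/s")
```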
7

Harris, Martyn. "The anatomy of a search and mining system for digital humanities : Search And Mining Tools for Language Archives (SAMTLA)." Thesis, Birkbeck (University of London), 2017. http://bbktheses.da.ulcc.ac.uk/236/.

Full text
Abstract:
Humanities researchers are faced with an overwhelming volume of digitised primary source material, and "born digital" information, of relevance to their research as a result of large-scale digitisation projects. The current digital tools do not provide consistent support for analysing the content of digital archives that are potentially large in scale, multilingual, and come in a range of data formats. The current language-dependent, or project-specific, approach to tool development often puts the tools out of reach for many research disciplines in the humanities. In addition, the tools can be incompatible with the way researchers locate and compare the relevant sources. For instance, researchers are interested in shared structural text patterns, known as "parallel passages", that describe a specific cultural, social, or historical context relevant to their research topic. Identifying these shared structural text patterns is challenging due to their repeated yet highly variable nature, as a result of differences in the domain, author, language, time period, and orthography. The contribution of the thesis is a novel infrastructure that directly addresses the need for generic, flexible, extendable, and sustainable digital tools that are applicable to a wide range of digital archives and research in the humanities. The infrastructure adopts a character-level n-gram Statistical Language Model (SLM), stored in a space-optimised k-truncated suffix tree data structure, as its underlying data model. A character-level n-gram model is a relatively new approach that is competitive with word-level n-gram models, but has the added advantage that it is domain- and language-independent, requiring little or no preprocessing of the document text, unlike word-level models that require some form of language-dependent tokenisation and stemming. Character-level n-grams capture word-internal features that are ignored by word-level n-gram models, which provides greater flexibility in addressing the information need of the user through tolerant search, and compensation for erroneous query specification or spelling errors in the document text. Furthermore, the SLM provides a unified approach to information retrieval and text mining, where traditional approaches have tended to adopt separate data models that are often ad hoc or based on heuristic assumptions. In addition, the performance of the character-level n-gram SLM was formally evaluated through crowdsourcing, which demonstrates that the retrieval performance of the SLM is close to human-level performance. The proposed infrastructure supports the development of Samtla (Search And Mining Tools for Language Archives), which provides humanities researchers with digital tools for search, browsing, and text mining of digital archives in any domain or language, within a single system. Samtla supersedes many of the existing tools for humanities researchers by supporting the same or similar functionality, but with a domain-independent and language-independent approach. The functionality includes a browsing tool constructed from the metadata and named entities extracted from the document text, and a hybrid recommendation system for recommending related queries and documents. Some tools, however, are novel and were developed in response to the specific needs of the researchers, such as the document comparison tool for visualising shared sequences between groups of related documents.
Furthermore, Samtla is the first practical example of a system with a SLM as its primary data model that supports the real research needs of several case studies covering different areas of research in the humanities.
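As a rough, language-agnostic illustration of the character-level n-gram statistical language model that Samtla is built on, the sketch below counts character trigrams and scores a query string with simple add-one smoothing; it uses plain dictionaries rather than the thesis's space-optimised k-truncated suffix tree, so it shows only the modelling idea, not the actual data structure:

```python
from collections import defaultdict
import math

def train_char_ngrams(text, n=3):
    """Count character n-grams and their (n-1)-character histories."""
    ngrams, histories = defaultdict(int), defaultdict(int)
    padded = " " * (n - 1) + text
    for i in range(len(text)):
        history, char = padded[i:i + n - 1], padded[i + n - 1]
        ngrams[history + char] += 1
        histories[history] += 1
    return ngrams, histories

def log_score(query, ngrams, histories, n=3, vocab_size=256):
    """Log-probability of a query under the model, with add-one smoothing for tolerant search."""
    padded = " " * (n - 1) + query
    score = 0.0
    for i in range(len(query)):
        history, char = padded[i:i + n - 1], padded[i + n - 1]
        score += math.log((ngrams.get(history + char, 0) + 1) /
                          (histories.get(history, 0) + vocab_size))
    return score

corpus = "search and mining tools for language archives"
model = train_char_ngrams(corpus)
print(log_score("language archive", *model))   # higher (less negative) = better match
```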
8

RAYAPROLU, SRINIVAS. "USING COM OBJECTS PROGRAMMING FOR ENHANCED LIBRARY SEARCH APPLICATIONS." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1029441248.

Full text
9

Dubey, Anshul. "Search and Analysis of the Sequence Space of a Protein Using Computational Tools." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14115.

Full text
Abstract:
A new approach to the process of Directed Evolution is proposed, which utilizes different machine learning algorithms. Directed Evolution is a process of improving a protein for catalytic purposes by introducing random mutations in its sequence to create variants. Through these mutations, Directed Evolution explores the sequence space, which is defined as all the possible sequences for a given number of amino acids. Each variant sequence is assigned to one of two classes, positive or negative, according to its activity or stability. By employing machine learning algorithms for feature selection on the sequences of these variants of the protein, attributes, or amino acids in its sequence, that are important for the classification into positive or negative can be identified. Support Vector Machines (SVMs) were utilized to identify the important individual amino acids for any protein, which have to be preserved to maintain its activity. The results for the case of beta-lactamase show that such residues can be identified with high accuracy while using a small number of variant sequences. Another class of machine learning problems, Boolean Learning, was used to extend this approach to identifying interactions between the different amino acids in a protein's sequence using the variant sequences. It was shown through simulations that such interactions can be identified for any protein with a reasonable number of variant sequences. For experimental verification of this approach, two fluorescent proteins, mRFP and DsRed, were used to generate variants, which were screened for fluorescence. Using Boolean Learning, an interacting pair was identified, which was shown to be important for the fluorescence. It was also shown through experiments and simulations that knowing such pairs can increase the fraction of active variants in the library. A Boolean Learning algorithm was also developed for this application, which can learn Boolean functions from data in the presence of classification noise.
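The abstract describes using SVMs for feature selection over variant sequences to flag amino-acid positions that must be conserved. A hedged sketch of that idea, with toy sequences, a one-hot position encoding and a linear SVM whose per-position weights indicate importance (this is not the author's actual pipeline), could be:

```python
import numpy as np
from sklearn.svm import LinearSVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(sequence):
    """Encode a protein sequence as a flat one-hot vector (positions x amino acids)."""
    vec = np.zeros((len(sequence), len(AMINO_ACIDS)))
    for i, aa in enumerate(sequence):
        vec[i, AMINO_ACIDS.index(aa)] = 1.0
    return vec.ravel()

# Toy variant library: 1 = active, 0 = inactive; position 2 carries the signal here.
variants = ["ACDA", "ACDG", "GCDA", "ACDC", "ACAA", "GCAG", "ACAC", "AAAA"]
labels   = [1,      1,      1,      1,      0,      0,      0,      0]

X = np.array([one_hot(s) for s in variants])
clf = LinearSVC(C=1.0).fit(X, labels)

# Sum of absolute SVM weights per position: larger means the position matters more.
importance = np.abs(clf.coef_[0]).reshape(len(variants[0]), len(AMINO_ACIDS)).sum(axis=1)
print("per-position importance:", np.round(importance, 2))
```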
10

Wormet, Jody R. "Federated Search Tools in Fusion Centers : Bridging Databases in the Information Sharing Environment." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/17480.

Full text
Abstract:
Approved for public release; distribution is unlimited
This research utilized a semi-structured survey instrument delivered to subject matter experts within the national network of fusion centers and employed a constant comparison method to analyze the survey results. This smart practice exploration informed through an appreciative inquiry lens found considerable variation in how fusion centers plan for, gather requirements, select and acquire federated search tools to bridge disparate databases. These findings confirmed the initial hypothesis that fusion centers have received very little guidance on how to bridge disconnected databases to enhance the analytical process. This research should contribute to the literature by offering a greater understanding of the challenges faced by fusion centers, when considering integrating federated search tools; by evaluating the importance of the planning, requirements gathering, selection and acquisition processes for integrating federated search tools; by acknowledging the challenges faced by some fusion centers during these integration processes; and identifying possible solutions to mitigate those challenges. As a result, the research will be useful to individual fusion centers and more broadly, the National Fusion Center Association, which provides leadership to the national network of fusion centers by sharing lessons learned, smart practices, and other policy guidance.
11

Sheen, Timothy M. "Tools for portable parallel image processing." Thesis, University of Aberdeen, 1999. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU112832.

Full text
Abstract:
The computational demands of real-time image processing often dictate the use of techniques such as parallel processing to meet required performance. This thesis considers a range of technology which may be used to accelerate image processing operations. An occam compiler is ported to a PowerPC based parallel computer. A multiprocessor configuration tool and Run Time System is developed, allowing occam programs to be distributed over an arbitrary sized network of PowerPC microprocessors. Code optimization techniques for image processing operations are investigated, with the development of a post-compilation code optimizer. The optimizer provides performance increases between 37% and 450% for a variety of image processing algorithms. The applicability of these tools is demonstrated with two image processing applications, micro-biological rapid imaging and sediment texture analysis. Edge detection, region merging and shape analysis algorithms are discussed in the context of the applications. The image processing algorithms are implemented in occam and performance is compared on serial and parallel platforms. The algorithms are then ported to a hardware implementation in a custom computing device, based on a field programmable gate array (FPGA), using the Handel hardware compilation system. The issues involved with this porting are discussed, including the compromises which must be considered when designing for a size constrained hardware platform. Amongst the issues considered are restricted precision data, low level parallelism and algorithmic simplifications. To provide performance equivalent to the hardware, between 5 and 10 processors would be required on the parallel machine, with considerably greater cost, size and power consumption.
12

Hunt, Neil. "Tools for image processing and computer vision." Thesis, University of Aberdeen, 1990. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU025003.

Full text
Abstract:
The thesis describes progress towards the construction of a seeing machine. Currently, we do not understand enough about the task to build more than the simplest computer vision systems; what is understood, however, is that tremendous processing power will surely be involved. I explore the pipelined architecture for vision computers, and I discuss how it can offer both powerful processing and flexibility. I describe a proposed family of VLSI chips based upon such an architecture, each chip performing a specific image processing task. The specialisation of each chip allows high performance to be achieved, and a common pixel interconnect interface on each chip allows them to be connected in arbitrary configurations in order to solve different kinds of computational problems. While such a family of processing components can be assembled in many different ways, a programmable computer offers certain advantages, in that it is possible to change the operation of such a machine very quickly, simply by substituting a different program. I describe a software design tool which attempts to secure the same kind of programmability advantage for exploring applications of the pipelined processors. This design tool simulates complete systems consisting of several of the proposed processing components, in a configuration described by a graphical schematic diagram. A novel time skew simulation technique developed for this application allows coarse grain simulation for efficiency, while preserving the fine grain timing details. Finally, I describe some experiments which have been performed using the tools discussed earlier, showing how the tools can be put to use to handle real problems.
13

Ding, Jie, and Lin Yu. "In search of continuous improvement implementation Tools : results of the 2nd international continuous improvement survey." Thesis, Högskolan i Gävle, Ämnesavdelningen för industriell ekonomi, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-5740.

Full text
Abstract:
The overall purpose of this paper is to investigate the implementation of Continuous Improvement (CI) in companies from Sweden, the Netherlands, Spain, Italy, Australia and the United Kingdom. This paper used the 2nd international CI survey to analyze CI behavior. The analysis was made by comparing the tools in clusters defined by different CI abilities. The major finding is that the usage of different CI tools depends on the different CI abilities.
14

Mowbray, John Alexander. "The role of networking and social media tools during job search : an information behaviour perspective." Thesis, Edinburgh Napier University, 2018. http://researchrepository.napier.ac.uk/Output/1516318.

Full text
Abstract:
The research reported in this thesis explores job search networking amongst 16-24 year olds living in Scotland, and the role of social media platforms (i.e. Facebook, Twitter, and LinkedIn) during this process. Networking is treated as an information behaviour; reflecting this, the study is underpinned by a prominent model from the domain of information science. A sequential, mixed methods approach was applied to gather data. This included the use of interviews, focus groups, and a survey questionnaire. The interviews incorporated ego-centric network methods to develop a relational perspective of job search networking. The findings show that young people accrue different types of information from network contacts which can be useful for all job search tasks. Indeed, frequent networking offline and on social media is associated with positive job search outcomes. This is especially true of engaging with family members and acquaintances, and frequent use of Facebook for job search purposes. However, demographic and other contextual factors have a substantial impact on the nature of networking behaviours, and the extent to which they can influence outcomes. Additionally, young jobseekers face a range of barriers to networking, do not always utilise their networks thoroughly, and are more likely to use social media platforms as supplementary tools for job search. A key contribution of this work is that it provides a detailed insight into the process of networking that has been neglected in previous studies. Its focus on social media also reveals a new dimension to the concept which has received little attention in the job search literature. Given its focus on young jobseekers living in Scotland, the findings have also been used to create a detailed list of recommendations for practitioners.
15

Tabaza, Yahia Zakaria Abdelqader. "Implementing metabolomics tools in the search for new anti-proliferative agents from the plant-associated endophytes." Thesis, University of Strathclyde, 2018. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=29567.

Full text
Abstract:
In the search for new anticancer agents of natural origin against breast and lung cancer (ZR-75 and A549 cancer cell lines, respectively), plant-associated endophytes could be a good source for bioactive secondary metabolites. Twenty six endophytes were obtained from four different Jordanian plants; Anchusa strigosa, Anthemis palestina, Euphorbia peplus and Rumex cyprius. Internal transcribed spacer (ITS) gene sequencing was implemented to identify the obtained endophytes. Based on their biological activity and chemical profile, three endophytes namely Curvularia australiensis, Chaetomium subaffine and Fusarium acuminatum were chosen for the scale-up. These endophytes were cultured in liquid and rice media at different time periods to optimise their growth and production of compounds, employing both NMR and mass spectrometry-based metabolomics. The medium that afforded better yield, more chemical diverse extract and more potent biological activity was chosen for scaling-up purposes. Each of the scaled-up extracts was subjected to liquid-liquid partitioning followed by fractionation using a high-throughput flash-chromatography system. The fractions obtained from the first chromatography step were tested in-vitro against both breast and lung cancer (ZR-75 and A549 cell lines, respectively) and analysed using both proton nuclear magnetic resonance (NMR) and liquid chromatography-high resolution mass spectrometry (LC-HRMS). The HRMS data were processed with MZmine then subjected to Orthogonal Partial Least Square-Discriminant Analysis (OPLS-DA). The OPLS-DA results pinpointed the biologically active secondary metabolites. Metabolomics-guided isolation work targeted the bioactive secondary metabolites. As a result, five new compounds and ten known compounds were obtained from the three scaled-up endophytes. The isolated compounds were elucidated by employing 1D and 2D NMR then tested against ZR-75 and A549 cell lines. Twelve compounds were found active against ZR-75 cell line, which included five new compounds. Six compounds were found active against A549 cell line that included one of the new natural products isolated.
16

Yang, Da-Ming. "Development of novel intelligent condition monitoring procedures for rolling element bearings." Thesis, University of Aberdeen, 2001. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU151909.

Full text
Abstract:
The primary aim of this thesis is to develop a novel procedure for an intelligent automatic diagnostic condition monitoring system for rolling element bearings. The applicability of this procedure is demonstrated by its implementation in a particular electric motor drive system. The novel bearing condition diagnostic procedure developed involves three stages combining the merits of advanced signal processing techniques, feature extraction methods and artificial neural networks. This procedure is the effective combination of these techniques and methods in a holistic approach to the rolling element bearing problem which provides the novelty in this thesis. Maintenance costs account for an extremely large proportion of the operating costs of machinery. In addition, machine breakdowns and consequent downtime can severely affect the productivity of factories and the safety of products. It is therefore becoming increasingly important for industries to monitor their equipment systematically in order to reduce the number of breakdowns and to avoid unnecessary costs and delays caused by repair. The rolling element bearing is an extremely widespread component in industrial rotating machinery and a large number of problems arise from faulty bearings. Therefore, proper monitoring of bearing condition is highly cost-effective in reducing operating cost. The advanced signal processing techniques used here are bispectral-based and wavelet-based analyses. The bispectral-based procedures examined are the bispectrum, the bicoherence, the bispectrum diagonal slice, the bicoherence diagonal slice, the summed bispectrum and the summed bicoherence. The wavelet-based procedure uses the Morlet wavelet. These methods greatly enhance the ability of an automated diagnostic process by linking the increased capability for signal analysis to the predictive capability of artificial neural networks. The bearing monitoring scheme based on bispectral analysis is shown to provide greater insight into the structure of bearing vibration signals and to offer more diagnostic information than conventional power spectral analysis. The wavelet analysis provides a multi-resolution, time-frequency approach to extract information from the bearing vibration signatures. In order to effectively interpret the wavelet map, the time-frequency domain is used instead of the time-scale domain by plotting the associated time trace and power spectrum.
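As a small, hedged sketch of the bispectral quantities listed above, the following is a generic direct (FFT-based) bispectrum estimate averaged over signal segments; it illustrates the quantity B(f1, f2) = <X(f1) X(f2) X*(f1+f2)> on a toy vibration signal and is not the author's specific implementation:

```python
import numpy as np

def bispectrum(signal, seg_len=256):
    """Direct bispectrum estimate B(f1, f2) averaged over non-overlapping segments."""
    segments = [signal[i:i + seg_len] for i in range(0, len(signal) - seg_len + 1, seg_len)]
    nfreq = seg_len // 2
    B = np.zeros((nfreq, nfreq), dtype=complex)
    for seg in segments:
        X = np.fft.fft(seg * np.hanning(seg_len))
        for f1 in range(nfreq):
            for f2 in range(nfreq - f1):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return B / max(len(segments), 1)

# Toy vibration signal: tones at 60, 90 and 150 Hz (150 = 60 + 90, i.e. coupled) plus noise.
t = np.arange(8192) / 1000.0
x = (np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 90 * t)
     + 0.5 * np.sin(2 * np.pi * 150 * t) + 0.1 * np.random.randn(t.size))
print("peak bispectral magnitude:", np.abs(bispectrum(x)).max())
```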
17

Wiratunga, Nirmalie Chandrika. "Informed selection and use of training examples for knowledge refinement." Thesis, Robert Gordon University, 2000. http://hdl.handle.net/10059/2346.

Full text
Abstract:
Knowledge refinement tools seek to correct faulty rule-based systems by identifying and repairing faults indicated by training examples that provide evidence of faults. This thesis proposes mechanisms that improve the effectiveness and efficiency of refinement tools by the best use and selection of training examples. The refinement task is sufficiently complex that the space of possible refinements demands a heuristic search. Refinement tools typically use hill-climbing search to identify suitable repairs but run the risk of getting caught in local optima. A novel contribution of this thesis is solving the local optima problem by converting the hill-climbing search into a best-first search that can backtrack to previous refinement states. The thesis explores how different backtracking heuristics and training example ordering heuristics affect refinement effectiveness and efficiency. Refinement tools rely on a representative set of training examples to identify faults and influence repair choices. In real environments it is often difficult to obtain a large set of training examples, since each problem-solving task must be labelled with the expert's solution. Another novel aspect introduced in this thesis is informed selection of examples for knowledge refinement, where suitable examples are selected from a set of unlabelled examples, so that only that subset needs to be labelled. Conversely, if a large set of labelled examples is available, it still makes sense to have mechanisms that can select a representative set of examples beneficial for the refinement task, thereby avoiding unnecessary example processing costs. Finally, an experimental evaluation of example utilisation and selection strategies on two artificial domains and one real application is presented. Informed backtracking is able to effectively deal with local optima by moving search to more promising areas, while informed ordering of training examples reduces search effort by ensuring that more pressing faults are dealt with early on in the search. Additionally, example selection methods achieve similar refinement accuracy with significantly fewer examples.
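A compact way to see the difference between hill-climbing and the best-first search with backtracking proposed here is the generic sketch below; the scoring and neighbour functions are toy placeholders, not the thesis's refinement operators:

```python
import heapq

def best_first_search(start, neighbours, score, max_steps=1000):
    """Best-first search that can backtrack: every generated state stays on the priority
    queue, so the search can return to an earlier, more promising state instead of
    stopping at a local optimum the way pure hill-climbing does."""
    frontier = [(-score(start), start)]            # max-heap via negated scores
    best_state, best_score = start, score(start)
    seen = {start}
    for _ in range(max_steps):
        if not frontier:
            break
        neg, state = heapq.heappop(frontier)       # most promising state found so far
        if -neg > best_score:
            best_state, best_score = state, -neg
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), nxt))
    return best_state, best_score

# Toy example: hill-climbing from 0 would stop at the local optimum x = 2;
# best-first keeps older states on the frontier and reaches the global optimum x = 10.
score = lambda x: -(x - 10) ** 2 + (20 if x == 2 else 0)
print(best_first_search(0, lambda x: [x - 1, x + 1], score))   # -> (10, 0)
```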
18

Ding, Jie, and Lin Yu. "In search of continuous improvement implementation Tools : results of the 2nd international continuous improvement survey." Thesis, University of Gävle, Ämnesavdelningen för industriell ekonomi, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-5740.

Full text
Abstract:

The overall purpose of this paper is to investigate the implementation of Continuous Improvement (CI) in companies from Sweden, the Netherlands, Spain, Italy, Australia and the United Kingdom. This paper used the 2nd international CI survey to analyze CI behavior. The analysis was made by comparing the tools in clusters defined by different CI abilities. The major finding is that the usage of different CI tools depends on the different CI abilities.

19

Bowden, Jared Newell. "Using chromatographic and mass spectrometry tools to probe albumin and its cargos: in search of understanding type II diabetes." Diss., Montana State University, 2011. http://etd.lib.montana.edu/etd/2011/bowden/BowdenJ0511.pdf.

Full text
Abstract:
We measured molecules carried as cargos on the abundant blood protein human serum albumin (HSA) in patients with newly diagnosed, untreated type II diabetes (T2D) compared to healthy controls (HC). The HSA cargos measured included lipids, minerals, peptides, and metabolites. Differences in these cargos associated with T2D were measured, using chromatography and mass spectrometry, seeking to identify biological markers that may enhance early diagnosis of T2D. An extrinsic fluorescent probe of binding sites on HSA, ANS, revealed that there were distinct differences in loading of hydrophobic cargo between HC and systemic lupus erythematosus, T2D, and Lyme disease plasma samples. A decrease in mineral levels on HSA was also measured in T2D plasma compared to healthy control plasma, using ICP-MS. Zinc ions showed the largest changes and were reduced threefold in T2D. The hydrophobic cargo of HSA revealed a decrease in HSA-associated fatty acids in T2D, measured by GCMS using negative chemical ionization. In this same GCMS study, new classes of glycine-containing compounds bound to HSA were found to be increased twofold in T2D in the hydrophobic extract of HSA. A metabolomic study using RP-uHPLC QTOF MS in both positive and negative ionization modes examined differences in the hydrophobic extract of whole plasma in T2D compared to healthy controls. Increased levels of branched chain amino acids were found in T2D compared to HC. Decreased levels of phosphatidylcholines, phosphatidylethanolamines, and vitamin D3 metabolites were found in T2D compared to HC. The results suggest that the HSA cargo in T2D, SLE, and other disease states may provide new diagnostic markers and lead to a deeper understanding of the mechanisms of disease in humans.
20

Al-Sammarraie, Mareh Fakhir. "An Empirical Investigation of Collaborative Web Search Tool on Novice's Query Behavior." UNF Digital Commons, 2017. http://digitalcommons.unf.edu/etd/764.

Full text
Abstract:
In the past decade, research efforts dedicated to studying the process of collaborative web search have been on the rise. Yet, a limited number of studies have examined the impact of collaborative information search processes on novices' query behaviors. Studying and analyzing factors that influence web search behaviors, specifically users' patterns of queries when using collaborative search systems, can help with making query suggestions for group users. Improvements in user query behaviors and system query suggestions help in reducing search time and increasing query success rates for novices. This thesis investigates the influence of collaboration between experts and novices, as well as the use of a collaborative web search tool, on novices' query behavior. We used SearchTeam as our collaborative search tool. This empirical study involves four collaborative team conditions: SearchTeam and expert-novice team, SearchTeam and novice-novice team, traditional and expert-novice team, and traditional and novice-novice team. We analyzed participants' query behavior in two dimensions: quantitatively (e.g. the query success rate) and qualitatively (e.g. the query reformulation patterns). The findings of this study reveal that the successful query rate is higher in expert-novice collaborative teams who used the collaborative search tools. Participants in expert-novice collaborative teams who used the collaborative search tools required less time to finalize all tasks compared to expert-novice collaborative teams who used the traditional search tools. Self-issued queries and chat logs were the major sources of terms for novice participants in expert-novice collaborative teams who used the collaborative search tools. Novices who were part of expert-novice pairs using the collaborative search tools employed New and Specialization more often as query reformulation patterns. The results of this study contribute to the literature by providing a detailed investigation of the influence of utilizing a collaborative search tool (SearchTeam) in the context of software troubleshooting and development. This study highlights the possible collaborative information seeking (CIS) activities that may occur among software development interns and their mentors. Furthermore, our study reveals that there are specific features, such as awareness and built-in instant messaging (IM), offered by SearchTeam that can promote CIS activities among participants and help increase novices' query success rates. Finally, we believe the use of CIS tools, designed to support collaborative search actions in big software development companies, has the potential to improve novices' overall query behavior and search strategies.
21

Rodgers, Paul John. "Natural tracers as tools for upscaling hydrological flow path understanding in two mesoscale Scottish catchments." Thesis, University of Aberdeen, 2004. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU188552.

Full text
Abstract:
Natural geochemical and isotopic tracers were used to assess and model hydrological processes in two mesoscale (>200km2) catchments in the Scottish highlands, the Feshie and Feugh, in order to upscale understanding from the traditional headwater catchment scale (<10km2). Gran alkalinity was used as a geochemical tracer to distinguish acidic, organically enriched soil water from more alkaline groundwater. Spatial variations in alkalinity reflected the influence of different hydrological sources at the sub-catchment and catchment-wide scale, whereas temporal alkalinity variation at different flows over the hydrological year and over shorter event timescales provided information on the influence of hydrological flow paths. The well-defined relationship between alkalinity and flow meant that two-component end member mixing analysis could be used to quantify the influence of these hydrological flow paths and sources over a range of scales and contrasting catchment characteristics. These techniques were then used to examine more specific groundwater-surface water interactions in the River Feshie's extensive braided section. These interactions were seen to exert a significant and dynamic impact on the hydrochemistry of main stem surface flows and as a result, the hydrological and hydroecological functioning of the catchment as a whole. Stable isotope (18O) variations were also employed as a natural tracer to further investigate hydrological flow paths and provide preliminary catchment residence time estimates. These estimates indicated the relative dominance of catchment characteristics over the more general influence of scale in determining the age sources of catchment runoff. This represented one of the first such assessments of stable isotopic tracers for investigating catchment hydrology other than at the headwater scale. The natural tracer approach therefore provided considerable insight into mesoscale catchment hydrological functioning that would not have been feasible through more conventional small-scale hydrometric investigation. This has direct utility for the sustainable management of such catchment systems as well as highlighting the potential for applying such tracer investigations in order to help structure and validate more accurate hydrological models.
22

Bernström, Adam. "Digitala verktyg - framtidens lärande? : En kvalitativ studie om vad lärare anser om digitala verktyg, dess möjligheter och utmaningar." Thesis, Högskolan i Jönköping, Högskolan för lärande och kommunikation, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-44925.

Full text
Abstract:
Studien som har bedrivits är ett resultat av den pågående digitaliseringen som sker i både samhället och skolan. Revideringar från Skolverket har försett läroplanen med mer integrering av digitala verktyg och möjligheter för hur eleverna ska kunna använda samt navigera på dem. Denna förändring har även gjort att kraven för lärares digitala kompetens måste utökas. Detta leder in på syftet med denna studie då den har som avsikt att söka en djupare förståelse för hur lärare upplever digitala verktyg i svenskundervisningen samt hur planering och utförande påverkas. Forskningsfrågorna som studien grundar sig i är: Hur använder sig några lärare av digitala verktyg i svenskundervisningen? Vilka möjligheter och utmaningar anser några lärare digitala verktyg medför i svenskundervisningen? För att få en djupare förståelse och höra djupgående argument har studien bedrivits i form av en kvalitativ studie där sex verksamma lärare har intervjuats. All insamlade data har transkriberats och analyserats genom tematisk analys för att få fram ett resultat. Studiens resultat visar att det främst är kompetensen som styr användningen av digitala verktyg i undervisningen. Lärarna anser att de har bristande kunskaper om de digitala verktygen de använder vilket kan leda till att de känner viss rädsla för att använda digitala verktyg. De möjligheter som lärarna beskriver är ökad motivation hos eleven, effektivisering samt att det verkar som ett bra stöd för elever i behov. De utmaningar som lärarna uppger är störningsmomenten samt bristen på digitala verktyg.
The present study is a result of the ongoing digitalization that has been taking place in both society and the school. Revisions by the National Agency of Education have provided the curriculum with more integration of digital tools and of how the students should use and navigate them. This change has also increased the demands on teachers' digital skills. This leads to the purpose of this study, as it is intended to seek a deeper understanding of how teachers experience digital tools in Swedish teaching and how planning and execution are affected. The research questions on which the study is based were: How do some teachers use digital tools in Swedish teaching? What opportunities and challenges do some teachers consider that digital tools entail in Swedish teaching? In order to gain a deeper understanding and hear in-depth arguments, the study has been conducted in the form of a qualitative study in which six active teachers have been interviewed. All collected data has been transcribed and analyzed on the basis of thematic analysis to produce a result. The study's results show that it is primarily competence that controls the use of digital tools in teaching. The teachers believe that they have a lack of knowledge about the digital tools they use, which can lead to them feeling some fear of using digital tools. The opportunities that the teachers describe are increased motivation among the pupils, efficiency improvements, and that digital tools seem to be a good support for students in need. The challenges listed by the teachers are the disturbances and the lack of digital tools.
23

Clark, John A. "Metaheuristic search as a cryptological tool." Thesis, University of York, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247755.

Full text
24

Bielko, Juneta. "Internetinės rinkodaros įtaka viešbučių verslui Lietuvoje." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120703_132256-39354.

Full text
Abstract:
Magistro baigiamajame darbe išanalizuota ir įvertinta Lietuvos viešbučiuose taikomų internetinės rinkodaros priemonių efektyvumas, iškeltos pagrindinės internetinės rinkodaros naudojimo problemos bei pateikti siūlymai, kaip šias problemas spręsti. Pirmoje darbo dalyje, remiantis įvairiais Lietuvos ir užsienio autoriais pateikiama internetinės rinkodaros samprata, teoriniu aspektu tiriamas internetinės rinkodaros turinys, tikslai, komunikacijos priemonės, sąsaja su socialiniais tinklais. Taip pat nagrinėjamos ir optimizavimo paieškos sistemoms bei ryšių su klientais valdymo sąvokos. Antroje dalyje analizuojamos ir vertinamos internetinės rinkodaros tendencijos Lietuvoje ir pasaulyje. Trečioje darbo dalyje aptariama tyrimų metodika bei organizavimas. Ketvirtoje dalyje nagrinėjama paslaugų gavėjų patirtis ir ekspertų požiūris į internetinę rinkodarą, jos priemones bei tų priemonių taikymo efektyvumą. Išnagrinėjus teorinius internetinės rinkodaros aspektus ir tendencijas bei abiejų pusių požiūrius, yra pateikiamso išvados bei siūlymai.
This Master's thesis analyzes and evaluates the efficiency of the internet marketing measures used in Lithuanian hotels, brings up the main challenges in the use of online marketing and provides suggestions on how to solve these problems. In the first part of the work, using the literature of different Lithuanian and foreign authors, the concept of online marketing is presented, and the content of online marketing as well as its objectives, means of communication and interface with social networks are studied from a theoretical point of view. This section also analyzes the concepts of search engine optimization and customer relationship management. The second part analyzes and assesses online marketing trends in Lithuania and worldwide. The third chapter discusses the research methodology and organization. The fourth section analyzes the users' experience and the experts' approach to online marketing, its tools and the effectiveness of their application. Conclusions and suggestions are presented after the theoretical analysis of online marketing aspects, trends and the attitudes of both sides.
25

Cavalcanti, Yguaratã Cerqueira. "A bug report analysis and search tool." Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/2027.

Full text
Abstract:
Software maintenance and evolution are activities characterized by their enormous cost and low execution speed. Nevertheless, they are unavoidable activities for guaranteeing software quality: almost every successful piece of software encourages users to request changes and improvements. Sommerville is even more emphatic and says that changes in software projects are a fact. In addition, different studies have stated over the years that software maintenance and evolution activities are the most expensive of the development cycle, accounting for up to about 90% of the costs. All these peculiarities of the software maintenance and evolution phase lead academia and industry to constantly investigate new solutions to reduce the costs of these activities. In this context, Software Configuration Management (SCM) is a set of activities and standards for managing software evolution and maintenance; SCM defines how all modifications are recorded and processed, their impact on the whole system, among other procedures. For all these SCM tasks there are different supporting tools, such as version control systems and bug trackers. However, some problems can arise from their use, for example the problem of automatically assigning a developer to a bug report and the problem of duplicate bug reports. In this sense, this dissertation investigates the problem of duplicate bug reports resulting from the use of bug trackers in software development projects. This problem is characterized by the submission of two or more bug reports that describe the same problem in a piece of software, and its main consequences are the work overload involved in searching and analysing bug reports and the poor use of the time devoted to this activity.
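The dissertation concerns duplicate bug reports submitted to bug trackers. As a hedged illustration of how a search tool can flag likely duplicates, the sketch below ranks toy reports by TF-IDF cosine similarity; the reports are invented and this is not necessarily the ranking model used in the dissertation itself:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy bug reports; the third one duplicates the first.
reports = [
    "Application crashes when saving a file with a very long name",
    "Toolbar icons are rendered blurry on high resolution displays",
    "Program crash on save when the file name is too long",
]

vectors = TfidfVectorizer().fit_transform(reports)
incoming = vectors[2]                                      # the newly submitted report
scores = cosine_similarity(incoming, vectors[:2]).ravel()

for i, s in enumerate(scores):
    print(f"similarity to existing report {i}: {s:.2f}")   # highest score suggests a duplicate
```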
26

Gaškaitė, Giedrė. "Įmonės „Stoke Travel” internetinio marketingo priemonės." Bachelor's thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130620_113443-95197.

Full text
Abstract:
Bakalauro baigiamajame darbe nagrinėjamos įmonės „Stoke Travel” interentinio marketingo priemonės. Pagrindinis tyrimo tikslas, aptarus internetinio marketingo priemones teoriniu požiūriu ir atlikus anketinę apklausą, bei interviu, ištirti kaip respondentai vertina įmones naudojamas internetinio marketingo priemones, bei pasiūlyti įmonei tinkamiausias interentinio marketingo priemones.
In this Bachelor's thesis, the Internet marketing tools of the company "Stoke Travel" are analyzed. The main objective is, after researching the theoretical background of the subject, to conduct a survey and interviews in order to find out the scope of the company's Internet marketing tools, the peculiarities of its clients' Internet use, and the clients' evaluation of the Internet marketing tools used by the company.
27

Lakshmanan, Hariharan 1980. "A client side tool for contextual Web search." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/29385.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2004.
Includes bibliographical references (p. 76-77).
This thesis describes the design and development of an application that uses information relevant to the context of a web search for the purpose of improving the search results obtained using standard search engines. The representation of the contextual information is based on a Vector Space Model and is obtained from a set of documents that have been identified as relevant to the context of the search. Two algorithms have been developed for using this contextual representation to re-rank the search results obtained using search engines. In the first algorithm, re-ranking is done based on a comparison of every search result with all the contextual documents. In the second algorithm, only a subset of the contextual documents that relate to the search query is used to measure the relevance of the search results. This subset is identified by mapping the search query onto the Vector Space representation of the contextual documents. A software application was developed using the .NET framework with C# as the implementation language. The software has functionality to enable users to identify contextual documents and perform searches either using a standard search engine or using the above-mentioned algorithms. The software implementation details, and preliminary results regarding the efficiency of the proposed algorithms have been presented.
by Hariharan Lakshmanan.
S.M.
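A hedged, minimal sketch of the first re-ranking algorithm described above (compare each search result against all the contextual documents in a vector space and sort by average cosine similarity) is shown below; the context documents and result snippets are invented placeholders, and this is not the author's actual implementation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical contextual documents describing the searcher's working context.
context_docs = [
    "reinforced concrete beam design and structural load calculations",
    "finite element analysis of bridge decks under traffic loading",
]

# Hypothetical snippets returned by a standard search engine for the query "beam".
search_results = [
    "laser beam alignment tips for optical experiments",
    "designing steel and concrete beams for highway bridges",
    "beam me up: a history of science fiction catchphrases",
]

vectorizer = TfidfVectorizer().fit(context_docs + search_results)
context_m = vectorizer.transform(context_docs)
results_m = vectorizer.transform(search_results)

# Algorithm 1: score every result against all contextual documents, then re-rank.
scores = cosine_similarity(results_m, context_m).mean(axis=1)
for score, snippet in sorted(zip(scores, search_results), reverse=True):
    print(f"{score:.2f}  {snippet}")
```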
28

Santarem, Segundo José Eduardo [UNESP]. "Recursos tecno-metodológicos para descrição e recuperação de informações na Web." Universidade Estadual Paulista (UNESP), 2004. http://hdl.handle.net/11449/93618.

Full text
Abstract:
Universidade Estadual Paulista (UNESP)
Technology has brought to Information Science a new element in its object of study - information on the Web - and has also brought Information Science and Computer Science much closer together. The Internet has been growing rapidly, fuelling the information explosion, so that an enormous amount of information is now available on the Web. It therefore becomes necessary to investigate technologies for the description and retrieval of information that make it possible to organise digital information within the World Wide Web. Drawing on documentary research in sources from Computer Science and Information Science and on the Internet itself, the study analyses the main languages and resources for publishing information on the Web, the forms of description and retrieval of information, and the proposals for new standards and data structures, and discusses the new tools that have been proposed and implemented with the aim of organising digital information. The outline of a Semantic Web was identified: an extension of the current Web that proposes a new architecture, so that meaning can be given to all the information found in this new conception of the Internet. These aspects lead to the conclusion that the creation of the Semantic Web is a matter of time and that this new extension of the Web will soon become a consistent and qualified body of information within the Internet, enabling various communities to build knowledge from trustworthy data found on the network.
29

W, M. Tharanga Dilruk Ranasinghe. "Search Engine : An Effective tool for exploring the Internet." Library, Eastern University of Sri Lanka, 2006. http://hdl.handle.net/10150/105790.

Full text
Abstract:
The Internet has become the largest source of information. Today, millions of websites exist and this number continues to grow. Finding the right information at the right time is the challenge of the Internet age. A search engine is a searchable database that allows information on the Internet to be located by submitting keywords. Search engines can be divided into two categories: individual search engines and meta search engines. This article discusses the features of these search engines in detail.
30

Kanjariya, Mitesh Mukesh. "Discovery Tool: A Framework for Accelerating Academic Collaborations." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1374029435.

Full text
31

Faronius, Hofmann Therese, Julia Helstad, Linda Håkansson, and Henrik Thorsell. "A Search Tool for Pedagogical Techniques and Supportive Technical Aids." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-352397.

Full text
Abstract:
Teaching pedagogically has been found to be crucial for students' learning. Hundreds of technical aids have been developed to facilitate the use of different pedagogical techniques, but only a small number of these are used at Uppsala University, and finding alternative pedagogical techniques and relevant technical aids for each technique can be difficult for educators. A web service acting as a search tool for pedagogical techniques and technical aids is therefore expected to help educators improve their teaching. The purpose of this project was to create such a search tool, one that educators can use to find pedagogical techniques and suitable technical aids for their teaching. The search tool should inspire educators to use unfamiliar technical aids, widen their use of these aids and give them sufficient information on how to use them. To ensure that educators would find it useful, the search tool was developed based on user stories from educators at Uppsala University. Educators who tested the web service found the search tool user-friendly and believed that it will help them find new ways of teaching and improve their pedagogical technique.
32

Segundo, José Eduardo Santarem. "Recursos tecno-metodológicos para descrição e recuperação de informações na Web /." Marília : [s.n.], 2004. http://hdl.handle.net/11449/93618.

Full text
Abstract:
Advisor: Silvana Aparecida Borsetti Gregorio Vidotti
Committee member: Marcos Luiz Mucheroni
Committee member: Plácida Leopoldina Ventura Amorim da Costa Santos
Abstract: Technology has brought to Information Science a new element in its object of study - information on the Web - and has also brought Information Science and Computer Science much closer together. The Internet has been growing rapidly, fuelling the information explosion, so that an enormous amount of information is now available on the Web. It therefore becomes necessary to investigate technologies for the description and retrieval of information that make it possible to organise digital information within the World Wide Web. Drawing on documentary research in sources from Computer Science and Information Science and on the Internet itself, the study analyses the main languages and resources for publishing information on the Web, the forms of description and retrieval of information, and the proposals for new standards and data structures, and discusses the new tools that have been proposed and implemented with the aim of organising digital information. The outline of a Semantic Web was identified: an extension of the current Web that proposes a new architecture, so that meaning can be given to all the information found in this new conception of the Internet. These aspects lead to the conclusion that the creation of the Semantic Web is a matter of time and that this new extension of the Web will soon become a consistent and qualified body of information within the Internet, enabling various communities to build knowledge from trustworthy data found on the network.
Master's
33

Castle, Timothy S. "Coordinated inland area Search and Rescue (SAR) planning and execution tool." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA355989.

Full text
Abstract:
Thesis (M.S. in Operations Research) Naval Postgraduate School, September 1998.
"September 1998." Thesis advisor(s): Gordon H. Bradley, Alan R. Washburn. Includes bibliographical references (p. 81-82). Also available online.
34

Grewal, Ratvinder Singh. "A visual metaphor-based tool for a search-engine user interface." Thesis, University of Wolverhampton, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272531.

Full text
35

Dos, Caroline. "On the path of Exploratory Search Users - A Semantic Web tool." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5701/.

Full text
Abstract:
Exploratory search, a search paradigm based on discovery and learning activities, has long been ignored by traditional search engines, even though the most innovative ideas often arise from exploratory searches. Recent Semantic Web technologies provide the solutions needed to implement search engines capable of accompanying users engaged in this kind of search; Aemoo, the search engine on which this thesis builds, is an effective example. Starting from Aemoo, and again with the help of Web of Data technologies, this work proposes a methodology that takes into account the singularity of each user's profile in order to guide them through their exploratory search in a personalised way. The personalisation criterion chosen is behavioural, that is, based on the decisions the user makes at each step of the search process. By implementing a prototype, we were able to test the validity of this approach, so that users are no longer alone on the long and winding road that leads to knowledge.
36

Harkness, Daniel Joseph. "Crawler 2.0 a search tool to assist law enforcement with investigations /." [Ames, Iowa : Iowa State University], 2008.

Find full text
37

Cameron, Michael. "Efficient Homology Search for Genomic Sequence Databases." RMIT University. Computer Science and Information Technology, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070509.162443.

Full text
Abstract:
Genomic search tools can provide valuable insights into the chemical structure, evolutionary origin and biochemical function of genetic material. A homology search algorithm compares a protein or nucleotide query sequence to each entry in a large sequence database and reports alignments with highly similar sequences. The exponential growth of public data banks such as GenBank has necessitated the development of fast, heuristic approaches to homology search. The versatile and popular blast algorithm, developed by researchers at the US National Center for Biotechnology Information (NCBI), uses a four-stage heuristic approach to efficiently search large collections for analogous sequences while retaining a high degree of accuracy. Despite an abundance of alternative approaches to homology search, blast remains the only method to offer fast, sensitive search of large genomic collections on modern desktop hardware. As a result, the tool has found widespread use with millions of queries posed each day. A significant investment of computing resources is required to process this large volume of genomic searches and a cluster of over 200 workstations is employed by the NCBI to handle queries posed through the organisation's website. As the growth of sequence databases continues to outpace improvements in modern hardware, blast searches are becoming slower each year and novel, faster methods for sequence comparison are required. In this thesis we propose new techniques for fast yet accurate homology search that result in significantly faster blast searches. First, we describe improvements to the final, gapped alignment stages where the query and sequences from the collection are aligned to provide a fine-grain measure of similarity. We describe three new methods for aligning sequences that roughly halve the time required to perform this computationally expensive stage. Next, we investigate improvements to the first stage of search, where short regions of similarity between a pair of sequences are identified. We propose a novel deterministic finite automaton data structure that is significantly smaller than the codeword lookup table employed by ncbi-blast, resulting in improved cache performance and faster search times. We also discuss fast methods for nucleotide sequence comparison. We describe novel approaches for processing sequences that are compressed using the byte packed format already utilised by blast, where four nucleotide bases from a strand of DNA are stored in a single byte. Rather than decompress sequences to perform pairwise comparisons, our innovations permit sequences to be processed in their compressed form, four bases at a time. Our techniques roughly halve average query evaluation times for nucleotide searches with no effect on the sensitivity of blast. Finally, we present a new scheme for managing the high degree of redundancy that is prevalent in genomic collections. Near-duplicate entries in sequence data banks are highly detrimental to retrieval performance, however existing methods for managing redundancy are both slow, requiring almost ten hours to process the GenBank database, and crude, because they simply purge highly-similar sequences to reduce the level of internal redundancy. We describe a new approach for identifying near-duplicate entries that is roughly six times faster than the most successful existing approaches, and a novel approach to managing redundancy that reduces collection size and search times but still provides accurate and comprehensive search results. 
Our improvements to blast have been integrated into our own version of the tool. We find that our innovations more than halve average search times for nucleotide and protein searches, and have no significant effect on search accuracy. Given the enormous popularity of blast, this represents a very significant advance in computational methods to aid life science research.
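The byte-packed nucleotide format mentioned above stores four bases per byte. The sketch below is a simplified, self-contained illustration of that 2-bit packing idea and of comparing packed sequences four bases at a time; it is not the NCBI or thesis code, and the helper names are our own.

```python
# Simplified illustration of 2-bit nucleotide packing (four bases per byte),
# in the spirit of the byte-packed format discussed above.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = "ACGT"

def pack(seq):
    """Pack a DNA string (length assumed to be a multiple of 4) into bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | CODE[ch]
        out.append(b)
    return bytes(out)

def unpack(packed):
    """Recover the DNA string from its packed form."""
    seq = []
    for b in packed:
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(b >> shift) & 0b11])
    return "".join(seq)

def matching_bytes(p1, p2):
    """Compare two packed sequences one byte (four bases) at a time,
    counting positions where all four bases agree."""
    return sum(1 for a, b in zip(p1, p2) if a == b)

s1, s2 = "ACGTACGT", "ACGTTTTT"
print(unpack(pack(s1)) == s1)              # True
print(matching_bytes(pack(s1), pack(s2)))  # 1
```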
38

Zhang, Wanjing. "Wrapper mill a tool for generating and managing wrappers for search engines /." Diss., Online access via UMI:, 2007.

Find full text
39

Düring, Max Perea, Anton Gildebrand, and Markus Linghede. "Asset Finder: A Search Tool for Finding Relevant Graphical Assets Using Automated Image Labelling." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-387204.

Full text
Abstract:
The creation of digital 3D environments appears in a variety of contexts such as movie making, video game development, advertising, architecture, infrastructure planning and education. When creating these environments it is sometimes necessary to search large digital libraries of graphical assets by trying different search terms. The goal of this project is to provide an alternative way to find graphical assets by creating a tool called Asset Finder that allows the user to search using images instead of words. Asset Finder uses image labelling provided by the Google Vision API to find relevant search terms, increases the number of search terms with synonyms and related words from the WordNet database, and finally presents the results in order of relevance using a score system. The tool is a web application with an interface that is intended to be easy to use. The results of this project show an application that achieves good results in some of the test cases.
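The pipeline described above (image labels, expansion with synonyms and related words, relevance scoring) could be outlined roughly as in the sketch below. The `label_image` function is a stand-in for the Google Vision API call, the asset catalogue and the weighting scheme are invented for illustration, and the WordNet lookup uses the NLTK interface; none of this is the project's actual code.

```python
# Rough outline of the label -> synonym expansion -> scoring pipeline described above.
# label_image() is a stub standing in for the Google Vision API; the catalogue is made up.
from nltk.corpus import wordnet as wn  # requires the WordNet corpus to be downloaded

def label_image(image_path):
    """Placeholder for an image-labelling service such as the Google Vision API."""
    return ["car", "vehicle"]  # pretend labels returned for the query image

def expand_terms(labels):
    """Expand labels with WordNet synonyms; synonyms are weighted lower than labels."""
    weighted = {label: 2.0 for label in labels}
    for label in labels:
        for synset in wn.synsets(label):
            for lemma in synset.lemma_names():
                weighted.setdefault(lemma.replace("_", " ").lower(), 1.0)
    return weighted

def score_assets(weighted_terms, catalogue):
    """Score each asset by the summed weights of query terms found among its tags."""
    scores = {}
    for asset, tags in catalogue.items():
        scores[asset] = sum(w for term, w in weighted_terms.items() if term in tags)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

catalogue = {
    "sports_car.obj": {"car", "automobile", "red"},
    "oak_tree.obj": {"tree", "plant"},
}
print(score_assets(expand_terms(label_image("query.jpg")), catalogue))
```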
40

Strågefors, Linnea. "Designing and evaluating a digital tool to support online search of Swedish food recipes : facilitating the search process for the users." Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-66062.

Full text
Abstract:
Searching for food recipes online is a common task for many people in an increasingly digitalised society. Sweden has, as many countries do, local recipes, seasonal products, cultural dinners and measurement and weight standards that can differ from other countries. However, general searches for Swedish recipes in common search engines can pose several difficulties: users cannot filter the search results to show only recipes, and ambiguous semantic interpretation of the users' search queries can cause problems. This thesis also considers how the user search process could be facilitated by using recipe labels and graphical visualisations of the information. The ambition of this thesis is to investigate how online recipe search can be made more efficient for people looking for Swedish recipes in Swedish. An initial user survey with a questionnaire was conducted to understand the potential requirements for a tool to support online search of Swedish recipes. More specifically, the survey asked about users' current search experience and tried to identify useful search criteria. The results showed that 82.4% of the participants prefer to search for recipes online via a search engine, compared with alternatives such as searching specific recipe sites. The main difficulty the participants experienced with that approach was that many of the search results were not recipes but other types of results. Most participants preferred to see more informative recipe items in the search results list; at the same time, some recipe labels that were present were not actually noticed by most participants. The survey also investigated what information would be most appropriate to show about the recipes. Based on the outcomes of the survey, a prototype application targeting Swedish recipe search was developed. The purpose of the prototype is to implement the search criteria identified in the survey and to provide enhancements, allowing the criteria to be tested by users on the prototype. A second user survey with a questionnaire was then conducted to evaluate the usability of the prototype. The prototype offers improvements in filtering the search results to show only Swedish recipes, presenting more relevant recipe information and visualising the information about the recipes in the search result list.
41

Lundgren, Jesper. "Search-based Procedural Content Generation as a Tool for Level Design in Games." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-102212.

Full text
Abstract:
The aim of this thesis is to evaluate the use of Search-based Procedural Content Generation (SBPCG) to help a designer create levels for different game styles. I show how SBPCG can be used for level generation in different game genres by surveying both academic work and released commercial solutions. I then provide empirical data by using a Genetic Algorithm (GA) to evolve levels in two different game types, the first a space puzzle game and the second a platform game. Constraints from a level designer provide the basis for successful fitness functions for both games. Even though difficulties with level representation make it hard for a designer to work with this technique directly, the generated levels show that the technique has promising potential to aid level designers in their work.
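As a toy illustration of search-based procedural content generation (not the thesis implementation, and with an invented level encoding and designer constraints), a genetic algorithm can evolve a platform level represented as a string of ground tiles and gaps against a fitness function derived from designer constraints:

```python
# Toy genetic algorithm evolving a platform-level tile string against a
# designer-style fitness function (illustrative only, not the thesis implementation).
import random

TILES = "GGG_"          # ground tile 'G' is three times as likely as a gap '_'
LEVEL_LEN = 40
TARGET_GAP_RATIO = 0.2  # designer constraint: roughly 20% gaps
MAX_GAP_RUN = 2         # designer constraint: gaps wider than 2 tiles are unjumpable

def random_level():
    return "".join(random.choice(TILES) for _ in range(LEVEL_LEN))

def fitness(level):
    gap_ratio = level.count("_") / LEVEL_LEN
    score = 1.0 - abs(gap_ratio - TARGET_GAP_RATIO)
    longest_gap = max((len(run) for run in level.split("G") if run), default=0)
    if longest_gap > MAX_GAP_RUN:          # unplayable level: heavy penalty
        score -= 0.5 * (longest_gap - MAX_GAP_RUN)
    return score

def crossover(a, b):
    cut = random.randrange(1, LEVEL_LEN)
    return a[:cut] + b[cut:]

def mutate(level, rate=0.05):
    return "".join(random.choice(TILES) if random.random() < rate else t for t in level)

population = [random_level() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]              # elitist selection
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(40)]
print(max(population, key=fitness))
```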
42

Kilinc, Fatma. "The Tool Transporter Movements Problem In Flexible Manufacturing Systems." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606017/index.pdf.

Full text
Abstract:
In this study, we address the job sequencing and tool switching problem arising in Flexible Manufacturing Systems. We consider a single machine with limited tool slots on its tool magazine. The available tool slots cannot accommodate all the tools required by all jobs, therefore tool switches between jobs are required. A single tool transporter with limited capacity is used in transporting the tools from the storage area to the machine. Our aim is to minimize the number of tool transporter movements. We provide two mixed integer linear programming formulations of the problem, one of which is based on the traveling salesman problem. We develop a Branch-and-Bound algorithm powered with various lower and upper bounding techniques for optimal results. In order to obtain good solutions in reasonable times, we propose Beam Search algorithms. Our computational results reveal the satisfactory performance of the B&B algorithm for moderate sized problems. Moreover, Beam Search techniques perform well for large-sized problems.
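To make the objective concrete, the sketch below shows one simplified way to count tool-transporter trips for a given job sequence; the magazine-eviction rule (evict the tool needed furthest in the future) and all data are our own assumptions for illustration and do not reproduce the thesis's MILP formulations, Branch-and-Bound or Beam Search algorithms.

```python
# Simplified cost model (our own assumptions, not the thesis formulation):
# given a job sequence, count tool-transporter trips, where each trip brings at most
# `transporter_capacity` missing tools and evicted tools are those needed latest.
import math

def transporter_trips(sequence, tools_per_job, magazine_capacity, transporter_capacity):
    magazine = set()
    trips = 0
    for pos, job in enumerate(sequence):
        missing = tools_per_job[job] - magazine
        if not missing:
            continue
        trips += math.ceil(len(missing) / transporter_capacity)
        def next_use(tool):
            """Position of the next job that needs `tool`, counted from `pos`."""
            for later, j in enumerate(sequence[pos:], start=pos):
                if tool in tools_per_job[j]:
                    return later
            return len(sequence)            # never needed again
        # make room: evict tools whose next use is furthest in the future
        while len(magazine) + len(missing) > magazine_capacity:
            magazine.remove(max(magazine, key=next_use))
        magazine |= missing
    return trips

tools_per_job = {"J1": {1, 2}, "J2": {2, 3, 4}, "J3": {1, 4}}
print(transporter_trips(["J1", "J2", "J3"], tools_per_job,
                        magazine_capacity=4, transporter_capacity=2))  # prints 2
```

A sequencing heuristic such as beam search would then explore partial job orders and keep only the most promising ones under a cost estimate of this kind.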
43

Lee, Tsung-Lu. "BAXQLB̲LAST an enhanced BLAST bioinformatics homology search tool with batch and structured query support /." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1001161.

Full text
44

Malod-Dognin, Noël. "Protein structure comparison : from contact map overlap maximisation to distance-based alignment search tool." Rennes 1, 2010. http://www.theses.fr/2010REN1S015.

Full text
Abstract:
In molecular biology, a fruitful assumption is that proteins sharing similar three-dimensional structures are likely to share a common function and, in most cases, derive from a common ancestor. Computing the similarity between two protein structures is therefore a crucial task that has been extensively investigated. Among the proposed methods, we focus on the similarity measure called Contact Map Overlap maximisation (CMO), mainly because it provides scores that can be used to obtain good automatic classifications of protein structures. In this thesis, comparing two protein structures is modelled as finding specific sub-graphs in specific k-partite graphs called alignment graphs. CMO is then modelled as a maximum edge-induced sub-graph problem in alignment graphs, for which we design an exact solver that outperforms the other CMO algorithms in the literature. Although this accelerates CMO, the procedure remains too time-consuming for large database comparisons, so we further propose a hierarchical approach to CMO based on the secondary structure of the proteins. Finally, although CMO is a very good scoring scheme, the alignments it provides frequently have large root mean square deviation (RMSD) values. To overcome this weakness, we propose a new comparison method based on internal distances, which we call DAST (Distance-based Alignment Search Tool). It is modelled as a maximum clique problem in alignment graphs, for which we design a dedicated solver with very good performance.
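To make the notion of contact map overlap concrete, the sketch below builds contact maps from C-alpha coordinates with a distance threshold and counts the contacts shared under a fixed residue alignment; it is a simplified illustration under our own assumptions, and the hard part solved in the thesis (searching for the best alignment) is not shown.

```python
# Simplified illustration of contact maps and contact map overlap (CMO) for a fixed
# alignment; the thesis solves the much harder problem of finding the best alignment.
import math

def contact_map(coords, threshold=8.0):
    """Set of residue pairs (i < j) whose C-alpha atoms lie within `threshold` angstroms."""
    contacts = set()
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):   # skip trivially adjacent residues
            if math.dist(coords[i], coords[j]) <= threshold:
                contacts.add((i, j))
    return contacts

def overlap(cm_a, cm_b, alignment):
    """Count contacts of structure A, mapped by `alignment` (dict A-residue -> B-residue),
    that are also contacts of structure B."""
    shared = 0
    for i, j in cm_a:
        if i in alignment and j in alignment:
            pair = tuple(sorted((alignment[i], alignment[j])))
            if pair in cm_b:
                shared += 1
    return shared

# Tiny made-up example: two four-residue "structures" and an identity alignment.
a = [(0, 0, 0), (3.8, 0, 0), (7.6, 0, 0), (7.6, 3.8, 0)]
b = [(0, 0, 0), (3.8, 0, 0), (7.6, 0, 0), (11.4, 0, 0)]
cm_a, cm_b = contact_map(a), contact_map(b)
print(overlap(cm_a, cm_b, {i: i for i in range(4)}))  # prints 2
```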
45

Radke, Annemarie Katherine. "Design and Development of a Metadata-Driven Search Tool for use with Digital Recordings." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90376.

Full text
Abstract:
It is becoming more common for researchers to use existing recordings as a source for data rather than to generate new media for research. Prior to the examination of recordings, data must be extracted from the recordings and the recordings must be described with metadata to allow users to search for the recordings and to search information within the recordings. The purpose of this small-scale study was to develop a web based search tool that will permit a comprehensive search of spoken information within a collection of existing digital recordings archived in an open-access digital repository. The study is significant to the field of instructional design and technology (IDT) as the digital recordings used in this study are interviews, which contain personal histories and insight from leaders and scholars who have influenced and advanced the field of IDT. This study explored and used design and development research methods for the development of a search tool for use with digital video interviews. The study applied speech recognition technology, tool prototypes, usability testing, expert review, and the skills of a program developer. Results from the study determined that the produced tool provided a more comprehensive and flexible search for users to locate content from within AECT Legends and Legacies Project video interviews.
Doctor of Philosophy
It is becoming more common for researchers to use existing recordings in their studies. Before a recording can be examined, information about the recording and within the recording must be identified so that users can search it. The purpose of this small-scale study was to develop an online search tool that allows users to locate spoken words within a video interview. The study is important to the field of instructional design and technology (IDT), as the video interviews used in this study contain experience and insight from people who have advanced the field. Using current and free technology, this study developed a practical search tool to search information from the AECT Legends and Legacies Project video interviews.
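A minimal sketch of the kind of search such a tool enables, assuming a speech-recognition step has already produced word-level timestamps for each recording (an assumption of ours; this is not the study's tool):

```python
# Minimal illustration of searching spoken words across transcribed recordings,
# assuming word-level timestamps from a speech-recognition step (not the study's tool).
transcripts = {
    "interview_01.mp4": [("instructional", 12.4), ("design", 12.9), ("systems", 14.1)],
    "interview_02.mp4": [("learning", 3.2), ("design", 4.0), ("theory", 4.6)],
}

def find_spoken_word(word, transcripts):
    """Return (recording, timestamp in seconds) for every spoken occurrence of `word`."""
    word = word.lower()
    hits = []
    for recording, words in transcripts.items():
        hits.extend((recording, t) for w, t in words if w.lower() == word)
    return hits

print(find_spoken_word("design", transcripts))
# [('interview_01.mp4', 12.9), ('interview_02.mp4', 4.0)]
```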
46

Dincer, Alper. "Design And Implementation Of A Search Tool For Roads On Pocket Pcs For Mobile Gis." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607925/index.pdf.

Full text
Abstract:
The aim of this study is to develop a road search tool for a mobile GIS application. A satellite image of Ankara serves as the base map of the program, and a search option for roads is provided. The application is based on open source libraries: ECW for imagery and SQLite for the vector database. The application is coded in Embedded Visual C++. The study shows that mobile GIS applications can be built with the help of open source libraries, without the need to buy a commercial product to mobilise the GIS.
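A name-based road search over an SQLite table can be illustrated with the hypothetical sketch below; the schema and data are invented, and the sketch is written in Python rather than the Embedded Visual C++ used in the thesis.

```python
# Hypothetical illustration of name-based road search against an SQLite table
# (the thesis implements the equivalent in Embedded Visual C++ on a Pocket PC).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE roads (name TEXT, geometry TEXT)")  # geometry stored as WKT here
conn.executemany(
    "INSERT INTO roads VALUES (?, ?)",
    [("Ataturk Bulvari", "LINESTRING(32.85 39.92, 32.86 39.94)"),
     ("Eskisehir Yolu", "LINESTRING(32.75 39.90, 32.80 39.91)")],
)

def search_roads(conn, query):
    """Substring search over road names (SQLite LIKE is case-insensitive for ASCII)."""
    cur = conn.execute(
        "SELECT name, geometry FROM roads WHERE name LIKE ?",
        (f"%{query}%",),
    )
    return cur.fetchall()

print(search_roads(conn, "ataturk"))
```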
47

Srirangam, Murty. "A case study on the use of the design exemplar as a search and retrieval tool." Connect to this title online, 2007. http://etd.lib.clemson.edu/documents/1202418227/.

Full text
48

Chau, Michael, Hsinchun Chen, Jailun Qin, Yilu Zhou, Yi Qin, Wai-Ki Sung, and Daniel M. McDonald. "Comparison of Two Approaches to Building a Vertical Search Tool: A Case Study in the Nanotechnology Domain." ACM/IEEE-CS, 2002. http://hdl.handle.net/10150/105990.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
As the Web has been growing exponentially, it has become increasingly difficult to search for desired information. In recent years, many domain-specific (vertical) search tools have been developed to serve the information needs of specific fields. This paper describes two approaches to building a domain-specific search tool. We report our experience in building two different tools in the nanotechnology domain - (1) a server-side search engine, and (2) a client-side search agent. The designs of the two search systems are presented and discussed, and their strengths and weaknesses are compared. Some future research directions are also discussed.
49

Namballa, Ravi K. "CHESS: A Tool for CDFG Extraction and High-Level Synthesis of VLSI Systems." Scholar Commons, 2003. https://scholarcommons.usf.edu/etd/1439.

Full text
Abstract:
In this thesis, a new tool, named CHESS, is designed and developed for control and data-flow graph (CDFG) extraction and the high-level synthesis of VLSI systems. The tool consists of three individual modules for: (i) CDFG extraction, (ii) scheduling and allocation of the CDFG, and (iii) binding, which are integrated to form a comprehensive high-level synthesis system. The first module for CDFG extraction includes a new algorithm in which certain compiler-level transformations are applied first, followed by a series of behavioral-preserving transformations on the given VHDL description. Experimental results indicate that the proposed conversion tool is quite accurate and fast. The CDFG is fed to the second module which schedules it for resource optimization under a given set of time constraints. The scheduling algorithm is an improvement over the Tabu Search based algorithm described in [6] in terms of execution time. The improvement is achieved by moving the step of identifying mutually exclusive operations to the CDFG extraction phase, which, otherwise, is normally done during scheduling. The last module of the proposed tool implements a new binding algorithm based on a game-theoretic approach. The problem of binding is formulated as a non-cooperative finite game, for which a Nash-Equilibrium function is applied to achieve a power-optimized binding solution. Experimental results for several high-level synthesis benchmarks are presented which establish the efficacy of the proposed synthesis tool.
50

Sendurur, Emine. "Effects Of A Web-based Internet Search Scaffolding Tool On Metacognitive Skills Improvement Of Students With Different Goal Orientations." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12614286/index.pdf.

Full text
Abstract:
In this study, the aim was to investigate the effects of a web-based internet search scaffolding tool (WISST) on the improvement of the metacognitive skills of 7th grade students in relation to their goal orientation. The study utilised a static-group pretest-posttest design: the first experiment group received the web-based metacognitive scaffolding tool treatment; the second experiment group received teacher-based metacognitive scaffolding; and the control group received no scaffolding. The tool was designed to scaffold users throughout web searching while emphasising the improvement of certain metacognitive skills. Three main instruments were used to gather data: the metacognition inventory for Internet search (MIIS), the patterns of adaptive learning scale (PALS), and an achievement test. 76 7th grade elementary school students in Ankara, Turkey participated in the study. The data gathered from the participants were analysed through quantitative and qualitative methods. The results indicated that the WISST tool helped students improve certain metacognitive skills, including monitoring, planning, controlling, and strategy generation; its unique effect was on the improvement of controlling skills. The teacher scaffolding group was also successful in improving strategy generation skills. No effects of goal orientation on the improvement of metacognitive skills were found. Within the hierarchical regression models, only pre-MIIS scores contributed significantly to the model. Students with less improved metacognitive skills made fewer trials and fewer visits, and students with poor performance work grades tended to copy-paste more, try less, and visit less. Task difficulty and task type were observed to influence students' search patterns, and the search patterns and reflections indicated that the scaffolded groups differed positively in their search patterns.
