
Dissertations / Theses on the topic 'Techniques of Filtering Information'


Consult the top 50 dissertations / theses for your research on the topic 'Techniques of Filtering Information.'


1

Elling, Eva, and Hannes Fornander. "A Study of Recommender Techniques Within the Field of Collaborative Filtering." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214728.

Full text
Abstract:
Recommender systems can be seen everywhere today, having endless possibilities of implementation. However, operating in the background, they can easily be passed without notice. Essentially, recommender systems are algorithms that generate predictions by operating on a certain data set. Each case of recommendation is environment sensitive and dependent on the condition of the data at hand. Consequently, it is difficult to foresee which method, or combination of methods, to apply in a particular situation for obtaining desired results. The area of recommender systems that this thesis is delimited to is Collaborative filtering (CF) and can be split up into three different categories, namely memory based, model based and hybrid algorithms. This thesis implements a CF algorithm for each of these categories and sets focus on comparing their prediction accuracy and their dependency on the amount of available training data (i.e. as a function of sparsity). The results show that the model based algorithm clearly performs better than the memory based, both in terms of overall accuracy and sparsity dependency. With an increasing sparsity level, the problem of having users without any ratings is encountered, which greatly impacts the accuracy for the memory based algorithm. A hybrid between these algorithms resulted in a better accuracy than the model based algorithm itself but with an insignificant improvement.
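To make the memory-based versus model-based distinction discussed in this abstract concrete, here is a minimal sketch in Python/NumPy on an invented toy rating matrix. Real systems would mean-centre ratings and treat missing entries more carefully than simple zero-filling; this is an illustration of the two styles, not the thesis's implementation.

```python
import numpy as np

# Toy user-item rating matrix (0 = unrated); purely illustrative data.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def predict_memory_based(R, user, item, k=2):
    """User-based (memory-based) CF: weight other users' ratings of `item`
    by cosine similarity of the rating vectors (zero-filled simplification)."""
    mask = R[:, item] > 0                      # users who rated the item
    mask[user] = False
    sims = []
    for v in np.where(mask)[0]:
        num = R[user] @ R[v]
        den = np.linalg.norm(R[user]) * np.linalg.norm(R[v]) + 1e-9
        sims.append((num / den, R[v, item]))
    sims.sort(reverse=True)
    top = sims[:k]
    if not top:
        return np.nan                          # cold user/item: no prediction possible
    w = np.array([s for s, _ in top])
    r = np.array([rating for _, rating in top])
    return float(w @ r / (w.sum() + 1e-9))

def predict_model_based(R, user, item, rank=2):
    """Model-based CF: low-rank reconstruction via a truncated SVD."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    R_hat = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    return float(R_hat[user, item])

print(predict_memory_based(R, user=1, item=1))
print(predict_model_based(R, user=1, item=1))
```

A hybrid of the kind evaluated in the thesis could simply blend the two predictions, e.g. as a weighted average.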
APA, Harvard, Vancouver, ISO, and other styles
2

Jarrell, Jason A. "Employ sensor fusion techniques for determining aircraft attitude and position information." Morgantown, W. Va. : [West Virginia University Libraries], 2008. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5894.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2008.
Title from document title page. Document formatted into pages; contains xii, 108, [9] p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 104-108).
APA, Harvard, Vancouver, ISO, and other styles
3

Grönberg, David, and Otto Denesfay. "Comparison and improvement of time aware collaborative filtering techniques : Recommender systems." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160360.

Full text
Abstract:
Recommender systems emerged in the mid '90s with the objective of helping users select items or products most suited for them. Whether it is Facebook recommending people you might know, Spotify recommending songs you might like or YouTube recommending videos you might want to watch, recommender systems can now be found in every corner of the internet. In order to handle the immense increase of data online, the development of sophisticated recommender systems is crucial for filtering out information, enhancing web services by tailoring them according to the preferences of the user. This thesis aims to improve the accuracy of recommendations produced by a classical collaborative filtering recommender system by utilizing temporal properties, more precisely the date on which an item was rated by a user. Three different time-weighted implementations are presented and evaluated: a time-weighted prediction approach, a time-weighted similarity approach and our proposed approach, which weights the mean rating of a user over time. The different approaches are evaluated using the well-known MovieLens 100k dataset. Results show that it is possible to slightly increase the accuracy of recommendations by utilizing temporal properties.
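One common way to realise such temporal weighting, shown here as a minimal hedged sketch, is to discount each rating exponentially with its age before averaging. The half-life, decay shape and toy data are assumptions for illustration, not the thesis's exact scheme.

```python
import numpy as np

def time_weighted_prediction(ratings, timestamps, now, half_life_days=90.0):
    """Weight each neighbour rating by an exponential decay on its age,
    so that recent ratings dominate the prediction."""
    ratings = np.asarray(ratings, dtype=float)
    age_days = (now - np.asarray(timestamps, dtype=float)) / 86400.0
    decay = np.exp(-np.log(2.0) * age_days / half_life_days)
    return float((decay * ratings).sum() / (decay.sum() + 1e-9))

# Ratings of one item by similar users, with Unix timestamps (toy values).
print(time_weighted_prediction([5, 3, 4],
                               [1_500_000_000, 1_550_000_000, 1_560_000_000],
                               now=1_565_000_000))
```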
APA, Harvard, Vancouver, ISO, and other styles
4

Cabir, Hassane Natu Hassane. "A Comparison Of Different Recommendation Techniques For A Hybrid Mobile Game Recommender System." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615173/index.pdf.

Full text
Abstract:
As information continues to grow at a very fast pace, our ability to access this information effectively does not, and we often realize how much harder it is getting to locate an object quickly and easily. The so-called personalization technology is one of the best solutions to this information overload problem: by automatically learning the user profile, personalized information services have the potential to offer users a more proactive and intelligent form of information access that is designed to assist us in finding interesting objects. Recommender systems, which have emerged as a solution to minimize the problem of information overload, provide us with recommendations of content suited to our needs. In order to provide recommendations as close as possible to a user's taste, personalized recommender systems require accurate user models of characteristics, preferences and needs. Collaborative filtering is a widely accepted technique to provide recommendations based on ratings of similar users, but it suffers from several issues like data sparsity and cold start. In one-class collaborative filtering (OCCF), a special type of collaborative filtering that aims to deal with datasets that lack counter-examples, the challenge is even greater, since these datasets are even sparser. In this thesis, we present a series of experiments conducted on a real-life customer purchase database from a major Turkish E-Commerce site. The sparsity problem is handled by the use of a content-based technique combined with TF-IDF weights, memory-based collaborative filtering combined with different similarity measures, hybrid approaches, and model-based collaborative filtering using Singular Value Decomposition (SVD). Our study showed that the binary similarity measure and SVD outperform conventional measures on this OCCF dataset.
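In a one-class setting the data are binary (bought / not bought), so similarity reduces to counting co-occurrences. The sketch below is only an illustrative toy version of a binary item-item similarity recommender, not the thesis's data or exact measure.

```python
import numpy as np

# Binary purchase matrix (1 = user bought item), toy data for a one-class CF setting.
P = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
], dtype=float)

def binary_cosine(P):
    """Item-item cosine similarity on binary (implicit) data: the dot product
    counts co-purchases, the norms count each item's total purchases."""
    co = P.T @ P
    norms = np.sqrt(np.diag(co))
    return co / (np.outer(norms, norms) + 1e-9)

def recommend(P, user, k=2):
    """Score unpurchased items by summed similarity to the user's purchases."""
    S = binary_cosine(P)
    scores = S @ P[user]
    scores[P[user] > 0] = -np.inf        # do not re-recommend owned items
    return np.argsort(scores)[::-1][:k]

print(recommend(P, user=0))
```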
APA, Harvard, Vancouver, ISO, and other styles
5

Parameswaran, Rupa. "A Robust Data Obfuscation Technique for Privacy Preserving Collaborative Filtering." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11459.

Full text
Abstract:
Privacy is defined as the freedom from unauthorized intrusion. The availability of personal information through online databases, such as government records, medical records, and voters' lists, poses a threat to personal privacy. The concern over individual privacy has led to the development of legal codes for safeguarding privacy in several countries. However, the ignorance of individuals as well as loopholes in the systems have led to information breaches even in the presence of such rules and regulations. Protecting data privacy requires modification of the data itself. The term 'data obfuscation' is used to refer to the class of algorithms that modify the values of the data items without distorting the usefulness of the data. The main goal of this thesis is the development of a data obfuscation technique that provides robust privacy protection with minimal loss in usability of the data. Although medical and financial services are two of the major areas where information privacy is a concern, privacy breaches are not restricted to these domains. One of the areas where the concern over data privacy is of growing interest is collaborative filtering. Collaborative filtering systems are being widely used in E-commerce applications to provide recommendations to users regarding products that might be of interest to them. The prediction accuracy of these systems is dependent on the size and accuracy of the data provided by users. However, the lack of sufficient guidelines governing the use and distribution of user data raises concerns over individual privacy. Users often provide the minimal information that is required for accessing these E-commerce services. The lack of rules governing the use and distribution of data disallows sharing of data among different communities for collaborative filtering. The goals of this thesis are (a) the definition of a standard for classifying data obfuscation (DO) techniques, (b) the development of a robust cluster-preserving data obfuscation algorithm, and (c) the design and implementation of a privacy-preserving shared collaborative filtering framework using the data obfuscation algorithm.
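The abstract does not spell out the obfuscation algorithm itself, but the cluster-preserving idea can be illustrated with one generic transformation: perturbing all records with the same secret random rotation, which hides attribute values while leaving pairwise distances (and therefore distance-based clusters) intact. The data and the choice of transformation below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(d, rng):
    """Draw a random orthogonal matrix (QR of a Gaussian matrix)."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return Q

def obfuscate(X, rng):
    """Rotate every record with the same secret orthogonal matrix.
    Pairwise Euclidean distances -- and hence distance-based clusters --
    are preserved, while individual attribute values are hidden."""
    return X @ random_rotation(X.shape[1], rng)

X = rng.random((100, 5))          # toy user-feature matrix
Xo = obfuscate(X, rng)

# Distances between records are unchanged up to numerical error.
d_orig = np.linalg.norm(X[0] - X[1])
d_obf  = np.linalg.norm(Xo[0] - Xo[1])
print(round(d_orig, 6), round(d_obf, 6))
```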
APA, Harvard, Vancouver, ISO, and other styles
6

Schwenk, Karsten [Verfasser], Dieter W. [Akademischer Betreuer] Fellner, and Carsten [Akademischer Betreuer] Dachsbacher. "Filtering Techniques for Low-Noise Previews of Interactive Stochastic Ray Tracing / Karsten Schwenk. Betreuer: Dieter W. Fellner ; Carsten Dachsbacher." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2013. http://d-nb.info/1107771080/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schwenk, Karsten [Verfasser], Dieter W. [Akademischer Betreuer] Fellner, and Carsten [Akademischer Betreuer] Dachsbacher. "Filtering Techniques for Low-Noise Previews of Interactive Stochastic Ray Tracing / Karsten Schwenk. Betreuer: Dieter W. Fellner ; Carsten Dachsbacher." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2013. http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-35906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dahal, Ashok. "Detection of Ulcerative Colitis Severity and Enhancement of Informative Frame Filtering Using Texture Analysis in Colonoscopy Videos." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc822759/.

Full text
Abstract:
There are several types of disorders that affect our colon's ability to function properly, such as colorectal cancer, ulcerative colitis, diverticulitis, irritable bowel syndrome and colonic polyps. Automatic detection of these diseases would inform the endoscopist of possible sub-optimal inspection during the colonoscopy procedure as well as save time during post-procedure evaluation. However, existing systems detect only a few of these disorders, such as colonic polyps. In this dissertation, we address the automatic detection of another important disorder called ulcerative colitis. We propose a novel texture feature extraction technique to detect the severity of ulcerative colitis at block, image, and video levels. We also enhance the current informative frame filtering methods by detecting water and bubble frames using our proposed technique. Our feature extraction algorithm, based on accumulation of pixel value differences, provides better accuracy at faster speed than the existing methods, making it highly suitable for real-time systems. We also propose a hybrid approach in which our feature method is combined with existing feature method(s) to provide even better accuracy. We extend the block and image level detection method to video level severity score calculation and shot segmentation. Also, the proposed novel feature extraction method can detect water and bubble frames in colonoscopy videos with very high accuracy in significantly less processing time, even when clustering is used to reduce the training size by 10 times.
APA, Harvard, Vancouver, ISO, and other styles
9

Zéboudj, Rachid. "Filtrage, seuillage automatique, contraste et contours : du pré-traitement à l'analyse d'image." Saint-Etienne, 1988. http://www.theses.fr/1988STET4001.

Full text
Abstract:
A study of several aspects of image processing and analysis: presentation of an adaptive smoothing that highlights the regions composing an image; introduction of the notion of contrast, useful for image thresholding; image segmentation; information extraction techniques based on image thresholding and edge detection; and shape classification using curvature.
APA, Harvard, Vancouver, ISO, and other styles
10

SANTOS, André Luis Silva dos. "UM MODELO DE SISTEMA DE FILTRAGEM HÍBRIDA PARA UM AMBIENTE COLABORATIVO DE ENSINO APRENDIZAGEM." Universidade Federal do Maranhão, 2008. http://tedebc.ufma.br:8080/jspui/handle/tede/293.

Full text
Abstract:
Nowadays, the World Wide Web (WWW) is an excellent source of information. However, open issues remain: it is difficult to obtain relevant information quickly and precisely. Search engines such as Google, Altavista and Cadê can retrieve a huge amount of information, but much of the retrieved content may not be relevant. Information filtering systems arise to aid users in the search for relevant information. This work proposes a hybrid model of information filtering based on content-based filtering and collaborative filtering. The model has been applied to a collaborative learning system named NetClass and was developed using the PASSI methodology. A case study conducted with CEFET-MA students is also presented.
APA, Harvard, Vancouver, ISO, and other styles
11

Abiza, Yamina. "Personnalisation de la distribution et de la présentation des informations des bases de vidéo interactive diffusées." Nancy 1, 1997. http://www.theses.fr/1997NAN10249.

Full text
Abstract:
In this thesis we deal with the issues of the design and personalization of data-oriented interactive video applications (i.e. multimedia/hypermedia document applications with a predominance of audio and video data) in the emerging residential multimedia information services (i.e. interactive television and second-generation telematics). More specifically, we are concerned with server-push information services (i.e. distributed, dynamic and broadcast information sources) to be deployed in heterogeneous environments with shared resources and intended for users having different information needs and preferences. In this context, there are many possible aspects to personalize. Here we focus on two related aspects: structure-based information filtering and personalization of content presentation modalities. The techniques to achieve these personalization aspects are tightly related to the design of a given information source. Our approach to personalization is based on the definition of a conceptual data model, HB_Model, composed of: 1) a base model to represent document organization and internal structure in interactive video sources, 2) a versioning model, HB_Versions, to represent document contents with multiple alternative representation forms or modalities, and 3) a model for view definition, HB_Views, to represent relatively stable user information needs. Personalized information delivery from a given server, based on structure criteria, is achieved by materializing individual users' view specifications using newly available information on that server. Personalization of content presentation modalities is achieved by the intentional specification of document content configurations in the form of a CSP (Constraint Satisfaction Problem), which reflects the constraints that the characteristics of the interaction and presentation contexts impose on the choice of presentation modality for each content item and guarantees the coherence of presentation modality combinations. Finally, we show how our propositions articulate and fit into the architecture of a personalized, server-push interactive video information service.
APA, Harvard, Vancouver, ISO, and other styles
12

Chambers, Brian D. "Adaptive Bayesian information filtering." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0007/MQ45945.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Faghih, Farshad. "Adaptive wavelet-based noise filtering techniques." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ38627.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Webster, David Edward. "Realising context-oriented information filtering." Thesis, University of Hull, 2010. http://hydra.hull.ac.uk/resources/hull:2724.

Full text
Abstract:
The notion of information overload is an increasing factor in modern information service environments where information is ‘pushed’ to the user. As increasing volumes of information are presented to computing users in the form of email, web sites, instant messaging and news feeds, there is a growing need to filter and prioritise the importance of this information. ‘Information management’ needs to be undertaken in a manner that not only prioritises what information we do need, but also disposes of information that is sent which is of no (or little) use to us. The development of a model to aid information filtering in a context-aware way is an objective of this thesis. A key concern in the conceptualisation of a single concept is understanding the context under which that concept exists (or can exist). An example of a concept is a concrete object, for instance a book. This contextual understanding should provide us with clear conceptual identification of a concept including implicit situational information and detail of surrounding concepts. Existing solutions to filtering information suffer from their own unique flaws: text-based filtering suffers from problems of inaccuracy; ontology-based solutions suffer from scalability challenges; taxonomies suffer from problems with collaboration. A major objective of this thesis is to explore the use of an evolving community-maintained knowledge-base (that of Wikipedia) in order to populate the context model and prioritise concepts that are semantically relevant to the user's interest space. Wikipedia can be classified as a weak knowledge-base due to its simple TBox schema and implicit predicates; therefore, part of this objective is to validate the claim that a weak knowledge-base is fit for this purpose. The proposed and developed solution, therefore, provides the benefits of high-recall filtering with low fallout and a dependency on a scalable and collaborative knowledge-base. A simple web feed aggregator has been built using the Java programming language that we call DAVe's Rss Organisation System (DAVROS-2) as a testbed environment to demonstrate specific tests used within this investigation. The motivation behind the experiments is to demonstrate that the concept framework instantiated through Wikipedia can provide a framework to aid in concept comparison, and therefore be used in a news filtering scenario as an example of information overload. In order to evaluate the effectiveness of the method, well-understood measures of information retrieval are used. This thesis demonstrates that the utilisation of the developed contextual concept expansion framework (instantiated using Wikipedia) improved the quality of concept filtering over a baseline based on string matching. This has been demonstrated through the analysis of recall and fallout measures.
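Recall and fallout, the retrieval measures used for the evaluation, are straightforward to compute. The following is a small self-contained sketch with invented item sets:

```python
def recall_and_fallout(retrieved, relevant, collection_size):
    """Standard IR measures used to evaluate a filter: recall is the fraction
    of relevant items that were retrieved, fallout the fraction of
    non-relevant items that were (mistakenly) retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    fp = len(retrieved - relevant)
    non_relevant = collection_size - len(relevant)
    recall = tp / len(relevant) if relevant else 0.0
    fallout = fp / non_relevant if non_relevant else 0.0
    return recall, fallout

# Toy example: 3 of 4 relevant items retrieved, 2 false positives out of 96 non-relevant.
print(recall_and_fallout(retrieved={1, 2, 3, 10, 11},
                         relevant={1, 2, 3, 4},
                         collection_size=100))
```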
APA, Harvard, Vancouver, ISO, and other styles
15

Triki, Taha. "Filtering and reduction techniques of combinatorial tests." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM019/document.

Full text
Abstract:
The main objective of this thesis is to provide solutions to some combinatorial testing issues. Combinatorial testing consists in generating tests that cover all combinations of defined input values. The first issue addressed is that combinatorial testing can generate a large number of tests that are invalid with respect to the specification of the System Under Test (SUT). These invalid tests are typically those which fail the verification of the precondition of a system operation. They must be discarded from the set of tests used to evaluate the SUT, because they lead to inconclusive verdicts. As a solution, we propose to couple the combinatorial testing technique with an animation technique that relies on a specification to filter out invalid tests. In our work, combinatorial tests are generated from a test pattern, defined mainly as a sequence of operation calls together with a set of values for their parameters. The unfolding of a complex test pattern, where many operation calls and/or input values are used, may be subject to combinatorial explosion, making it impossible to derive valid tests from the pattern. This is the second issue of this thesis. As a solution, we propose an incremental unfolding and animation process that filters out invalid tests at an early stage (in the operation sequence) and therefore masters the combinatorial explosion. Other filtering mechanisms are proposed to discard tests which do not cover certain operation behaviours or do not fulfil a given property. Finally, the test suites generated from a test pattern can be too large to execute on the SUT due to limited memory or CPU resources. This is known as the test suite reduction problem, the third issue of this thesis. As a solution, we propose a new test suite reduction technique based on annotations (called tags) inserted in the source code or the specification of the SUT. The execution/animation of tests generates a trace of the covered annotations; based on this trace, a family of equivalence relations is proposed to reduce a test suite, using criteria on the order and number of repetitions of covered tags.
APA, Harvard, Vancouver, ISO, and other styles
16

Yu, Kai. "Statistical Learning Approaches to Information Filtering." Diss., lmu, 2004. http://nbn-resolving.de/urn:nbn:de:bvb:19-25120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Dolbear, Catherine. "Personalised information filtering using event causality." Thesis, University of Oxford, 2004. http://ora.ox.ac.uk/objects/uuid:31e94de4-5dda-4312-968b-d0ef34dea8e2.

Full text
Abstract:
Previous research on multimedia information filtering has mainly concentrated on key frame identification and video skim generation for browsing purposes, however applications requiring the generation of summaries as the final product for user consumption are of equal scientific and commercial interest. Recent advances in computer vision have enabled the extraction of semantic events from an audio-visual signal, so it can be assumed for our purposes that such semantic labels are already available for use. We concentrate instead on developing methods to prioritise these semantic elements for inclusion in a summary which can be personalised to meet a particular user's needs. Our work differentiates itself from that in the literature as it is driven by the results of a knowledge elicitation study with expert summarisers. The experts in our study believe that summaries structured as a narrative are better able to convey the content of the original data to a user. Motivated by the information filtering problem, the primary contribution of this thesis is the design and implementation of a system to summarise sequences of events by automatic modelling of the causal relationships between them. We show, by comparison against summaries generated by experts and with the introduction of a new coherence metric, that modelling the causal relationships between events increases the coherence and accuracy of summaries. We suggest that this claim is valid, not only in the domain of soccer highlights generation, in which we carry out the bulk of our experiments, but also in any other domain in which causal relationships can be identified between events. This proposal is tested by applying our summarisation system to another, significantly different domain, that of business meeting summarisation, using the soccer training set and a manually generated ontology mapping. We introduce the concept of a context-group of causally related events as a first step towards modelling narrative episodes and present a comparison between a case based reasoning and a two-stage Markov model approach to summarisation. For both methods we show that by including entire context-groups in the summary, rather than single events in isolation, more accurate summaries can be generated. Our approach to personalisation biases a summary according to particular narrative plotlines using different subsets of the training data. Results show that the number of instances of certain event classes can be increased by biasing the training set appropriately. This method gives very similar results to a standard weighting method, while avoiding the need to tailor the weights to a particular application domain.
APA, Harvard, Vancouver, ISO, and other styles
18

Shardanand, Upendra. "Social information filtering for music recommendation." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/11667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Olsson, Tomas. "Information Filtering with Collaborative Interface Agents." Thesis, SICS, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-22235.

Full text
Abstract:
This report describes a distributed approach to social filtering based on the agent metaphor. Firstly, previous approaches are described, such as cognitive filtering and social filtering. Then a couple of previously implemented systems are presented and then a new system design is proposed. The main goal is to give the requirements and design of an agent-based system that recommends web-documents. The presented approach combines cognitive and social filtering to get the advantages from both techniques. Finally, a prototype implementation called WebCondor is described and results of testing the system are reported and discussed.
APA, Harvard, Vancouver, ISO, and other styles
20

Boudreau, Daniel. "Joint time delay estimation and adaptive filtering techniques." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=70177.

Full text
Abstract:
This thesis studies adaptive filters for the case in which the main input signal is not synchronized with the reference signal. The asynchrony is modelled by a time-varying delay. This delay has to be estimated and compensated. This is accomplished by designing and investigating joint delay estimation and adaptive filtering algorithms. First, a joint maximum likelihood estimator is derived for Gaussian input signals. It is used to define a readily implementable joint estimator, composed of an adaptive delay element and an adaptive filter. Next, two estimation criteria are investigated with that structure. The minimum mean squared error criterion is used with a joint steepest-descent adaptive algorithm and with a joint least-mean-square (LMS) adaptive algorithm. The general convergence conditions of the joint steepest-descent algorithm are derived. The joint LMS algorithm is analysed in terms of joint convergence in the mean and in the mean square. Finally, a joint recursive least squares adaptive algorithm is investigated in conjunction with the exponentially weighted least squares criterion. Experimental results are obtained for these different adaptive algorithms in order to verify the analyses. The results show that the joint algorithms improve the performance of the conventional adaptive filtering techniques.
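A heavily simplified sketch of the same idea follows: estimate the delay between the two signals, compensate it, and let an LMS filter adapt on the aligned reference. Here the delay is found once from a cross-correlation peak rather than adapted jointly, and all signals and parameters are invented, so this illustrates only the structure, not the thesis's joint algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic setup: the main input d is the reference x delayed by an unknown
# number of samples and passed through a short FIR channel, plus noise.
N, true_delay = 4000, 7
h_true = np.array([1.0, 0.5, -0.3])
x = rng.standard_normal(N)
d = np.convolve(np.roll(x, true_delay), h_true)[:N] + 0.01 * rng.standard_normal(N)

# Step 1 -- delay estimation: peak of the cross-correlation between d and x.
max_lag = 20
corr = [np.dot(d[max_lag:], x[max_lag - k: N - k]) for k in range(max_lag)]
delay_hat = int(np.argmax(np.abs(corr)))

# Step 2 -- LMS adaptive filter on the delay-compensated reference.
L, mu = 5, 0.01
w = np.zeros(L)
x_del = np.roll(x, delay_hat)
for n in range(L, N):
    u = x_del[n - L + 1: n + 1][::-1]   # most recent L samples, newest first
    e = d[n] - w @ u                    # prediction error
    w = w + mu * e * u                  # LMS coefficient update

print("estimated delay:", delay_hat)
print("filter taps:", np.round(w, 2))
```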
APA, Harvard, Vancouver, ISO, and other styles
21

Watson, Vincent C. "Angular rate estimation by multiplicative Kalman filtering techniques." Thesis, Monterey, Calif. : Naval Postgraduate School, 2003. http://handle.dtic.mil/100.2/ADA420668.

Full text
Abstract:
Thesis (M.S. in Astronautical Engineering)--Naval Postgraduate School, December 2003.
"December 2003". Thesis advisor(s): Cristi, Roberto ; Agrawal, Brij. Includes bibliographical references (p. 53). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
22

Kaewkham-ai, Boonsri. "Improving Dst index prediction using Kalman filtering techniques." Thesis, University of Sheffield, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.486700.

Full text
Abstract:
Over the past few decades, short-term Dst index prediction using different techniques has been proposed, since effects of space weather cause many problems for operational systems on Earth. Using input-output methods, the coupling function between solar wind parameters and the Dst index is found to be nonlinear. In practice, observed data are provided in almost real time but are very noisy. To address noisy data and nonlinear dynamics, Kalman filtering techniques are used. Furthermore, the measurement noise, which is derived from the error between the provisional Dst and the quick-look Dst, is found to be non-white and is modelled using an ARMA structure. Four existing models are chosen and a new model using a NARX structure is proposed. Parameter estimation using joint and dual estimation techniques is studied. A comparison between models with Kalman filtering techniques and models alone is made, and it is found that Kalman filtering methods can improve prediction performance and reduce prediction error.
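For readers unfamiliar with the filter itself, the predict/update cycle at the core of any such scheme looks as follows. The scalar random-walk model and noise levels in the example are placeholders, not the NARX/ARMA models used in the thesis.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter:
    x, P  -- prior state estimate and covariance
    z     -- new (noisy) measurement
    F, H  -- state transition and measurement matrices
    Q, R  -- process and measurement noise covariances."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy scalar example: track a slowly varying index from noisy observations.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[4.0]])
x, P = np.array([0.0]), np.array([[10.0]])
for z in [-12.0, -15.0, -20.0, -18.0]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
    print(float(x[0]))
```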
APA, Harvard, Vancouver, ISO, and other styles
23

Jones, Philip Andrew. "Techniques in Kalman Filtering for Autonomous Vehicle Navigation." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/78128.

Full text
Abstract:
This thesis examines the design and implementation of the navigation solution for an autonomous ground vehicle equipped with global positioning system (GPS) receivers, an inertial measurement unit (IMU), and wheel speed sensors (WSS), using the framework of Kalman filtering (KF). To demonstrate the flexibility of the KF, several methods are explored and implemented, such as constraints, multi-rate data, and cascading filters to augment the measurement matrix of a main filter. GPS and IMU navigation are discussed, along with common errors and disadvantages of each type of navigation system. It is shown that the coupling of sensors, constraints, and self-alignment techniques provides an accurate solution to the navigation problem for an autonomous vehicle. Filter divergence is discussed during times when the states are unobservable. Post-processed data are analyzed to demonstrate performance under several test cases, such as GPS outage, and the effect that the initial calibration and alignment has on the accuracy of the solution.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
24

Fraser, Stewart Ian. "Multiresolutional techniques for digital image filtering and watermarking." Thesis, University of Aberdeen, 2006. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU214436.

Full text
Abstract:
This thesis examines the use of multiresolutional techniques in two areas of digital image processing: denoising (speckle reduction) and watermarking. A speckle reduction algorithm operating in the wavelet à trous domain is proposed. This novel algorithm iteratively approaches the difference between the estimated noise standard deviation (in an image) and the removed noise standard deviation. A method for ascertaining the overall performance of a filter, based upon noise removal and edge preservation, is presented. Comparisons between the novel denoising algorithm and existing denoising filters are carried out using test images and medical ultrasound images. Results show that the novel denoising algorithm reduces speckle drastically whilst maintaining sharp edges. Two distinct areas of digital image watermarking are addressed in this thesis: (1) the presentation of a novel watermarking system for copyright protection and (2) a fair comparison of the effects of incorporating Error Correcting Codes (ECC) into various watermarking systems. The newly proposed watermarking system is blind, quantization based and operates in the wavelet domain. Tests carried out on this novel system show it to be highly robust and reliable. An extensive and fair study of the effects of incorporating ECCs (Bose, Chaudhuri and Hocquenghem (BCH) and repetition codes) into various watermarking systems is carried out. Spatial, Discrete Cosine Transform (DCT) and wavelet based systems are tested. It is shown that it is not always beneficial to add ECCs into a watermarking system.
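The core wavelet-shrinkage operation behind such denoising can be sketched in a few lines. This single-level Haar version with a fixed threshold is only a toy stand-in for the iterative, à trous-domain algorithm described in the abstract.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """Single-level Haar wavelet shrinkage: transform, soft-threshold the
    detail coefficients, inverse transform."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                      # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty_like(s)
    out[0::2], out[1::2] = even, odd
    return out[:len(signal)]

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 32)               # piecewise-constant test signal
noisy = clean + 0.2 * rng.standard_normal(clean.size)
print(np.round(haar_denoise(noisy, threshold=0.3)[:8], 2))
```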
APA, Harvard, Vancouver, ISO, and other styles
25

Lanquillon, Carsten. "Enhancing text classification to improve information filtering." [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=963801805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Khan, Imran. "Personal adaptive web agent for information filtering." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq23361.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Sheth, Beerud Dilip. "A learning approach to personalized information filtering." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/37998.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 96-100).
by Beerud Dilip Sheth.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
28

Ayers, James Thomas. "Structural damage diagnostics via wave propagation-based filtering techniques." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34723.

Full text
Abstract:
Structural health monitoring (SHM) of aerospace components is a rapidly emerging field due in part to commercial and military transport vehicles remaining in operation beyond their designed life cycles. Damage detection strategies are sought that provide real-time information of the structure's integrity. One approach that has shown promise to accurately identify and quantify structural defects is based on guided ultrasonic wave (GUW) inspections, where low amplitude attenuation properties allow for long range and large specimen evaluation. One drawback to GUWs is that they exhibit a complex multi-modal response, such that each frequency corresponds to at least two excited modes, and thus intelligent signal processing is required for even the simplest of structures. In addition, GUWs are dispersive, whereby the wave velocity is a function of frequency, and the shape of the wave packet changes over the spatial domain, requiring sophisticated detection algorithms. Moreover, existing damage quantification measures are typically formulated as a comparison of the damaged to undamaged response, which has proven to be highly sensitive to changes in environment, and therefore often unreliable. As a response to these challenges inherent to GUW inspections, this research develops techniques to locate and estimate the severity of the damage. Specifically, a phase gradient based localization algorithm is introduced to identify the defect position independent of excitation frequency and damage size. Mode separation through the filtering technique is central in isolating and extracting single mode components, such as reflected, converted, and transmitted modes that may arise from the incident wave impacting a damage. Spatially-integrated single and multiple component mode coefficients are also formulated with the intent to better characterize wave reflections and conversions and to increase the signal to noise ratios. The techniques are applied to damaged isotropic finite element plate models and experimental data obtained from Scanning Laser Doppler Vibrometry tests. Numerical and experimental parametric studies are conducted, and the current strengths and weaknesses of the proposed approaches are discussed. In particular, limitations to the damage profiling characterization are shown for low ultrasonic frequency regimes, whereas the multiple component mode conversion coefficients provide excellent noise mitigation. Multiple component estimation relies on an experimental technique developed for the estimation of Lamb wave polarization using a 1D Laser Vibrometer. Lastly, suggestions are made to apply the techniques to more structurally complex geometries.
APA, Harvard, Vancouver, ISO, and other styles
29

Nilsson, Fredrik. "Diagnosis of a Truck Engine using Nonlinear Filtering Techniques." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8959.

Full text
Abstract:

Scania CV AB is a large manufacturer of heavy duty trucks that, with increasingly strict emission legislation, has a rising demand for an effective On Board Diagnosis (OBD) system. One idea for improving the OBD system is to employ a model for the construction of an observer-based diagnosis system. The proposal in this report is, because the model is nonlinear, to use a nonlinear filtering method for improving the needed state estimates. Two nonlinear filters are tested, the Particle Filter (PF) and the Extended Kalman Filter (EKF). The primary objective is to evaluate the use of the PF for Fault Detection and Isolation (FDI), and to compare the result against the use of the EKF.

With the information provided by the PF and the EKF, two residual based diagnosis systems and two likelihood based diagnosis systems are created. The results with the PF and the EKF are evaluated for both types of systems using real measurement data. It is shown that the four systems give approximately equal results for FDI, with the exception that using the PF is more computationally demanding than using the EKF. There are however some indications that the PF, due to the nonlinearities, could offer more if enough CPU time is available.
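The mechanics the PF relies on (propagate particles, weight them by the measurement likelihood, resample) can be shown on a toy nonlinear model. Everything below, including the measurement function and noise levels, is invented for illustration and is unrelated to the actual engine model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear measurement model: y = x + 0.05*x^3 + noise, with the state
# following a slow random walk.
T, proc_std, meas_std = 50, 0.05, 0.2
x_true = np.cumsum(rng.normal(0.0, proc_std, T)) + 1.0
y = x_true + 0.05 * x_true ** 3 + rng.normal(0.0, meas_std, T)

n = 1000
particles = rng.normal(1.0, 0.5, n)
estimates = []
for k in range(T):
    particles = particles + rng.normal(0.0, proc_std, n)        # predict
    pred_meas = particles + 0.05 * particles ** 3
    w = np.exp(-0.5 * ((y[k] - pred_meas) / meas_std) ** 2)     # weight
    w /= w.sum()
    estimates.append(float(w @ particles))                      # posterior mean
    particles = particles[rng.choice(n, size=n, p=w)]           # resample

print("true:", np.round(x_true[-3:], 2), "estimated:", np.round(estimates[-3:], 2))
```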

APA, Harvard, Vancouver, ISO, and other styles
30

Murphy, Timothy A. "MLS Flight inspection techniques: Digital filtering and coordinate transformation." Ohio : Ohio University, 1985. http://www.ohiolink.edu/etd/view.cgi?ohiou1184070645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ottersten, Johan. "Sparse Estimation Techniques for l1 Mean and Trend Filtering." Thesis, KTH, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-179142.

Full text
Abstract:
It is often desirable to find the underlying trends in time series data. This is a well-known signal processing problem that has many applications in areas such as financial data analysis, climatology, biological and medical sciences etc. Mean filtering finds a piece-wise constant trend in the data while trend filtering finds a piece-wise linear trend. When the signal is noisy, the main difficulty is finding the changing points in the data. These are the points where the mean or the trend changes. We focus on a quadratic cost function with a penalty term on the number of changing points. We use the ℓ1 norm for the penalty term as it leads to a sparse solution. This is attractive because the problem is convex in the unknown parameters and well known optimization algorithms exist for this problem. We investigate the Alternating Direction Method of Multipliers (ADMM) algorithm and two fast taut string methods in terms of computational speed and performance. A well known problem is the occurrence of false changing point detection. We incorporate a technique to remove these false changing points into the fast mean filtering algorithm, resulting in an efficient method with fewer false detections. We also propose an extension of the fast mean filtering technique to the trend filtering problem. This is an approximate solution that works well for signals with low noise levels.
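The ADMM solver for the ℓ1 mean-filtering problem mentioned above fits in a few lines. The regularisation weight, penalty parameter and toy signal below are illustrative choices, not those of the thesis.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_mean_filter(y, lam=1.0, rho=1.0, iters=200):
    """ADMM for the l1 mean-filtering (total-variation) problem
        minimize 0.5*||x - y||^2 + lam*||D x||_1,
    where D is the first-difference matrix; a sparse Dx makes x
    piece-wise constant."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n first-difference matrix
    A = np.eye(n) + rho * D.T @ D           # x-update system matrix (fixed)
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    x = y.copy()
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)
        u = u + D @ x - z
    return x

rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(40), 2 * np.ones(40)]) + 0.3 * rng.standard_normal(80)
x = l1_mean_filter(y, lam=2.0)
print(np.round(x[35:45], 2))               # level change around index 40
```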
APA, Harvard, Vancouver, ISO, and other styles
32

Butt, David. "An investigation of harmonic correction techniques using active filtering." Thesis, University of Nottingham, 1999. http://eprints.nottingham.ac.uk/12981/.

Full text
Abstract:
This thesis presents an investigation of techniques used to mitigate the undesirable effects of harmonics in power systems. The first part of this research develops an effective and useful comparison of alternative AC-DC converter topologies. In particular, a full evaluation of the circuit first proposed by Enjeti (known here as the Texas circuit) with a capacitively smoothed output voltage is made, specifically for operation as a 'clean power' supply interface for a variable speed drive (VSD). This mode of operation has not previously been reported in research literature. Simulation and experimental results verify the performance of the circuit and demonstrate that it draws a current with low harmonic content, but the circuit has a number of problems. This part of the research concludes that the six-switch rectifier is the most viable circuit for operation as a supply interface for a VSD due to its bidirectional power flow capability and its excellent versatility of performance. The second part of this research exploits the versatility of the six-switch rectifier and develops the current control strategy for operation of the circuit as a sinusoidal frontend and as a shunt active filter. It is found that the 'traditional' current control method suffers a significant drop in performance when the switching frequency is constrained to 2kHz due to high power levels. The major development in this thesis was an advanced current control strategy, where additional rotating frames of reference are introduced, thereby converting previously oscillatory current values to d.c. values. This is demonstrated to result in vastly improved immunity to disturbances such as supply distortion and a greatly improved steady state performance. In addition, the new controller requires no additional circuitry (apart from current transducers on the load current) and can be applied to an existing sinusoidal front end. Simulation results confirm the operation of the controller with the circuit operating as both a shunt active filter and as a sinusoidal front end. The new controller has been implemented on an experimental rig exhibiting the features of a high power inverter, i.e. low switching frequency and significant device turn-on and turn-off times, and the results confirm the superior performance of the new current controller.
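The key trick in the advanced controller, transforming quantities into a rotating reference frame so that sinusoidal components at that frame's frequency become d.c. values, is the standard Park transform. The sketch below uses an illustrative balanced 50 Hz three-phase set, not the thesis's converter currents.

```python
import numpy as np

def abc_to_dq(a, b, c, theta):
    """Park transform: project three-phase quantities onto a frame rotating
    at angle theta. A balanced sinusoid at the frame frequency becomes a
    constant (d.c.) pair, which is what makes steady-state control of it easy."""
    k = 2.0 / 3.0
    d = k * (a * np.cos(theta) + b * np.cos(theta - 2 * np.pi / 3) + c * np.cos(theta + 2 * np.pi / 3))
    q = -k * (a * np.sin(theta) + b * np.sin(theta - 2 * np.pi / 3) + c * np.sin(theta + 2 * np.pi / 3))
    return d, q

t = np.linspace(0.0, 0.04, 400)            # two cycles at 50 Hz
theta = 2 * np.pi * 50 * t
ia = np.cos(theta)
ib = np.cos(theta - 2 * np.pi / 3)
ic = np.cos(theta + 2 * np.pi / 3)
d, q = abc_to_dq(ia, ib, ic, theta)
print(np.round(d[:3], 3), np.round(q[:3], 3))   # ~[1, 1, 1] and ~[0, 0, 0]
```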
APA, Harvard, Vancouver, ISO, and other styles
33

Squeri, Daniel Stephen. "A Sample of Collaborative Filtering Techniques and Evaluation Metrics." Kent State University Honors College / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ksuhonors1525806431425122.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Motwani, Amit. "Interval Kalman filtering techniques for unmanned surface vehicle navigation." Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3368.

Full text
Abstract:
This thesis is about a robust filtering method known as the interval Kalman filter (IKF), an extension of the Kalman filter (KF) to the domain of interval mathematics. The key limitation of the KF is that it requires precise knowledge of the system dynamics and associated stochastic processes. In many cases however, system models are at best, only approximately known. To overcome this limitation, the idea is to describe the uncertain model coefficients in terms of bounded intervals, and operate the filter within the framework of interval arithmetic. In trying to do so, practical difficulties arise, such as the large overestimation of the resulting set estimates owing to the over conservatism of interval arithmetic. This thesis proposes and demonstrates a novel and effective way to limit such overestimation for the IKF, making it feasible and practical to implement. The theory developed is of general application, but is applied in this work to the heading estimation of the Springer unmanned surface vehicle, which up to now relied solely on the estimates from a traditional KF. However, the IKF itself simply provides the range of possible vehicle headings. In practice, the autonomous steering system requires a single, point-valued estimate of the heading. In order to address this requirement, an innovative approach based on the use of machine learning methods to select an adequate point-valued estimate has been developed. In doing so, the so called weighted IKF (wIKF) estimate provides a single heading estimate that is robust to bounded model uncertainty. In addition, in order to exploit low-cost sensor redundancy, a multi-sensor data fusion algorithm compatible with the wIKF estimates and which additionally provides sensor fault tolerance has been developed. All these techniques have been implemented on the Springer platform and verified experimentally in a series of full-scale trials, presented in the last chapter of the thesis. The outcomes demonstrate that the methods are both feasible and practicable, and that they are far more effective in providing accurate estimates of the vehicle’s heading than the conventional KF when there is uncertainty in the system model and/or sensor failure occurs.
APA, Harvard, Vancouver, ISO, and other styles
35

Alam, Syed Asad. "Techniques for Efficient Implementation of FIR and Particle Filtering." Doctoral thesis, Linköpings universitet, Datorteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-124195.

Full text
Abstract:
FIR filters occupy a central place in many signal processing applications which either alter the shape, frequency or the sampling frequency of the signal. FIR filters are used because of their stability and possibility to have linear phase, but they require a high filter order to achieve the same magnitude specifications as compared to IIR filters. Depending on the size of the required transition bandwidth the filter order can range from tens to hundreds to even thousands. Since the implementation of the filters in the digital domain requires multipliers and adders, high filter orders translate to a large number of these arithmetic units. Research towards reducing the complexity of FIR filters has been going on for decades and the techniques used can be roughly divided into two categories: reduction in the number of multipliers and simplification of the multiplier implementation.  One technique to reduce the number of multipliers is to use cascaded sub-filters with lower complexity to achieve the desired specification, known as frequency-response masking (FRM). One of the sub-filters is an upsampled model filter whose band edges are an integer multiple, termed the period L, of the target filter's band edges. Other sub-filters may include complement and masking filters which filter different parts of the spectrum to achieve the desired response. From an implementation point of view, time-multiplexing is beneficial because generally the allowable maximum clock frequency supported by the current state-of-the-art semiconductor technology does not correspond to the application-bound sample rate. A combination of these two techniques plays a significant role towards efficient implementation of FIR filters. Part of the work presented in this dissertation is architectures for time-multiplexed FRM filters that benefit from the inherent sparsity of the periodic model filters. These time-multiplexed FRM filters not only reduce the number of multipliers but also lower the memory usage. Although the FRM technique requires a higher number of delay elements, it results in fewer memories and more energy-efficient memory schemes when time-multiplexed. Different memory arrangements and memory access schemes have also been discussed and compared in terms of their efficiency when using both single- and dual-port memories. An efficient pipelining scheme has been proposed which reduces the number of pipelining registers while achieving similar clock frequencies. The single optimal point where the number of multiplications is minimum for non-time-multiplexed FRM filters is shown to become a function of both the period, L, and the time-multiplexing factor, M. This means that the minimum number of multipliers does not always correspond to the minimum number of multiplications, which also increases the flexibility of implementation. These filters are shown to achieve power reductions between 23% and 68% for the considered examples. To simplify the multiplier, alternative number systems like the logarithmic number system (LNS) have been used to implement FIR filters, which reduces multiplications to additions. FIR filters are realized by directly designing them using integer linear programming (ILP) in the LNS domain in the minimax sense under finite word length constraints. The branch and bound algorithm, a typical algorithm for ILP problems, is implemented based on LNS integers and several branching strategies are proposed and evaluated. The filter coefficients thus obtained are compared with the traditional finite word length coefficients obtained in the linear domain.
It is shown that LNS FIR filters provide a better approximation error compared to a standard FIR filter for a given coefficient word length. FIR filters also offer an opportunity for complexity reduction by implementing the multipliers using Booth or standard high-radix multiplication. Both of these multiplication schemes generate pre-computed multiples of the multiplicand which are then selected based on the encoded bits of the multiplier. In transposed direct form (TDF) FIR filters, one input data sample is multiplied with a number of coefficients, and complexity can be reduced by sharing the pre-computation of the multiples of the input data across all multiplications. Part of this work includes a systematic and unified approach to the design of such computation-sharing multipliers and a comparison of the two forms of multiplication. It also gives closed-form expressions for the cost of different parts of the multiplication and an overview of various ways to implement the select unit with respect to the design of multiplexers. Particle filters are used to solve problems that require estimation of the state of a system. Improved resampling schemes for reducing the latency of the resampling stage are proposed, which use a pre-fetch technique to reduce the latency by 50% to 95% depending on the number of pre-fetches. Generalized division-free architectures and compact memory structures are also proposed that map to different resampling algorithms, help in reducing the complexity of the multinomial resampling algorithm and reduce the number of memories required by up to 50%.
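The FRM idea of building a sharp filter from cheap pieces can be illustrated with a minimal sketch: a relaxed prototype is upsampled by L (so only every L-th tap is non-zero) and a cheap masking filter removes the unwanted spectral images. The orders and band edges are illustrative, and the complementary masking branch of a full FRM structure is omitted.

```python
import numpy as np
from scipy.signal import firwin, freqz

L = 4
model = firwin(31, 0.5)                 # prototype with a relaxed transition band
upsampled = np.zeros(len(model) * L - (L - 1))
upsampled[::L] = model                  # F(z) -> F(z^L): only every L-th tap is non-zero
masking = firwin(41, 1.0 / (2 * L) + 0.05)   # cheap filter that keeps the first image only
overall = np.convolve(upsampled, masking)

w, H = freqz(overall, worN=512)
print("non-zero taps in the periodic branch:", np.count_nonzero(upsampled))
print("passband gain ~", round(abs(H[5]), 2), " stopband gain ~", round(abs(H[300]), 3))
```

The sparsity of the upsampled branch (31 non-zero taps out of 121) is exactly what the time-multiplexed architectures in the thesis exploit.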
APA, Harvard, Vancouver, ISO, and other styles
36

Tam, Ming-wai, and 譚銘威. "Scalable collaborative filtering using updatable indexing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40687351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Tam, Ming-wai. "Scalable collaborative filtering using updatable indexing." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B40687351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Akkapeddi, Raghu C. "Grouping annotating and filtering history information in VKB." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/227.

Full text
Abstract:
History mechanisms available in hypertext systems allow users access to past interactions with the system and help users incorporate those interactions into the current context. The history information can be useful to both the system and the user. The Visual Knowledge Builder (VKB) creates spatial hypertexts: visual workspaces for collecting, organizing, and sharing information. It is based on prior work on VIKI. VKB records all edit events and presents them in the form of a "navigable history" as end-users work within an information workspace. My thesis explores attaching user interpretations of history via the grouping and annotation of edit events. Annotations can take the form of a plain-text statement or one or more attribute/value pairs attached to individual events or groups of events in the list. Moreover, I explore the value of history event filtering, which limits the edits and groups presented to those that match user descriptions. My contribution in this thesis is the addition of mechanisms whereby users can cope with larger history records in VKB through the grouping, annotating, and filtering of history information.
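A minimal sketch of the kind of attribute/value filtering described above, using a hypothetical EditEvent structure; the names and fields are invented for illustration and are not VKB's actual data model.

```python
# Hypothetical sketch of filtering annotated edit-history events; not VKB's actual API.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EditEvent:
    kind: str                                             # e.g. "move", "resize", "edit-text"
    note: str = ""                                        # plain-text annotation
    attrs: Dict[str, str] = field(default_factory=dict)   # attribute/value annotations

def matches(event: EditEvent, wanted: Dict[str, str]) -> bool:
    """An event matches a user description if every requested attribute/value pair is present."""
    return all(event.attrs.get(k) == v for k, v in wanted.items())

def filter_history(history: List[EditEvent], wanted: Dict[str, str]) -> List[EditEvent]:
    return [e for e in history if matches(e, wanted)]

history = [
    EditEvent("move", note="rearranged cluster", attrs={"author": "raghu", "phase": "triage"}),
    EditEvent("edit-text", attrs={"author": "frank"}),
]
print(filter_history(history, {"author": "raghu"}))       # keeps only the first event
```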
APA, Harvard, Vancouver, ISO, and other styles
39

Rydberg, Christoffer. "Time Efficiency of Information Retrieval with Geographic Filtering." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172918.

Full text
Abstract:
This study addresses the question of the time efficiency of two major models within Information Retrieval (IR): the Extended Boolean Model (EBM) and the Vector Space Model (VSM). Both models use the same weighting scheme, based on term frequency-inverse document frequency (tf-idf). The VSM uses a cosine score computation to rank document-query similarity. In the EBM, p-norm scores are used, which rank documents not just by matching terms but also by taking into account the Boolean interconnections between the terms in the query. Additionally, this study investigates how documents with a single geographic affiliation can be retrieved based on features such as the location and geometry of the geographic surface. Furthermore, we want to answer how best to integrate this geographic search with the two IR models previously described. From previous research we conclude that using an index based on Z-Space Filling Curves (Z-SFC) is the best approach for documents containing a single geographic affiliation. When documents are retrieved from the Z-SFC index, there is no guarantee that the retrieved documents are relevant for the search area; it is, however, guaranteed that only the retrieved documents can be relevant. Furthermore, the ranked output of the IR models gives a great advantage to the geographic search, namely that we can focus on documents with high relevance. We intersect the results from one of the IR models with the results from the Z-SFC index and sort the resulting list of documents by relevance. At this point we can iterate over the list, check for intersections between each document's geometry and the search geometry, and only retrieve documents whose geometries are relevant for the search. Since the user is only interested in the top results, we can stop as soon as a sufficient number of results has been obtained. The conclusion of this study is that the VSM is an easy-to-implement, time-efficient retrieval model. It is inferior to the EBM in the sense that it is a rather simple bag-of-words model, while the EBM allows term conjunctions and disjunctions to be specified. The geographic search has been shown to be time efficient and independent of which of the two IR models is used. The gap in efficiency between the VSM and the EBM, however, increases drastically as the query gets longer and more results are obtained. Depending on the requirements of the user, the collection size, the length of the queries, and so on, the benefits of the EBM might outweigh its lower performance. For search engines with a large document collection and many users, however, it is likely to be too slow.
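The integration step described in the abstract, iterating over the relevance-ranked documents that survive the coarse Z-SFC filter and stopping once enough geometrically relevant hits are found, could look roughly like the sketch below; zsfc_candidates, geometry_of, and intersects are assumed helpers, not the thesis's actual code.

```python
# Sketch: combine an IR ranking with a coarse Z-SFC geographic filter; helper names are assumptions.
from typing import Callable, Iterable, List, Set, Tuple

def geo_filtered_top_k(
    ranked: Iterable[Tuple[str, float]],        # (doc_id, score), best first, from the VSM or EBM
    zsfc_candidates: Set[str],                  # doc ids from the Z-SFC index (superset of the relevant ones)
    geometry_of: Callable[[str], object],       # doc id -> stored geometry
    intersects: Callable[[object, object], bool],
    search_geometry: object,
    k: int,
) -> List[Tuple[str, float]]:
    hits = []
    for doc_id, score in ranked:
        if doc_id not in zsfc_candidates:
            continue                            # coarse filter: this document cannot be geographically relevant
        if intersects(geometry_of(doc_id), search_geometry):
            hits.append((doc_id, score))
            if len(hits) == k:                  # the user only wants the top results, so stop early
                break
    return hits
```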
APA, Harvard, Vancouver, ISO, and other styles
40

Forbes, Jason. "Discrete signal processing techniques for power converters : multi-carrier modulation and efficient filtering techniques." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/45446.

Full text
Abstract:
Digital control has become ubiquitous in the field of power electronics due to its ease of implementation, reusability, and flexibility. In practice, engineers have nevertheless been hesitant to move from the more traditional analog control methods to digital control because of the unfamiliar theory, the relatively complicated implementation, and various challenges associated with digital quantization. This thesis presents discrete signal processing theory to solve issues in digitally controlled power converters, including reference generation and filtering, and describes advancements made in the field of digital control of dc-ac and ac-dc power converters. First, a multi-carrier PWM strategy is proposed for the accurate and computationally inexpensive generation of sinusoidal signals. This method aims to reduce the cost of implementing a sine-wave generator by reducing both the memory and the computational requirements. The technique, backed by theoretical and experimental evidence, is simple to implement and does not rely on any specialized hardware. The method was simulated and experimentally implemented in a voltage-controlled PWM inverter and can be extended to any application involving the digital generation of periodic signals. The second advancement described in this thesis is the use of simple digital filters to improve the response time of single-phase active rectifiers. Under traditional analog control strategies, the bandwidth of an active rectifier is unduly restricted in order to reduce unwanted harmonic distortion. This work investigates digital filters as a means to improve the bandwidth and thereby create a faster, more efficient ac-dc power converter. Finally, a moving average filter is proposed, owing to its simple implementation and minor computational burden, as an efficient means to expand the bandwidth. Since moving average filters are well known and widely understood in industry, the proposed filter is an attractive solution for practicing engineers. The theory developed in this thesis is verified through simulations and experiments.
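Since the abstract singles out the moving average filter for its simple implementation and small computational burden, a minimal running-sum form of it is sketched below; the window length and input samples are illustrative.

```python
# Running-sum moving average filter: one add, one subtract, and one multiply per sample.
from collections import deque

class MovingAverage:
    def __init__(self, n: int):
        self.n = n
        self.buf = deque([0.0] * n)   # last n input samples
        self.acc = 0.0                # running sum of the buffer

    def step(self, x: float) -> float:
        self.acc += x - self.buf.popleft()
        self.buf.append(x)
        return self.acc / self.n      # y[k] = (1/n) * sum of the last n inputs

ma = MovingAverage(8)
samples = [1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
filtered = [ma.step(s) for s in samples]
```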
APA, Harvard, Vancouver, ISO, and other styles
41

Scott, Hugh R. R. "Multiresolution techniques for audio signal restoration." Thesis, University of Warwick, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Bizer, Christian. "Quality-driven Information Filtering in the Context of web-based Information Systems." [S.l.] : [s.n.], 2007. http://www.diss.fu-berlin.de/2007/217/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ma, Chon Teng. "Biopotential readout front-end circuits using frequency-translation filtering techniques." Thesis, University of Macau, 2010. http://umaclib3.umac.mo/record=b2182904.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Forney, Fredric D. "Acoustic noise removal by combining wiener and wavelet filtering techniques." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA350427.

Full text
Abstract:
Thesis (M.S. in Management), Naval Postgraduate School, June 1998. Thesis advisor(s): Monique P. Fargues, Ralph Hippenstiel. Includes bibliographical references (p. 133-135). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
45

Irniger, Christophe-André. "Graph matching filtering databases of graphs using machine learning techniques." Berlin Aka, 2005. http://deposit.ddb.de/cgi-bin/dokserv?id=2677754&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Vongurai, Rawin. "Development of digital filtering techniques in three-dimensional TLM models." Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13651/.

Full text
Abstract:
Digital filtering (DF) techniques are receiving significant interest because they can represent fine features such as vias, thin panels, and thin wires in full-field solutions of electromagnetic problems with significant savings in computational cost. A limitation of the technique, however, is that DF can only represent a fine feature as a plane or as an internal boundary; in other words, an internal boundary can represent the electromagnetic properties of a fine feature in only one or two directions. The DF technique is usually combined with time-domain solvers such as the Finite-Difference Time-Domain (FDTD) and Transmission Line Modeling (TLM) methods, both of which are commonly used to investigate the electromagnetic fields in a problem space. Here the TLM method is selected for demonstrating the DF technique. This thesis presents the formulation of TLM in three dimensions in order to investigate the limitations of the DF technique and their solutions. As a result, new techniques have been developed that can be applied to the three-dimensional TLM method in order to represent fine features in three dimensions appropriately. The developed techniques are demonstrated using examples of three-dimensional embedded objects, such as conducting volumes and dielectrics. Their accuracy and efficiency are compared with the standard TLM method in the time and frequency domains. The results show good agreement between these techniques and the standard TLM method.
APA, Harvard, Vancouver, ISO, and other styles
47

Kuchler, Ryan J. "Comparison of channel equalization filtering techniques in underwater acoustic communications." Thesis, Monterey California. Naval Postgraduate School, 2002. http://hdl.handle.net/10945/5887.

Full text
Abstract:
In this thesis, underwater acoustic communications signal processing techniques, which are used to equalize the distortional effects associated with the ocean as a communications channel, are investigated for a shallow water ocean environment. The majority of current signal processing techniques employ a Finite Impulse Response (FIR) filter. Three equalization filters were investigated and presented as alternatives: the passive time-reversed filter, the inverse filter, and the Infinite Impulse Response (IIR) filter. The main advantage of the passive time-reversed filter and the inverse filter is simplicity of design. Bit error rates for the time-reversed filter were consistently around 10^-1, and those for the inverse filter were greater than 10^-1. However, the inability of the passive time-reversed filter to completely eliminate multipath components and the ill-conditioned nature of the inverse filter made it difficult to achieve probability of error results below 10^-1. Research into the development of an array receiver using a time-reversed filter should improve the calculated bit error rates. Simulations of the IIR filter were conducted with limited success. The main advantage of an IIR filter is that fewer parameters are required in the design of the filter; the potential for instability, however, is a significant limitation. Probability of error results were found to be on the order of those for current FIR filters at close ranges. Unfortunately, instability issues arose as the range of the receiver from the source increased. This research on the IIR filter is still at an embryonic stage, whereas research using FIR filters is relatively highly developed. Further research is needed to address the issue of instability in IIR filters in order to make them an effective signal processing technique employable in underwater acoustic communications.
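A passive time-reversed filter of the kind compared in the abstract amounts to convolving the received signal with a time-reversed (conjugated) estimate of the channel impulse response. The sketch below shows only that core operation with an invented two-path channel; it is not the thesis's simulation setup.

```python
# Sketch of passive time-reversal equalization: convolve with the time-reversed channel estimate.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=64)          # illustrative BPSK symbol stream
channel = np.array([1.0, 0.0, 0.0, 0.5])            # invented two-path channel impulse response

received = np.convolve(symbols, channel)            # multipath-distorted signal
tr_filter = np.conj(channel[::-1])                  # time-reversed channel estimate
equalized = np.convolve(received, tr_filter)        # compresses the multipath energy at the correlation peak

# The peak of channel * tr_filter sits at delay len(channel) - 1; sample there to recover the symbols.
delay = len(channel) - 1
decisions = np.sign(equalized[delay:delay + len(symbols)])
```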
APA, Harvard, Vancouver, ISO, and other styles
48

Huang, Ben. "Removing Textured Artifacts from Digital Photos Using Spatial Frequency Filtering." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/148.

Full text
Abstract:
An abstract of the thesis of Ben Huang for the Master of Science in Electrical and Computer Engineering, presented August 12th, 2010. Title: Removing textured artifacts from digital photos by using spatial frequency filtering. Virtually all image processing is now done with digital images. These images, captured with digital cameras, can be readily processed with various types of editing software to serve a multitude of personal and commercial purposes. But not all images are directly captured, and even many of those that are directly captured are not of sufficiently high quality. Digital images are also acquired by scanning old paper images, and the result is often a digital image of poor quality. Textured artifacts on some old paper pictures were designed to help protect the pictures from discoloration. After scanning, however, these textured artifacts show up as annoying textured noise in the digital image, greatly degrading the visual definition of the image on electronic screens. This kind of image noise is academically called global periodic noise: it is a spurious, repetitive pattern that exists consistently throughout the image. There does not appear to be any commercial graphics software with a toolbox to directly resolve this global periodic noise. Even Photoshop, considered to be the most powerful and authoritative graphics software, does not have an effective function to reduce textured noise. This thesis addresses the problem by proposing an alternative graphic filter to what is currently available. To achieve the best image quality in photographic editing, spatial frequency domain filtering is used instead of spatial domain filtering. In the frequency domain, the consistent periodicity of the textured noise leads to well-defined spikes in the frequency transform of the noisy image. When the noise spikes are at a sufficient distance from the image spectrum, they can be removed by reducing their frequency amplitudes. The filtered spectrum may then yield a noise-reduced image through an inverse frequency transform. This thesis proposes a method to reduce periodic noise in the spatial frequency domain; summarizes the differences between the DFT and the DCT, and between the FFT and the fast DCT, in image processing applications; uses the fast DCT as the frequency transform in order to improve both the computational load and the filtered image quality; and develops software that can be implemented as a plug-in for larger graphics software packages to remove textured artifacts from digital images.
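A bare-bones version of the frequency-domain approach described above, transform, attenuate isolated high-magnitude spikes away from the low-frequency image content, and transform back, might look like the sketch below; the spike-detection rule and the protected low-frequency block are illustrative choices, not the tuned filter developed in the thesis.

```python
# Sketch: suppress periodic-texture spikes in the 2D DCT domain; parameters are illustrative.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import median_filter

def remove_textured_noise(image: np.ndarray, protect: int = 32, strength: float = 8.0) -> np.ndarray:
    coeffs = dctn(image.astype(float), norm="ortho")
    magnitude = np.abs(coeffs)
    local = median_filter(magnitude, size=7)          # smooth estimate of the surrounding spectrum
    spikes = magnitude > strength * (local + 1e-9)    # isolated peaks are periodic-noise candidates
    spikes[:protect, :protect] = False                # keep the low-frequency image content intact
    cleaned = np.where(spikes, coeffs * (local / (magnitude + 1e-9)), coeffs)
    return idctn(cleaned, norm="ortho")               # noise-reduced image via the inverse transform
```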
APA, Harvard, Vancouver, ISO, and other styles
49

McGinnity, Shaun Joseph. "Nonlinear estimation techniques for target tracking." Thesis, Queen's University Belfast, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Tong, Shan. "Dynamic physiological information recovery : a sampled-data filtering framework /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20TONG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
