Dissertations on the topic „Precisión y recall“
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 27 dissertations for research on the topic „Precisión y recall“.
Next to every work in the bibliography there is an „Add to bibliography“ option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, if the relevant parameters are available in the metadata.
Browse dissertations from various fields of specialization and compile your bibliography correctly.
Parkin, Jennifer. „Memory for spatial mental models : examining the precision of recall“. Thesis, Loughborough University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415926.
Al-Dallal, Ammar Sami. „Enhancing recall and precision of web search using genetic algorithm“. Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/7379.
Klitkou, Gabriel. „Automatisk trädkartering i urban miljö : En fjärranalysbaserad arbetssättsutveckling“. Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-27301.
Digital urban tree registers serve many purposes and make it easier for cities and municipalities to administer, maintain, and manage their park and street trees. Today, urban tree stands are often mapped manually, with methods that are both labor-intensive and time-consuming. This study aims to develop a workflow for automatically mapping individual trees using existing LiDAR data and orthophotos. Using the LIDAR Analyst and FeatureAnalyst extensions for ArcMap, a tree mapping was carried out over the Östermalm district of the City of Stockholm. After checking against the city's tree database and validating the result by computing precision and recall, it was found that FeatureAnalyst produced the best tree-mapping result. Its trees are represented as polygons, which means that the result, despite its good coverage, is not suitable for identifying individual tree positions. Although LIDAR Analyst produced a less precise mapping result, it yielded good position estimates for individual trees, mainly in areas with even, sparse tree stands. The conclusion is that the two tools compensate for each other's shortcomings: FeatureAnalyst provides acceptable tree coverage, while LIDAR Analyst is better at identifying individual tree positions. A combination of the two results could therefore be used for tree-mapping purposes.
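To make the validation step concrete, the following minimal Python sketch computes position-based precision and recall for a tree mapping; the two-metre tolerance and the greedy nearest-neighbour matching are illustrative assumptions, not the exact procedure used in the thesis:

    import math

    def match_trees(detected, reference, tol=2.0):
        # Greedily match each detected (x, y) position to the nearest
        # still-unmatched reference tree within tol metres.
        unmatched = list(reference)
        tp = 0
        for dx, dy in detected:
            best = None
            for i, (rx, ry) in enumerate(unmatched):
                d = math.hypot(dx - rx, dy - ry)
                if d <= tol and (best is None or d < best[1]):
                    best = (i, d)
            if best is not None:
                unmatched.pop(best[0])
                tp += 1
        fp = len(detected) - tp  # detections with no reference tree nearby
        fn = len(unmatched)      # reference trees that were never matched
        precision = tp / (tp + fp) if detected else 0.0
        recall = tp / (tp + fn) if reference else 0.0
        return precision, recall

    print(match_trees([(0.3, 0.1), (9.0, 9.0)], [(0.0, 0.0), (5.0, 5.0)]))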
Johansson, Ann, and Karolina Johansson. „Utvärdering av sökmaskiner : en textanalys kring utvärderingar av sökmaskiner på Webben“. Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-18323.
Thesis level: D
Carlsson, Bertil. „Guldstandarder : dess skapande och utvärdering“. Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-19954.
Research on producing good automatic summaries has grown steadily in recent years, driven by demand in both the private and public sectors for absorbing more information than is possible today: instead of reading entire reports and informational texts, one wants to be able to read a summary of them and thereby get through more. To know whether these automatic summarizers maintain a good standard, they must be evaluated in some way. This is often done by checking how much information is included in the summary and how much is left out. For this to be verifiable, a so-called gold standard is needed: a summary that serves as the answer key against which the automatically summarized texts are compared.
This report deals with gold standards and how they are created. In the project, five gold standards for informational texts from Försäkringskassan were created and evaluated, with positive results.
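A minimal sketch of the kind of overlap-based evaluation mentioned above, assuming a simple multiset word overlap between a system summary and the gold standard (real evaluations typically use more refined measures such as ROUGE):

    from collections import Counter

    def overlap_scores(candidate, gold):
        # Precision: share of the candidate's words found in the gold standard.
        # Recall: share of the gold standard's words found in the candidate.
        cand = Counter(candidate.lower().split())
        ref = Counter(gold.lower().split())
        overlap = sum((cand & ref).values())
        precision = overlap / sum(cand.values()) if cand else 0.0
        recall = overlap / sum(ref.values()) if ref else 0.0
        return precision, recall

    print(overlap_scores("the report was evaluated", "the report was created and evaluated"))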
Nordh, Andréas. „Musikwebb : En evaluering av webbtjänstens återvinningseffektivitet“. Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-19907.
Santos, Juliana Bonato dos. „Automatizando o processo de estimativa de revocação e precisão de funções de similaridade“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15889.
Traditional database query mechanisms, which use the equality criterion, become inefficient when the stored data contain spelling and format variations. In such cases, it is necessary to use similarity functions instead of boolean operators. Query mechanisms based on similarity functions return a ranking of elements ordered by their score relative to the query object. A threshold value can be used to define which elements in this ranking are relevant and must be returned. However, choosing the appropriate threshold is complex, because it depends on the similarity function used and on the semantics of the queried data. One way to support this choice is to evaluate the quality of the similarity function's results using different threshold values on a database sample. This work presents an automatic method for estimating the quality of similarity functions through recall and precision measures computed for different thresholds. The results obtained by this method can be used as metadata and, given the requirements of a specific application, assist in setting the appropriate threshold value. The process uses clustering methods and cluster validity measures to eliminate human intervention when estimating recall and precision.
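The following Python sketch illustrates the core idea of estimating precision and recall per threshold; it assumes a labelled sample with known matches, whereas the thesis replaces such labels with clustering methods and cluster validity measures:

    def precision_recall_by_threshold(scored, relevant, thresholds):
        # scored: {element: similarity to the query object}
        # relevant: set of elements that truly match the query object
        curves = {}
        for t in thresholds:
            returned = {e for e, s in scored.items() if s >= t}
            tp = len(returned & relevant)
            precision = tp / len(returned) if returned else 1.0
            recall = tp / len(relevant) if relevant else 1.0
            curves[t] = (precision, recall)
        return curves

    scores = {"Joao Silva": 0.92, "J. Silva": 0.85, "Joana Lima": 0.40}
    print(precision_recall_by_threshold(scores, {"Joao Silva", "J. Silva"}, [0.3, 0.8]))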
Chiow, Sheng-wey. „A precision measurement of the photon recoil using large area atom interferometry /“. May be available electronically, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.
Lopes, Miguel. „Inference of gene networks from time series expression data and application to type 1 Diabetes“. Doctoral thesis, Universite Libre de Bruxelles, 2015. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/216729.
Doctorate in Sciences
Afram, Gabriel. „Genomsökning av filsystem för att hitta personuppgifter : Med Linear chain conditional random field och Regular expression“. Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34069.
The new General Data Protection Regulation (GDPR) came into force for all companies within the European Union on 25 May. This means stricter legal requirements for companies that store personal data in any form. The goal of this project is therefore to make it easier for companies to meet the new requirements, by building a tool that scans file systems and visually shows the user, in a graphical interface, which files contain personal data. The tool uses Named Entity Recognition with the Linear Chain Conditional Random Field algorithm, a supervised learning method in machine learning, to find names and addresses in files. The models are trained with different parameters using the Stanford NER library in Java. They are tested on a file containing 45,000 words, for which each model predicts a class for every word, and the models are then compared against each other using precision, recall, and F-score to find the best one. The tool also uses regular expressions to find e-mail addresses, IP numbers, and personal identity numbers. The results for the final machine learning model show that it does not find all names and addresses, but this could be improved by increasing the training data, which would, however, require a more powerful computer than the one used in this project. A study of the structure of the Swedish language would also be needed in order to use the most suitable training parameters.
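As an illustration of the regular-expression part, here is a minimal Python sketch with deliberately simplified patterns; the exact expressions used in the thesis are not reproduced here, so treat these as assumptions for demonstration only:

    import re

    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
        "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
        # Swedish personal identity number: YYMMDD-NNNN, optionally with century digits.
        "personal_number": re.compile(r"\b(?:\d{2})?\d{6}[-+]\d{4}\b"),
    }

    def scan_text(text):
        # Return all matches per category; a file scanner would apply this
        # to the text extracted from each file.
        return {name: pat.findall(text) for name, pat in PATTERNS.items()}

    print(scan_text("Mail anna@example.se from 192.168.0.1, pnr 850709-9805"))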
Li, Chaoyang, and Ke Liu. „Smart Search Engine : A Design and Test of Intelligent Search of News with Classification“. Thesis, Högskolan Dalarna, Institutionen för information och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:du-37601.
Gebert, Florian [author]. „Precision measurement of the isotopic shift in calcium ions using photon recoil spectroscopy / Florian Gebert“. Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover (TIB), 2015. http://d-nb.info/1072060299/34.
Der volle Inhalt der QuelleJavar, Shima. „Measurement and comparison of clustering algorithms“. Thesis, Växjö University, School of Mathematics and Systems Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-1735.
In this project, a number of different clustering algorithms are described and their workings explained. They are compared to each other by implementing them on a number of graphs with a known architecture.
These clustering algorithms, in the order they are implemented, are as follows: Nearest neighbour hillclimbing, Nearest neighbour big step hillclimbing, Best neighbour hillclimbing, Best neighbour big step hillclimbing, Gem 3D, K-means simple, K-means Gem 3D, One cluster and One cluster per node.
The graphs are Unconnected, Directed KX, Directed Cycle KX and Directed Cycle.
The results of these clusterings are compared with each other according to three criteria: time, quality, and extremity of node distribution. This makes it possible to determine which algorithm is most suitable for which graph. The artificial graphs are then compared with the reference architecture graph to reach the conclusions.
Aula, Lara. „Improvement of Optical Character Recognition on Scanned Historical Documents Using Image Processing“. Thesis, Högskolan i Gävle, Datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-36244.
Jelassi, Mohamed Nidhal. „Un système personnalisé de recommandation à partir de concepts quadratiques dans les folksonomies“. Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22693/document.
Recommender systems are now popular both commercially and within the research community, where many approaches have been suggested for providing recommendations. Users of folksonomies share items (e.g., movies, books, bookmarks) by annotating them with freely chosen tags. In the Web 2.0 age, users have become the core of the system, since they are both the contributors and the creators of the information. It is therefore of paramount importance to match their needs in order to provide more targeted recommendations. To this end, we consider a new dimension in a folksonomy, classically composed of three dimensions, and propose an approach for grouping users with close interests through quadratic concepts. We then use these structures to build our personalized recommendation system of users, tags, and resources. We carried out extensive experiments on two real-life datasets, MovieLens and BookCrossing, which show good results in terms of precision and recall as well as a promising social evaluation. Moreover, we study key assessment metrics, namely coverage, diversity, adaptivity, serendipity, and scalability, and we conduct a user study as a valuable complement to our evaluation in order to get further insights. Finally, we propose a new algorithm that maintains a set of triadic concepts without re-scanning the whole folksonomy; first results comparing its performance with running the whole process from scratch over four real-life datasets show its efficiency.
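For readers unfamiliar with how precision and recall are applied to recommendations, a standard top-k evaluation looks roughly like the Python sketch below; the thesis's exact protocol is not reproduced here, so this is an assumption for illustration:

    def precision_recall_at_k(ranked, relevant, k):
        # ranked: recommendation list, best first
        # relevant: items the user actually liked (held out from training)
        hits = len(set(ranked[:k]) & set(relevant))
        precision = hits / k
        recall = hits / len(relevant) if relevant else 1.0
        return precision, recall

    print(precision_recall_at_k(["m1", "m7", "m3", "m9"], {"m7", "m2"}, k=3))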
Pakyurek, Muhammet. „A Comparative Evaluation Of Foreground / Background Segmentation Algorithms“. Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614666/index.pdf.
Supervisor: Gözde Bozdagi Akar. September 2012, 77 pages. Foreground/background segmentation is the process of separating stationary objects from moving objects in a scene, and it plays a significant role in computer vision applications. In this study, several background/foreground segmentation algorithms are analyzed by changing their critical parameters individually in order to see how sensitive the algorithms are to common difficulties in background segmentation applications: illumination level, camera view angle, noise level, and object range. The study comprises two main parts. In the first part, some well-known algorithms based on pixel differences, probability, and codebooks are explained and implemented, with implementation details provided. The second part evaluates the performance of the algorithms by comparing the foreground/background regions they report against ground truth. To this end, metrics including precision, recall, and F-measure are defined first; the dataset videos, covering different scenarios, are then run through each algorithm to compare performance. Finally, the performance of each algorithm, along with the optimal values of its parameters, is reported based on F-measure.
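A minimal Python sketch of the ground-truth comparison described above, assuming binary foreground masks given as flat 0/1 sequences:

    def segmentation_scores(pred, truth):
        # Pixel-wise true positives, false positives and false negatives.
        tp = sum(1 for p, g in zip(pred, truth) if p and g)
        fp = sum(1 for p, g in zip(pred, truth) if p and not g)
        fn = sum(1 for p, g in zip(pred, truth) if g and not p)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f_measure = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
        return precision, recall, f_measure

    print(segmentation_scores([1, 1, 0, 0], [1, 0, 1, 0]))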
Wahab, Nor-Ul. „Evaluation of Supervised Machine Learning Algorithms for Detecting Anomalies in Vehicle’s Off-Board Sensor Data“. Thesis, Högskolan Dalarna, Mikrodataanalys, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:du-28962.
Kondapalli, Swetha. „An Approach To Cluster And Benchmark Regional Emergency Medical Service Agencies“. Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1596491788206805.
Massaccesi, Luciano. „Machine Learning Software for Automated Satellite Telemetry Monitoring“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20502/.
Čeloud, David. „Vyhledávání informací TRECVid Search“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237260.
Nilsson, Olof. „Visualization of live search“. Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-102448.
Chang, Jing-Shin, and 張景新. „Automatic Lexicon Acquisition and Precision-Recall Maximization for Untagged Text Corpora“. Thesis, 1997. http://ndltd.ncl.edu.tw/handle/62275166919316022114.
Der volle Inhalt der Quelle國立清華大學
電機工程學系
85
Automatic lexicon acquisition from large text corpora is surveyed in thisdissertation, with special emphases on optimization techniques for maximizingthe joint precision- recall performance. Both English compound word extractionand Chinese unknown word identification tasks are studied in order to exploreprecision-recall optimization techniques in different languages of differentcomplexity using different available resources. In the English compound wordextraction task, the simplest system architecture, which assumes that thelexicon extraction task is conducted using a classifier (or a filter) based ona set of multiple association features, is studied. Under such circumstances,a two stage optimization scheme is proposed, in which the first stage aims atminimizing classification error and the second stage focuses on maximizingjoint precision-recall, starting from the minimum error status. To achieveminimum error rate, various approaches are used to improve the error rateperformance of the classifier. In addition, a non-linear learning algorithm isdeveloped for achieving maximum precision-recall performance in terms of userspecified objective function of precision and recall. In the Chinese unknownword extraction task, where contextual information as well as word associationmetrics are used, an iterative approach, which allows us to improve bothprecision and recall simultaneously, is proposed to iteratively improve theprecision and recall performance. For the English compound word extractiontask, the weighted precision and recall (WPR) using the proposed approach canachieve as high as about 88% for bigram compounds, and 88% for trigramcompounds for a training (testing) corpus of 20715 (2301) sentences sampledfrom technical manuals of cars. The F-measure performances are about 84% forbigrams and 86% for trigrams. By applying the proposed optimization method,the precision and recall profile is observed to follow the preferred criteriaof different lexicographers. For the Chinese unknown word identification task,experiment results show that both precision and recall rates are improvedalmost monotonically, in contrast to non-iterative segmentation-merging-filtering- and-disambiguation approaches, which often sacrifice precision forrecall or vice versa. With a corpus of 311,591 sentences, the performance is76% (bigram), 54% (trigram), and 70% (quadgram) in F-measure, which issignificantly better than using the non-iterative approach with F-measures of74% (bigram), 46% (trigram), and 58% (quadgram).
Gallé, Matthias. „Algoritmos para la búsqueda eficiente de instancias similares“. Bachelor's thesis, 2007. http://hdl.handle.net/11086/8.
In this work we tackle the challenge of searching for similar objects within a very large collection of such objects. The problem presents two difficulties: first, defining a similarity measure between two objects, and then implementing an algorithm that, based on that measure, efficiently finds the objects that are sufficiently alike. The solution presented uses a measure based strongly on the concepts of precision and recall, yielding a measure similar to Jaccard's. The efficiency of the algorithm lies in first generating groups of similar objects, and only afterwards looking those objects up in the database. We applied the algorithm in two settings: on the one hand, to a database of users who rate movies, in order to predict those ratings; on the other, to find genetic profiles that may have contributed to a piece of genetic evidence.
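To show how such a measure relates precision, recall, and the Jaccard index, here is a minimal Python sketch over sets; the thesis's actual measure is only described as similar to Jaccard's, so this is an illustration rather than its definition:

    def set_similarity(a, b):
        # Precision/recall of set a against reference set b, plus the
        # Jaccard index that both jointly determine.
        inter = len(a & b)
        precision = inter / len(a) if a else 0.0
        recall = inter / len(b) if b else 0.0
        jaccard = inter / len(a | b) if a or b else 1.0
        return precision, recall, jaccard

    print(set_similarity({"x", "y", "z"}, {"y", "z", "w"}))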
Wang, Hui-Ju, and 王惠如. „Using concept map navigation to improve the precision and recall of document searching“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/gkecnw.
Der volle Inhalt der Quelle銘傳大學
資訊管理學系碩士班
94
The user needs to find an enormous amount of papers related to solve their problem or as reference data. Currently, the most common two search methods are keyword searching, and natural language searching. The advantage of keyword searching is that is simple, convenient and comprehensive; yet keyword may not fully express the semantics of the user, thus generating a lot of search results which are not the most desired. Secondly, natural language searching has a more complete description than keyword searching, improving the precision. However, the disadvantage is that the semantic of the sentence is more personalized. The search results may be disappointing if users have different recognition than that of the database. In context, a concept will be generated between terms as a result, which express the concept of the article. Using association rule has great results in terms of retrieving and document classification, and concept map navigation shows the links between these concepts. The research proposes that concept map navigation searching which extracts the main concept of documents and links the concepts through the association rule forming a concept map, which allows users to browse and search for data needed, thus solving the problem of low precision searching because of unclear semantic expression. Finally, we have implemented our approach and used conference papers as samples to test the precision and recall. The result shows that concept map navigation can improve precision and recall by more than 5%. The investigated results show that our approach is feasible and effective.
Garrett, LeAnn. „Authority control and its influence on recall and precision in an online bibliographic catalog“. 1997. http://books.google.com/books?id=_PHgAAAAMAAJ.
Der volle Inhalt der QuelleEl, Demerdash Osama. „Mining photographic collections to enhance the precision and recall of search results using semantically controlled query expansion“. Thesis, 2013. http://spectrum.library.concordia.ca/977207/1/ElDemerdash_PhD_S2013.pdf.
Der volle Inhalt der Quelle„Utility of Considering Multiple Alternative Rectifications in Data Cleaning“. Master's thesis, 2013. http://hdl.handle.net/2286/R.I.18825.
Dissertation/Thesis
M.S. Computer Science 2013