Selection of scholarly literature on the topic "Precisión y recall"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of recent articles, books, theses, reports, and other scholarly sources on the topic "Precisión y recall".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Journal articles on the topic "Precisión y recall"

1

Florez Carvajal, Daniel Mauricio, and Germán Andrés Garnica Gaitán. "Detección de grupos de fajillas en imágenes de paquetes de billete en diversas condiciones de iluminación y fondo mediante un clasificador SVM". AVANCES Investigación en Ingeniería 14 (December 15, 2017): 145. http://dx.doi.org/10.18041/1794-4953/avances.1.1293.

Annotation:
This article presents the results of a binary image classification under two different lighting and background conditions for a specific problem: detecting groups of currency straps in images of banknote bundles. Detection is carried out with a Support Vector Machine classifier trained on feature vectors extracted from the images through the wavelet transform and a histogram-concatenation technique. A separate classifier is trained for each background and lighting condition, the confusion matrix of each is obtained, and the classifiers are then compared using recall, specificity, precision, accuracy, and F-score.
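All five comparison measures named here derive from the binary confusion matrix. As an illustrative sketch in Python, with hypothetical counts rather than the article's data:

# Metrics used to compare the SVM classifiers: all are derived from the
# binary confusion matrix. The counts below are illustrative placeholders.
tp, fp, fn, tn = 85, 5, 10, 100  # hypothetical confusion-matrix counts

precision   = tp / (tp + fp)                   # positive predictive value
recall      = tp / (tp + fn)                   # sensitivity
specificity = tn / (tn + fp)                   # true negative rate
accuracy    = (tp + tn) / (tp + fp + fn + tn)
f_score     = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.3f} recall={recall:.3f} specificity={specificity:.3f} "
      f"accuracy={accuracy:.3f} F-score={f_score:.3f}")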
2

Cardoso, Alejandra Carolina, M. Alicia Pérez Abelleira, and Enzo Notario. "Búsqueda de respuestas como aplicación del problema de extracción de relaciones". Revista Tecnología y Ciencia, no. 33 (October 17, 2018): 45–64. http://dx.doi.org/10.33414/rtyc.33.45-64.2018.

Annotation:
The volume of unstructured information in the form of text documents from diverse sources keeps growing. To access it and obtain knowledge that can be applied to a variety of tasks, that information must first be "structured". Extracting the relational structure between entities, in the form of verb-based triples, can be applied to the question-answering problem. This work uses efficient shallow text analysis techniques, and the resulting system achieves precision and recall comparable to other state-of-the-art systems. The extracted triples form the knowledge base that is queried to obtain answers to natural-language questions. The results of this question-answering system on a bank of questions over a corpus of 2052 web documents about Salta demonstrate the validity of the approach.
3

Cardoso, Alejandra Carolina, María Lorena Talamé, Matías Nicolás Amor, and Agustina Monge. "Creación de un Corpus de Opiniones con Emociones Usando Aprendizaje Automático". Revista Tecnología y Ciencia, no. 37 (April 3, 2020): 11–23. http://dx.doi.org/10.33414/rtyc.37.11-23.2020.

Annotation:
Identifying the sentiments expressed in textual opinions can be understood as categorizing them according to their characteristics, and is currently of great interest. Supervised learning is one of the most popular methods for text classification, but it requires large amounts of labeled training data. Semi-supervised learning overcomes this limitation, since it works with a small labeled dataset and a larger unlabeled one. A text classification method combining both types of learning was developed. Short texts, or opinions, were collected from the social network Twitter, cleaned and preprocessed, and then classified into four emotions: anger, disgust, sadness, and happiness. The precision and recall obtained with the method were satisfactory, and as a result a corpus of messages categorized by the expressed emotion was obtained.
4

Putri, Vinna Utami, Eko Budi Cahyono, and Yufis Azhar. "Deteksi Botnet Pada Passive DNS Dengan Menggunakan Metode K Nearest Neighbor". Jurnal Repositor 2, no. 12 (December 4, 2020): 1631. http://dx.doi.org/10.22219/repositor.v2i12.450.

Annotation:
Internet technology is developing rapidly, and so is the number of its users. One dangerous class of malicious software is the robot network (botnet): a network of millions of internet-connected devices infected with malware so that they can be controlled remotely by cybercriminals to launch attacks such as sending spam email, stealing personal information, and mounting DDoS attacks. In this study the authors classify botnet and normal traffic in the passive DNS data of the CTU-13 dataset with the k-Nearest Neighbor method, evaluated through a confusion matrix with precision, recall, and accuracy computed using the scikit-learn library for Python on every predicted class. The results achieved were fairly high: uniform weighting reached 76% precision, 86% recall, and 93.9% accuracy; normalized uniform weighting reached 76% precision, 88% recall, and 83% accuracy; distance weighting reached 100% precision, 85% recall, and 92% accuracy; and normalized distance weighting reached 100% precision, 87% recall, and 93% accuracy.
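A minimal sketch of the comparison described above, using the actual scikit-learn API but synthetic data in place of the CTU-13 passive-DNS features (sample sizes and feature counts are assumptions for illustration):

# Compare uniform vs. distance weighting in kNN, reporting precision,
# recall and accuracy per class, as in the study; the data are synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for weights in ("uniform", "distance"):
    knn = KNeighborsClassifier(n_neighbors=5, weights=weights).fit(X_tr, y_tr)
    print(weights)
    print(classification_report(y_te, knn.predict(X_te), digits=3))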
5

López-Trujillo, Sebastián, and María C. Torres-Madroñero. "Comparación de algoritmos de resumen de texto para el procesamiento de editoriales y noticias en español". TecnoLógicas 24, no. 51 (June 11, 2021): e1816. http://dx.doi.org/10.22430/22565337.1816.

Annotation:
Language is shaped not only by grammatical rules but also by context and sociocultural diversity, so automatic text summarization (an area of interest in natural language processing, NLP) faces challenges such as identifying the important fragments according to the context and the type of text analyzed. Previous work describes different automatic summarization methods, but there are no studies of their effectiveness in specific contexts, nor on Spanish texts. This article compares three automatic summarization algorithms on Spanish news articles and editorials. All three are extractive methods that estimate the importance of a sentence or word from similarity metrics or word frequencies. A document collection of 33 editorials and 27 news articles was built, with a manual summary produced for each text. The algorithms were compared quantitatively using the Recall-Oriented Understudy for Gisting Evaluation metric, and the potential of the selected algorithms to identify the main components of each text was analyzed. For editorials, the automatic summary had to include a problem and the author's opinion, while for news items it had to describe the temporal and spatial characteristics of an event. In terms of word-reduction rate and precision, the method that gives the best results, for both news and editorials, is the one based on the similarity matrix, which reduces both kinds of text by 70%. Nevertheless, semantics and context need to be incorporated into the algorithms to improve their performance in terms of precision and sensitivity.
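ROUGE, the metric used here, is in essence recall-oriented n-gram overlap between a candidate summary and a reference. A minimal ROUGE-1-style sketch, illustrative only and not the authors' implementation:

# Unigram-overlap (ROUGE-1-style) precision, recall and F1 between a
# candidate summary and a manual reference summary.
from collections import Counter

def rouge1(candidate: str, reference: str) -> dict:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())          # clipped unigram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

print(rouge1("el resumen cubre el problema",
             "el resumen describe el problema y la opinión del autor"))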
6

Galarza Bravo, Michelle Alejandra, and Marco Flores. "Detección de peatones en la noche usando Faster R-CNN e imágenes infrarrojas". Ingenius, no. 20 (June 30, 2018): 48–57. http://dx.doi.org/10.17163/ings.n20.2018.05.

Annotation:
This article presents a night-time pedestrian detection system for vehicle safety applications. The performance of the Faster R-CNN algorithm on far-infrared images was analyzed, revealing difficulties in detecting pedestrians at long range. Consequently, a new Faster R-CNN architecture dedicated to multi-scale detection is presented, with two region-of-interest (ROI) generators dedicated to short- and long-range pedestrians, called RPNCD and RPNLD respectively. This architecture was compared against the Faster R-CNN models that have shown the best results, namely VGG-16 and ResNet 101. Experiments on the CVC-09 and LSIFIR databases showed improvements, especially in long-range pedestrian detection, with an error rate versus FPPI of 16% and, on the Precision vs. Recall curve, an AP of 89.85% for the pedestrian class and an mAP of 90% on the test sets of the LSIFIR and CVC-09 databases.
7

Kuznetsova, Anna A. "Statistical Precision – Recall curves for object detection quality assessment". Journal of Applied Informatics 15, no. 90 (December 28, 2020): 42–57. http://dx.doi.org/10.37791/2687-0649-2020-15-6-42-57.

Annotation:
Average precision (AP), the area under the Precision – Recall curve, is the de facto standard for comparing the quality of algorithms for classification, information retrieval, object detection, etc. However, traditional Precision – Recall curves usually have a zigzag shape, which makes it difficult to calculate the average precision and to compare algorithms. This paper proposes a statistical approach to the construction of Precision – Recall curves when assessing the quality of algorithms for object detection in images. This approach is based on calculating Statistical Precision and Statistical Recall. Instead of the traditional confidence level, a statistical confidence level is calculated for each image as a percentage of objects detected. For each threshold value of the statistical confidence level, the total number of correctly detected objects (Integral TP) and the total number of background objects mistakenly assigned by the algorithm to one of the classes (Integral FP) are calculated for each image. Next, the values of Precision and Recall are calculated. Statistical Precision – Recall curves, unlike traditional curves, are guaranteed to be monotonically non-increasing. At the same time, the Statistical Average Precision of object detection algorithms on small test datasets turns out to be less than the traditional Average Precision. On relatively large test image datasets, these differences are smoothed out. The use of conventional and statistical Precision – Recall curves is compared on a specific example.
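The construction the paper refines can be made concrete: average precision is the area under the precision-recall curve, and the zigzag is conventionally removed by taking the monotone upper envelope of precision. A sketch of that conventional computation (the paper's statistical variant replaces the confidence level with a per-image statistical confidence level):

# Average precision from a (recall, precision) curve: replace the zigzag
# precision values by their right-to-left running maximum (the monotone
# envelope), then integrate over recall.
import numpy as np

def average_precision(recall: np.ndarray, precision: np.ndarray) -> float:
    # recall assumed sorted in ascending order
    envelope = np.maximum.accumulate(precision[::-1])[::-1]
    r = np.concatenate(([0.0], recall))
    return float(np.sum((r[1:] - r[:-1]) * envelope))

rec = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
prec = np.array([1.0, 0.5, 0.67, 0.6, 0.55, 0.5])   # zigzag raw precision
print(average_precision(rec, prec))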
8

Poynton, Mollie R. "Recall to Precision". Clinical Nurse Specialist 17, no. 4 (July 2003): 182–84. http://dx.doi.org/10.1097/00002800-200307000-00012.

9

Maharana, Adyasha, Kunlin Cai, Joseph Hellerstein, Yulin Hswen, Michael Munsell, Valentina Staneva, Miki Verma, Cynthia Vint, Derry Wijaya, and Elaine O. Nsoesie. "Detecting reports of unsafe foods in consumer product reviews". JAMIA Open 2, no. 3 (August 5, 2019): 330–38. http://dx.doi.org/10.1093/jamiaopen/ooz030.

Annotation:
Objectives: Access to safe and nutritious food is essential for good health. However, food can become unsafe due to contamination with pathogens, chemicals or toxins, or mislabeling of allergens. Illness resulting from the consumption of unsafe foods is a global health problem. Here, we develop a machine learning approach for detecting reports of unsafe food products in consumer product reviews from Amazon.com. Materials and Methods: We linked Amazon.com food product reviews to Food and Drug Administration (FDA) food recalls from 2012 to 2014 using text matching approaches in a PostgreSQL relational database. We applied machine learning methods and over- and under-sampling methods to the linked data to automate the detection of reports of unsafe food products. Results: Our data consisted of 1,297,156 product reviews from Amazon.com. Only 5,149 (0.4%) were linked to recalled food products. Bidirectional Encoder Representations from Transformers (BERT) performed best in identifying unsafe food reviews, achieving an F1 score, precision and recall of 0.74, 0.78, and 0.71, respectively. We also identified synonyms for terms associated with FDA recalls in more than 20,000 reviews, most of which were associated with non-recalled products. This might suggest that many more products should have been recalled or investigated. Discussion and Conclusion: Challenges to improving food safety include urbanization, which has led to a longer food chain, underreporting of illness, and difficulty in linking contaminated food to illness. Our approach can improve food safety by enabling early identification of unsafe foods, which can lead to timely recalls, thereby limiting the health and economic impact on the public.
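As a consistency check on the reported numbers: F1 is the harmonic mean of precision and recall, 2PR/(P + R) = 2 × 0.78 × 0.71 / (0.78 + 0.71) ≈ 0.74, which matches the reported F1 score.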
10

Puspito, Yuda, FX Arinto Setyawan, and Helmy Fitriawan. "Deteksi Posisi Plat Nomor Kendaraan Menggunakan Metode Transformasi Hough dan Hit or Miss". Electrician 12, no. 3 (September 4, 2018): 118. http://dx.doi.org/10.23960/elc.v12n3.2084.

Annotation:
This research developed a system for detecting the position of vehicle license plates, displayed in a Matlab GUI. Plate-position detection uses two methods, the Hough transform and the hit-or-miss transform. The image processing stages include binarization, gray-scale conversion, edge detection, image cropping, filtering, and resizing. The effectiveness of the system is measured by calculating recall and precision. The system successfully detected the plate position with a detection rate of 76% at a threshold of 0.75, 72% at a threshold of 0.8, and 48% at a threshold of 0.85. The results also showed an average recall of 54%, 50%, and 40% at thresholds of 0.75, 0.8, and 0.85 respectively, while the average precision was 14%, 14%, and 12% at the same thresholds. Keywords: Hough transform, hit-or-miss transform, recall, precision.
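Recall and precision for position detection presuppose a rule for matching detections to ground truth; a hypothetical sketch using an intersection-over-union (IoU) criterion, which is an assumption here rather than the paper's stated rule:

# Score plate detections against ground truth: a detection counts as a
# true positive if it overlaps an unmatched ground-truth box with
# IoU >= 0.5. Boxes are (x1, y1, x2, y2) tuples; sample boxes are invented.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def detection_pr(detections, truths, thr=0.5):
    matched, tp = set(), 0
    for d in detections:
        hit = next((i for i, t in enumerate(truths)
                    if i not in matched and iou(d, t) >= thr), None)
        if hit is not None:
            matched.add(hit)
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall

print(detection_pr([(10, 10, 50, 30), (200, 80, 240, 100)],
                   [(12, 11, 52, 31)]))  # -> (0.5, 1.0)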
More sources

Dissertations on the topic "Precisión y recall"

1

Parkin, Jennifer. "Memory for spatial mental models: examining the precision of recall". Thesis, Loughborough University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415926.

2

Al-Dallal, Ammar Sami. "Enhancing recall and precision of web search using genetic algorithm". Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/7379.

Annotation:
Due to the rapid growth in the number of Web pages, web users encounter two main problems: many of the retrieved documents are not related to the user query (low precision), and many relevant documents are not retrieved at all (low recall). Information Retrieval (IR) is an essential and useful technique for Web search; thus, different approaches and techniques have been developed. Because of its parallel mechanism in high-dimensional spaces, the Genetic Algorithm (GA) has been adopted to solve many optimization problems, IR among them. This thesis proposes a GA-based search model to retrieve HTML documents, called IR Using GA, or IRUGA. It is composed of two main units: a document indexing unit that indexes the HTML documents, and the GA mechanism, which applies selection, crossover, and mutation operators to produce the final result, while a specially designed fitness function evaluates the documents. The performance of IRUGA is investigated using the speed of convergence of the retrieval process, precision at rank N, recall at rank N, and precision at recall N. In addition, the proposed fitness function is compared experimentally with the Okapi BM25 function and the Bayesian inference network model function. Moreover, IRUGA is compared with traditional IR using the same fitness function to examine the time each technique requires to retrieve the documents. The new techniques developed for document representation, the GA operators, and the fitness function achieved an improvement of over 90% in the recall and precision measures, and the relevance of the retrieved documents is much higher than that of documents retrieved by the other models. Furthermore, an extensive comparison of techniques applied to GA operators is performed, highlighting the strengths and weaknesses of each existing technique. Overall, IRUGA is a promising technique in the Web search domain, providing high-quality search results in terms of recall and precision.
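Two of the rank-based measures used to evaluate IRUGA can be written compactly; the ranking and relevance judgments below are illustrative:

# Precision at rank N and recall at rank N over a ranked retrieval list.
def precision_at(ranking, relevant, n):
    return sum(1 for doc in ranking[:n] if doc in relevant) / n

def recall_at(ranking, relevant, n):
    return sum(1 for doc in ranking[:n] if doc in relevant) / len(relevant)

ranking = ["d3", "d7", "d1", "d9", "d4"]   # hypothetical ranked results
relevant = {"d3", "d1", "d5"}              # hypothetical relevance judgments
print(precision_at(ranking, relevant, 3))  # 2/3
print(recall_at(ranking, relevant, 3))     # 2/3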
3

Klitkou, Gabriel. "Automatisk trädkartering i urban miljö: En fjärranalysbaserad arbetssättsutveckling". Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-27301.

Annotation:
Digital urban tree registers serve many purposes and facilitate the administration, care and management of urban trees within a city or municipality. Currently, mapping of urban tree stands is carried out manually with methods which are both laborious and time consuming. The aim of this study is to establish a workflow based on existing LiDAR data and orthophotos to automatically detect individual trees. Using the extensions LIDAR Analyst and Feature Analyst for ArcMap, a tree extraction was performed over the city district committee area of Östermalm in the city of Stockholm, Sweden. The results were compared to the city's urban tree register and validated by calculating Precision and Recall. This showed that Feature Analyst generated the result with the highest accuracy. The derived trees are represented by polygons, which despite their high accuracy makes the result unsuitable for detecting individual tree positions. Although the use of LIDAR Analyst resulted in a less precise tree mapping, individual tree positions were detected satisfactorily, especially in areas with sparser, regular tree stands. The study concludes that the two tools complement each other and compensate for each other's shortcomings: Feature Analyst maps an acceptable tree cover, while LIDAR Analyst identifies individual tree positions more accurately. A combination of the two results could thus be used for individual tree mapping.
4

Johansson, Ann, and Karolina Johansson. "Utvärdering av sökmaskiner: en textanalys kring utvärderingar av sökmaskiner på Webben". Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-18323.

Annotation:
The purpose of this thesis is to analyse studies that evaluate Web search engines. This is done in four categories: the researchers' purpose, the evaluation measurements, the relevance criteria, and the time aspect. Our method is based on a content-oriented text analysis of sixteen evaluation experiments. Our results indicate fundamental differences in the way the researchers tackle the problem of evaluating Web search engines. We think that, despite the differences we have identified, it is necessary to perform evaluation experiments so that methods can be developed that can guarantee the quality of Web search engines. Providing people with the kind of information they need is the main task of Web search engines, and in an increasing flow of information that task will become even more important. Evaluation of Web search engines can help improve their efficiency and in that way strengthen their role as important information resources.
Thesis level: D
5

Carlsson, Bertil. "Guldstandarder: dess skapande och utvärdering". Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-19954.

Annotation:

Research on producing good automatic summaries has grown steadily in recent years, driven by demand in both the private and public sectors to absorb more information than is possible today. Rather than reading entire reports and informational texts, people want to be able to read a summary and thereby cover more material. To know whether these automatic summarizers maintain a good standard, they must be evaluated in some way. This is often done by checking how much information is included in the summary and how much is left out. For this to be possible, a so-called gold standard is needed: a summary that acts as an answer key against which the automatically summarized texts are compared.

This report deals with gold standards and their creation. In the project, five gold standards for informational texts from Försäkringskassan were created and evaluated, with positive results.

6

Nordh, Andréas. "Musikwebb: En evaluering av webbtjänstens återvinningseffektivitet". Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-19907.

Annotation:
The aim of this thesis was to evaluate the music downloading service Musikwebb regarding its indexing and retrieval effectiveness. This was done by performing various kinds of searches in the system. The outcomes of these searches were then analysed according to the criteria specificity, precision, recall, exclusivity and authority control. The study showed that Musikwebb had several flaws regarding its retrieval effectiveness. The most prominent cases concerned the criteria exclusivity and specificity: several of Musikwebb's classes could be regarded as nearly identical, and the average number of songs in each class was over 50,000. As this study shows, having over 50,000 unique entries in a class causes problems for the effectiveness of the browsing technique. The author recommends that the developers of Musikwebb acquire their licensed material from All Music Guide, including implementing the All Music Guide classification system.
7

Santos, Juliana Bonato dos. "Automatizando o processo de estimativa de revocação e precisão de funções de similaridade". Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15889.

Annotation:
Traditional database query mechanisms, which use the equality criterion, become ineffective when the stored data contain spelling and format variations. In such cases it is necessary to use similarity functions instead of boolean operators. Query mechanisms that use similarity functions return a ranking of elements ordered by their similarity score with respect to the query object. A threshold value can be used to delimit which elements of this ranking actually belong to the result. However, defining an appropriate similarity threshold is complex, because the value depends on the similarity function used and on the semantics of the queried data. One way to help choose an appropriate threshold is to evaluate the quality of the results of similarity-function queries for different thresholds on a sample of the data collection. This work presents an automatic method for estimating the quality of similarity functions through recall and precision measures computed for different thresholds. The results obtained by this method can be used as metadata and, given the requirements of a specific application, assist in setting the most appropriate threshold value. The automatic process uses similarity-based clustering methods, together with cluster validity measures, to eliminate human intervention during the estimation of recall and precision values.
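The core idea, estimating precision and recall per similarity threshold on a sample, can be sketched as follows; the scores and labels are invented for illustration, and the thesis additionally obtains the labels automatically through clustering rather than human judgment:

# Sweep a similarity threshold and report precision/recall at each value,
# so an application can pick the threshold that fits its requirements.
pairs = [(0.95, True), (0.90, True), (0.80, False),
         (0.75, True), (0.60, False), (0.40, False)]  # (score, true match?)

total_true = sum(label for _, label in pairs)
for thr in (0.5, 0.7, 0.9):
    selected = [label for score, label in pairs if score >= thr]
    tp = sum(selected)
    precision = tp / len(selected) if selected else 1.0
    recall = tp / total_true
    print(f"threshold={thr:.1f}  precision={precision:.2f}  recall={recall:.2f}")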
8

Chiow, Sheng-wey. "A precision measurement of the photon recoil using large area atom interferometry". 2008. May be available electronically: http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

9

Lopes, Miguel. "Inference of gene networks from time series expression data and application to type 1 Diabetes". Doctoral thesis, Université Libre de Bruxelles, 2015. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/216729.

Annotation:
The inference of gene regulatory networks (GRN) is of great importance to medical research, as causal mechanisms responsible for phenotypes are unravelled and potential therapeutic targets identified. In type 1 diabetes, insulin-producing pancreatic beta-cells are the target of an auto-immune attack leading to apoptosis (cell suicide). Although key genes and regulations have been identified, a precise characterization of the process leading to beta-cell apoptosis has not yet been achieved, so the inference of relevant molecular pathways in type 1 diabetes is a crucial research topic. GRN inference from gene expression data (obtained from microarray and RNA-seq technology) is a causal inference problem which may be tackled with well-established statistical and machine learning concepts. In particular, the use of time series facilitates the identification of the causal direction in cause-effect gene pairs. However, inference from gene expression data is a very challenging problem due to the large number of existing genes (in human, over twenty thousand) and the typically low number of samples in gene expression datasets. In this context, it is important to correctly assess the accuracy of network inference methods. The contributions of this thesis concern three distinct aspects. The first is inference assessment using precision-recall curves, in particular using the area under the curve (AUPRC). The typical approach to assessing AUPRC significance is Monte Carlo, and a parametric alternative is proposed: it consists of deriving the mean and variance of the null AUPRC and then using these parameters to fit a beta distribution approximating the true distribution. The second contribution is an investigation of network inference from time series. Several state-of-the-art strategies are experimentally assessed and novel heuristics are proposed. One is a fast approximation of first-order Granger causality scores, suited to GRN inference in the large-variable case. Another identifies co-regulated genes (i.e., genes regulated by the same genes). Both are experimentally validated using microarray and simulated time series. The third contribution of this thesis, in the context of type 1 diabetes, is a study of beta-cell gene expression after exposure to cytokines, emulating the mechanisms leading to apoptosis. Eight datasets of beta-cell gene expression were used to identify genes differentially expressed before and after 24 h, which were functionally characterized using bioinformatics tools. The two most differentially expressed genes, previously unknown in the type 1 diabetes literature (RIPK2 and ELF3), were found to modulate cytokine-induced apoptosis. A regulatory network was then inferred using a dynamic adaptation of a state-of-the-art network inference method. Three out of four predicted regulations (involving RIPK2 and ELF3) were experimentally confirmed, providing a proof of concept for the adopted approach.
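The first contribution can be sketched as follows; here the null AUPRC distribution is sampled by Monte Carlo and the beta parameters are then obtained by empirical moment matching, whereas the thesis derives the mean and variance analytically:

# Approximate the null distribution of AUPRC (average precision under
# random scores) with a beta distribution fitted by moment matching.
import numpy as np
from scipy import stats
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
y = np.array([1] * 20 + [0] * 180)   # 10% positives, illustrative

null_auprc = np.array([average_precision_score(y, rng.random(y.size))
                       for _ in range(2000)])
m, v = null_auprc.mean(), null_auprc.var()
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common  # beta parameters via moment matching

observed = 0.35                      # hypothetical AUPRC of an inferred network
print("approximate p-value:", 1 - stats.beta.cdf(observed, a, b))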
10

Afram, Gabriel. "Genomsökning av filsystem för att hitta personuppgifter: Med Linear chain conditional random field och Regular expression". Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34069.

Annotation:
The new General Data Protection Regulation (GDPR) applies to all companies within the European Union after 25 May. This means stricter legal requirements for companies that in some way store personal data. The goal of this project is therefore to make it easier for companies to meet the new legal requirements, by creating a tool that searches file systems and visually shows the user, in a graphical user interface, which files contain personal data. The tool uses Named Entity Recognition with the linear chain conditional random field algorithm, a supervised learning method in machine learning. In this project the algorithm is used to find names and addresses in files. Different models are trained with different parameters using the Stanford NER library in Java. The models are tested on a test file containing 45,000 words, for which each model predicts the class of every word. The models are then compared using precision, recall and F-score to find the best one. The tool also uses regular expressions to find emails, IP numbers, and social security numbers. The results for the final machine learning model show that it does not find all names and addresses, but this can be improved by increasing the training data. However, that requires a more powerful computer than the one used in this project. An analysis of how the Swedish language is structured would also be needed to apply the most suitable parameters when training the model.
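The regular-expression half of the tool can be sketched as follows; the patterns are illustrative assumptions, not those used in the thesis:

# Scan text for e-mail addresses, IPv4 numbers and Swedish personal
# identity numbers (personnummer). Patterns are simplified examples.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.%+-]+@[\w.-]+\.[A-Za-z]{2,}\b"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "personnummer": re.compile(r"\b\d{6,8}[-+]?\d{4}\b"),
}

text = "Kontakta anna@example.se via 192.168.0.1, pnr 850709-9805."
for kind, pattern in PATTERNS.items():
    print(kind, pattern.findall(text))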
More sources

Book chapters on the topic "Precisión y recall"

1

Ting, Kai Ming. "Precision and Recall". In Encyclopedia of Machine Learning and Data Mining, 990–91. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_659.

2

Carterette, Ben. "Precision and Recall". In Encyclopedia of Database Systems, 2779. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_5050.

3

Carterette, Ben. "Precision and Recall". In Encyclopedia of Database Systems, 2126–27. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_5050.

4

Carterette, Ben. "Precision and Recall". In Encyclopedia of Database Systems, 1–2. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_5050-2.

5

Zeugmann, Thomas, Pascal Poupart, James Kennedy, Xin Jin, Jiawei Han, Lorenza Saitta, Michele Sebag, et al. "Precision and Recall". In Encyclopedia of Machine Learning, 781. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_652.

6

Ting, Kai Ming. "Precision and Recall". In Encyclopedia of Machine Learning and Data Mining, 1. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7502-7_659-1.

7

Zhang, Ethan, and Yi Zhang. "Eleven Point Precision-Recall Curve". In Encyclopedia of Database Systems, 1289–90. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_481.

8

Zhang, Ethan, and Yi Zhang. "Eleven Point Precision-recall Curve". In Encyclopedia of Database Systems, 981–82. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_481.

9

Torgo, Luis, and Rita Ribeiro. "Precision and Recall for Regression". In Discovery Science, 332–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04747-3_26.

10

Zhang, Ethan, and Yi Zhang. "Eleven Point Precision-Recall Curve". In Encyclopedia of Database Systems, 1–2. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_481-2.


Conference papers on the topic "Precisión y recall"

1

Pagani, Fabio, Matteo Dell'Amico, and Davide Balzarotti. "Beyond Precision and Recall". In CODASPY '18: Eighth ACM Conference on Data and Application Security and Privacy. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3176258.3176306.

2

Melamed, I. Dan, Ryan Green, and Joseph P. Turian. "Precision and recall of machine translation". In The 2003 Conference of the North American Chapter of the Association for Computational Linguistics. Morristown, NJ, USA: Association for Computational Linguistics, 2003. http://dx.doi.org/10.3115/1073483.1073504.

3

Stuedi, Patrick, and Gustavo Alonso. "Recall and Precision in Distributed Bandwidth Allocation". In 2007 Fifteenth IEEE International Workshop on Quality of Service. IEEE, 2007. http://dx.doi.org/10.1109/iwqos.2007.376559.

4

Clémençon, Stéphan, and Nicolas Vayatis. "Nonparametric estimation of the precision-recall curve". In The 26th Annual International Conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1553374.1553398.

5

Brodersen, Kay Henning, Cheng Soon Ong, Klaas Enno Stephan, and Joachim M. Buhmann. "The Binormal Assumption on Precision-Recall Curves". In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.1036.

6

Kuperus, Jasper, Cor J. Veenman, and Maurice van Keulen. "Increasing NER Recall with Minimal Precision Loss". In 2013 European Intelligence and Security Informatics Conference (EISIC). IEEE, 2013. http://dx.doi.org/10.1109/eisic.2013.23.

7

Zanibbi, R., D. Blostein, and J. R. Cordy. "Historical recall and precision: summarizing generated hypotheses". In Eighth International Conference on Document Analysis and Recognition (ICDAR'05). IEEE, 2005. http://dx.doi.org/10.1109/icdar.2005.128.

8

Zhang, Peng, and Wanhua Su. "Statistical inference on recall, precision and average precision under random selection". In 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD). IEEE, 2012. http://dx.doi.org/10.1109/fskd.2012.6234049.

9

Lingras, Pawan, and Cory J. Butz. "Precision and Recall in Rough Support Vector Machines". In 2007 IEEE International Conference on Granular Computing (GRC 2007). IEEE, 2007. http://dx.doi.org/10.1109/grc.2007.4403181.

10

Lingras, Pawan, and Cory J. Butz. "Precision and Recall in Rough Support Vector Machines". In 2007 IEEE International Conference on Granular Computing (GRC 2007). IEEE, 2007. http://dx.doi.org/10.1109/grc.2007.77.


Reports by organizations on the topic "Precisión y recall"

1

Idakwo, Gabriel, Sundar Thangapandian, Joseph Luttrell, Zhaoxian Zhou, Chaoyang Zhang, and Ping Gong. Deep learning-based structure-activity relationship modeling for multi-category toxicity classification: a case study of 10K Tox21 chemicals with high-throughput cell-based androgen receptor bioassay data. Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41302.

Annotation:
Deep learning (DL) has attracted the attention of computational toxicologists as it offers potentially greater power for in silico predictive toxicology than existing shallow learning algorithms. However, contradictory reports have been documented. To further explore the advantages of DL over shallow learning, we conducted this case study using two cell-based androgen receptor (AR) activity datasets with 10K chemicals generated from the Tox21 program. A nested double-loop cross-validation approach was adopted, along with a stratified sampling strategy for partitioning chemicals of multiple AR activity classes (i.e., agonist, antagonist, inactive, and inconclusive) at the same distribution rates amongst the training, validation and test subsets. Deep neural networks (DNN) and random forest (RF), representing deep and shallow learning algorithms, respectively, were chosen to carry out structure-activity relationship-based chemical toxicity prediction. Results suggest that DNN significantly outperformed RF (p < 0.001, ANOVA) by 22–27% on four metrics (precision, recall, F-measure, and AUPRC) and by 11% on another (AUROC). Further in-depth analyses of chemical scaffolds shed light on structural alerts for AR agonists/antagonists and inactive/inconclusive compounds, which may aid future drug discovery and the improvement of toxicity prediction modeling.
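The nested double-loop cross-validation with stratified sampling described here can be sketched with scikit-learn; the estimator, parameter grid, and data below are placeholders (the study itself compared DNN and RF on Tox21 AR bioassay data):

# Nested (double-loop) CV: the inner loop tunes hyperparameters, the
# outer loop gives an unbiased performance estimate; both loops use
# stratified folds so class proportions are preserved in every split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=600, weights=[0.8], random_state=1)
inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

search = GridSearchCV(RandomForestClassifier(random_state=1),
                      {"n_estimators": [100, 300]},
                      cv=inner, scoring="average_precision")
scores = cross_val_score(search, X, y, cv=outer, scoring="average_precision")
print("outer-loop AUPRC: %.3f +/- %.3f" % (scores.mean(), scores.std()))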