Journal articles on the topic 'Precisión y recall'

Consult the top 50 journal articles for your research on the topic 'Precisión y recall.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Florez Carvajal, Daniel Mauricio, and Germán Andrés Garnica Gaitán. "Detección de grupos de fajillas en imágenes de paquetes de billete en diversas condiciones de iluminación y fondo mediante un clasificador SVM." AVANCES Investigación en Ingeniería 14 (December 15, 2017): 145. http://dx.doi.org/10.18041/1794-4953/avances.1.1293.

Full text
Abstract:
This article presents the results of a binary classification of images under two different illumination and background conditions, for the specific problem of detecting groups of currency straps on bundles of banknotes. Detection is carried out with a Support Vector Machine classifier trained on feature vectors obtained from the images by applying the wavelet transform and a histogram-concatenation technique. A separate classifier is trained for each background and illumination condition, the confusion matrix of each is obtained, and the classifiers are then compared using recall, specificity, precision, accuracy, and F-score.
APA, Harvard, Vancouver, ISO, and other styles
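Editor's note: many entries in this list compare classifiers using the same handful of confusion-matrix measures. As a quick reference only (not taken from the cited article), a minimal Python sketch of how the five measures named above follow from a binary confusion matrix, with purely illustrative counts:

    # Illustrative counts only; tp/fp/fn/tn would come from a real confusion matrix.
    tp, fp, fn, tn = 85, 10, 5, 100

    recall      = tp / (tp + fn)                                  # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    f_score     = 2 * precision * recall / (precision + recall)   # F1, the harmonic mean

    print(f"recall={recall:.3f}  specificity={specificity:.3f}  precision={precision:.3f}  "
          f"accuracy={accuracy:.3f}  F-score={f_score:.3f}")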
2

Cardoso, Alejandra Carolina, M. Alicia Pérez Abelleira, and Enzo Notario. "Búsqueda de respuestas como aplicación del problema de extracción de relaciones." Revista Tecnología y Ciencia, no. 33 (October 17, 2018): 45–64. http://dx.doi.org/10.33414/rtyc.33.45-64.2018.

Full text
Abstract:
The volume of unstructured information in the form of text documents from diverse sources keeps growing. To access it and obtain knowledge that can be applied to a variety of tasks, that information must first be "structured". Extracting the relational structure between entities, in the form of verb-based triples, can be applied to the question-answering problem. This work uses efficient shallow text-analysis techniques, and the resulting system achieves precision and recall comparable to other state-of-the-art systems. The extracted triples form the knowledge base that is queried to answer natural-language questions. Results of this question-answering system on a set of questions over a corpus of 2052 web documents about Salta demonstrate the validity of the approach.
APA, Harvard, Vancouver, ISO, and other styles
3

Cardoso, Alejandra Carolina, María Lorena Talamé, Matías Nicolás Amor, and Agustina Monge. "Creación de un Corpus de Opiniones con Emociones Usando Aprendizaje Automático." Revista Tecnología y Ciencia, no. 37 (April 3, 2020): 11–23. http://dx.doi.org/10.33414/rtyc.37.11-23.2020.

Full text
Abstract:
Identifying the sentiments expressed in textual opinions can be understood as categorizing them according to their characteristics, and is currently of great interest. Supervised learning is one of the most popular methods for text classification, but it requires a large amount of labeled training data. Semi-supervised learning overcomes this limitation, since it works with a small labeled dataset and a larger unlabeled one. A text-classification method combining both types of learning was developed. Short texts, or opinions, were collected from the social network Twitter, cleaned and preprocessed, and then classified into four emotions: anger, disgust, sadness, and happiness. The precision and recall obtained with the method were satisfactory and, as a result, a corpus of messages categorized by the expressed emotion was obtained.
APA, Harvard, Vancouver, ISO, and other styles
4

Putri, Vinna Utami, Eko Budi Cahyono, and Yufis Azhar. "Deteksi Botnet Pada Passive DNS Dengan Menggunakan Metode K Nearest Neighbor." Jurnal Repositor 2, no. 12 (December 4, 2020): 1631. http://dx.doi.org/10.22219/repositor.v2i12.450.

Full text
Abstract:
Internet technology is developing rapidly, and the number of its users is growing just as fast. One dangerous category of malicious software is the robot network (botnet): a network of millions of internet-connected devices infected with special malware so that they can be controlled remotely by cybercriminals to carry out attacks such as sending email, stealing personal information, and launching DDoS attacks. In this study the authors group and classify botnet and normal passive-DNS records from the CTU-13 dataset using the k-nearest neighbor method and evaluate the results with a confusion matrix, computing precision, recall, and accuracy for each predicted class in the Python programming language with the scikit-learn library. The results are fairly high: uniform weighting achieves 76% precision, 86% recall, and 93.9% accuracy; normalized uniform weighting 76% precision, 88% recall, and 83% accuracy; distance weighting 100% precision, 85% recall, and 92% accuracy; and normalized distance weighting 100% precision, 87% recall, and 93% accuracy.
APA, Harvard, Vancouver, ISO, and other styles
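For readers unfamiliar with the evaluation pipeline this abstract describes, here is a hedged sketch of a generic k-NN plus confusion-matrix workflow in scikit-learn. It uses synthetic data, not the CTU-13 passive-DNS features, and only illustrates the uniform and distance weightings the authors compare:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score

    # Synthetic stand-in for a labeled "botnet vs. normal" feature table.
    X, y = make_classification(n_samples=1000, n_features=10, weights=[0.7, 0.3], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    for weighting in ("uniform", "distance"):           # the two k-NN weightings compared above
        model = KNeighborsClassifier(n_neighbors=5, weights=weighting).fit(X_train, y_train)
        y_pred = model.predict(X_test)
        tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
        print(f"{weighting:>8}: tn={tn} fp={fp} fn={fn} tp={tp}  "
              f"precision={precision_score(y_test, y_pred):.2f}  "
              f"recall={recall_score(y_test, y_pred):.2f}  "
              f"accuracy={accuracy_score(y_test, y_pred):.2f}")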
5

López-Trujillo, Sebastián, and María C. Torres-Madroñero. "Comparación de algoritmos de resumen de texto para el procesamiento de editoriales y noticias en español." TecnoLógicas 24, no. 51 (June 11, 2021): e1816. http://dx.doi.org/10.22430/22565337.1816.

Full text
Abstract:
Language is affected not only by grammatical rules but also by context and sociocultural diversity, so automatic text summarization (an area of interest in natural language processing, NLP) faces challenges such as identifying the important fragments according to the context and the type of text being analyzed. Previous work describes different automatic summarization methods, but there are no studies of their effectiveness in specific contexts, nor on Spanish-language texts. This article presents a comparison of three automatic summarization algorithms using news articles and editorials in Spanish. The three algorithms are extractive methods that estimate the importance of a sentence or word from similarity metrics or word frequencies. A document collection of 33 editorials and 27 news articles was built, and a manual summary was produced for each text. The algorithms were compared quantitatively using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metric, and their ability to identify the main components of each text was also analyzed. For editorials, the automatic summary had to include the problem and the author's opinion, whereas for news articles it had to describe the temporal and spatial characteristics of an event. In terms of word-reduction percentage and precision, the similarity-matrix-based method gave the best results for both news articles and editorials, reducing both types of text by 70%. Nevertheless, semantics and context need to be incorporated into the algorithms to improve their performance in terms of precision and sensitivity (recall).
APA, Harvard, Vancouver, ISO, and other styles
6

Galarza Bravo, Michelle Alejandra, and Marco Flores. "Detección de peatones en la noche usando Faster R-CNN e imágenes infrarrojas." Ingenius, no. 20 (June 30, 2018): 48–57. http://dx.doi.org/10.17163/ings.n20.2018.05.

Full text
Abstract:
This article presents a nighttime pedestrian detection system for vehicle safety applications. The performance of the Faster R-CNN algorithm on far-infrared images was analyzed, showing that it has difficulty detecting pedestrians at long range. Consequently, a new Faster R-CNN architecture dedicated to multi-scale detection is presented, with two region-of-interest (ROI) generators dedicated to short- and long-range pedestrians, called RPNCD and RPNLD, respectively. This architecture was compared against the Faster R-CNN models that have reported the best results, VGG-16 and ResNet-101. Experiments were carried out on the CVC-09 and LSIFIR databases and showed improvements, especially for long-range pedestrian detection, with an error rate versus FPPI of 16% and, on the precision-recall curve, an AP of 89.85% for the pedestrian class and an mAP of 90% on the test sets of the LSIFIR and CVC-09 databases.
APA, Harvard, Vancouver, ISO, and other styles
7

Kuznetsova, Anna A. "Statistical Precision – Recall curves for object detection quality assessment." Journal Of Applied Informatics 15, no. 90 (December 28, 2020): 42–57. http://dx.doi.org/10.37791/2687-0649-2020-15-6-42-57.

Full text
Abstract:
Average precision (AP) as the area under the Precision – Recall curve is the de facto standard for comparing the quality of algorithms for classification, information retrieval, object detection, etc. However, traditional Precision – Recall curves usually have a zigzag shape, which makes it difficult to calculate the average precision and to compare algorithms. This paper proposes a statistical approach to the construction of Precision – Recall curves when assessing the quality of algorithms for object detection in images. This approach is based on calculating Statistical Precision and Statistical Recall. Instead of the traditional confidence level, a statistical confidence level is calculated for each image as a percentage of objects detected. For each threshold value of the statistical confidence level, the total number of correctly detected objects (Integral TP) and the total number of background objects mistakenly assigned by the algorithm to one of the classes (Integral FP) are calculated for each image. Next, the values of Precision and Recall are calculated. Precision – Recall statistical curves, unlike traditional curves, are guaranteed to be monotonically non-increasing. At the same time, the Statistical Average Precision of object detection algorithms on small test datasets turns out to be less than the traditional Average Precision. On relatively large test image datasets, these differences are smoothed out. The comparison of the use of conventional and statistical Precision – Recall curves is given on a specific example.
APA, Harvard, Vancouver, ISO, and other styles
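The construction in this paper is specific to its statistical confidence levels; as a point of contrast only, the sketch below (not the paper's method) shows the ordinary zigzag precision-recall curve produced by scikit-learn and the monotonically non-increasing "interpolated" envelope often used to smooth it:

    import numpy as np
    from sklearn.metrics import precision_recall_curve, average_precision_score

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=200)                 # synthetic labels
    scores = 0.4 * y_true + 0.6 * rng.random(200)         # noisy scores correlated with labels

    # scikit-learn returns the curve with recall in decreasing order.
    precision, recall, _ = precision_recall_curve(y_true, scores)

    # Interpolated envelope: precision at recall r is the best precision at any recall >= r.
    envelope = np.maximum.accumulate(precision)

    print(f"average precision of the raw curve: {average_precision_score(y_true, scores):.3f}")
    print("raw precision zigzags:", not bool(np.all(np.diff(precision) >= 0)))
    print("envelope is monotone:", bool(np.all(np.diff(envelope) >= 0)))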
8

POYNTON, MOLLIE R. "Recall to Precision." Clinical Nurse Specialist 17, no. 4 (July 2003): 182–84. http://dx.doi.org/10.1097/00002800-200307000-00012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Maharana, Adyasha, Kunlin Cai, Joseph Hellerstein, Yulin Hswen, Michael Munsell, Valentina Staneva, Miki Verma, Cynthia Vint, Derry Wijaya, and Elaine O. Nsoesie. "Detecting reports of unsafe foods in consumer product reviews." JAMIA Open 2, no. 3 (August 5, 2019): 330–38. http://dx.doi.org/10.1093/jamiaopen/ooz030.

Full text
Abstract:
Objectives: Access to safe and nutritious food is essential for good health. However, food can become unsafe due to contamination with pathogens, chemicals or toxins, or mislabeling of allergens. Illness resulting from the consumption of unsafe foods is a global health problem. Here, we develop a machine learning approach for detecting reports of unsafe food products in consumer product reviews from Amazon.com. Materials and Methods: We linked Amazon.com food product reviews to Food and Drug Administration (FDA) food recalls from 2012 to 2014 using text matching approaches in a PostGres relational database. We applied machine learning methods and over- and under-sampling methods to the linked data to automate the detection of reports of unsafe food products. Results: Our data consisted of 1,297,156 product reviews from Amazon.com. Only 5149 (0.4%) were linked to recalled food products. Bidirectional Encoder Representations from Transformers (BERT) performed best in identifying unsafe food reviews, achieving an F1 score, precision and recall of 0.74, 0.78, and 0.71, respectively. We also identified synonyms for terms associated with FDA recalls in more than 20,000 reviews, most of which were associated with nonrecalled products. This might suggest that many more products should have been recalled or investigated. Discussion and Conclusion: Challenges to improving food safety include urbanization, which has led to a longer food chain, underreporting of illness, and difficulty in linking contaminated food to illness. Our approach can improve food safety by enabling early identification of unsafe foods, which can lead to timely recalls, thereby limiting the health and economic impact on the public.
APA, Harvard, Vancouver, ISO, and other styles
10

Puspito, Yuda, FX Arinto Setyawan, and Helmy Fitriawan. "Deteksi Posisi Plat Nomor Kendaraan Menggunakan Metode Transformasi Hough dan Hit or Miss." Electrician 12, no. 3 (September 4, 2018): 118. http://dx.doi.org/10.23960/elc.v12n3.2084.

Full text
Abstract:
This research developed a system for detecting the position of vehicle license plates, displayed in a MATLAB GUI. The plate position is detected using two methods, the Hough transform and the hit-or-miss transform. The image-processing stages include binarization, gray-scale conversion, edge detection, image cropping, filtering, and resizing. The effectiveness of the system is measured by calculating recall and precision. The results show that the system successfully detected the license-plate position with a detection rate of 76% for a threshold of 0.75, 72% for a threshold of 0.8, and 48% for a threshold of 0.85. The results also show an average recall of 54% for a threshold of 0.75, 50% for a threshold of 0.8, and 40% for a threshold of 0.85, while the average precision was 14% for thresholds of 0.75 and 0.8 and 12% for a threshold of 0.85. Keywords: Hough transform, hit-or-miss transform, recall, precision.
APA, Harvard, Vancouver, ISO, and other styles
11

Robertson, Claire, Rana Conway, Barbara Dennis, John Yarnell, Jeremiah Stamler, and Paul Elliott. "Attainment of precision in implementation of 24h dietary recalls: INTERMAP UK." British Journal of Nutrition 94, no. 4 (October 2005): 588–94. http://dx.doi.org/10.1079/bjn20051543.

Full text
Abstract:
Collection of complete and accurate dietary intake data is necessary to investigate the association of nutrient intakes with disease outcomes. A standardised multiple-pass 24 h dietary recall method was used in the International Collaborative Study of Macro- and Micronutrients and Blood Pressure (INTERMAP) to obtain maximally objective data. Dietary interviewers were intensively trained and recalls taped, with consent, for randomly selected evaluations by the local site nutritionist (SN) and/or country nutritionists (CN) using a twelve-criterion checklist marked on a four-point scale (1, retrain, to 4, excellent). In the Belfast centre, seven dietary interviewers collected 932 24 h recalls from 40–59-year-old men and women. Total scores from the 134 evaluated recalls ranged from thirty-four to the maximum forty-eight points. All twelve aspects of the interviews were completed satisfactorily on average whether scored by the SN (n 53, range: probing 3·25 to privacy of interview 3·98) or CN (n 19, range: probing 3·26 to pace of interview and general manner of interviewer 3·95); the CN gave significantly lower scores than the SN for recalls evaluated by both nutritionists (n 31, Wilcoxon signed rank test, P=0·001). Five evaluations of three recalls identified areas requiring retraining or work to improve performance. Reporting accuracy was estimated using BMR; energy intake estimates less than 1·2 × BMR identifying under-reporting. Mean ratios in all age, sex and body-mass groups were above this cut-off point; overall, 26·1 % were below. Experiences from the INTERMAP Belfast centre indicate that difficulties in collection of dietary information can be anticipated and contained by the systematic use of methods to prevent, detect and correct errors.
APA, Harvard, Vancouver, ISO, and other styles
12

Piwowarski, B., P. Gallinari, and G. Dupret. "Precision recall with user modeling (PRUM)." ACM Transactions on Information Systems 25, no. 1 (February 2007): 1. http://dx.doi.org/10.1145/1198296.1198297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Slaney, Malcolm. "Precision-Recall Is Wrong for Multimedia." IEEE Multimedia 18, no. 3 (March 2011): 4–7. http://dx.doi.org/10.1109/mmul.2011.50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Cook, Jonathan, and Vikram Ramadas. "When to consult precision-recall curves." Stata Journal: Promoting communications on statistics and Stata 20, no. 1 (March 2020): 131–48. http://dx.doi.org/10.1177/1536867x20909693.

Full text
Abstract:
Receiver operating characteristic (ROC) curves are commonly used to evaluate predictions of binary outcomes. When there is a small percentage of items of interest (as would be the case with fraud detection, for example), ROC curves can provide an inflated view of performance. This can cause challenges in determining which set of predictions is better. In this article, we discuss the conditions under which precision-recall curves may be preferable to ROC curves. As an illustrative example, we compare two commonly used fraud predictors (Beneish’s [1999, Financial Analysts Journal 55: 24–36] M score and Dechow et al.’s [2011, Contemporary Accounting Research 28: 17–82] F score) using both ROC and precision-recall curves. To aid the reader with using precision-recall curves, we also introduce the command prcurve to plot them.
APA, Harvard, Vancouver, ISO, and other styles
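A small synthetic illustration of the point made in this abstract (it does not use the M-score or F-score fraud data): when positives are rare, ROC AUC can look strong while the area under the precision-recall curve stays modest.

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    rng = np.random.default_rng(1)
    n_neg, n_pos = 9900, 100                              # 1% positives, e.g. rare fraud cases
    y = np.r_[np.zeros(n_neg), np.ones(n_pos)]
    scores = np.r_[rng.normal(0.0, 1.0, n_neg),           # negatives
                   rng.normal(1.5, 1.0, n_pos)]           # positives score higher on average

    print(f"ROC AUC           : {roc_auc_score(y, scores):.3f}")            # looks comfortable
    print(f"average precision : {average_precision_score(y, scores):.3f}")  # far less flattering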
15

Gordon, Michael, and Manfred Kochen. "Recall-precision trade-off: A derivation." Journal of the American Society for Information Science 40, no. 3 (May 1989): 145–51. http://dx.doi.org/10.1002/(sici)1097-4571(198905)40:3<145::aid-asi1>3.0.co;2-i.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Buckland, Michael, and Fredric Gey. "The relationship between Recall and Precision." Journal of the American Society for Information Science 45, no. 1 (January 1994): 12–19. http://dx.doi.org/10.1002/(sici)1097-4571(199401)45:1<12::aid-asi2>3.0.co;2-l.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Arora, Monika, Uma Kanjilal, and Dinesh Varshney. "Evaluation of information retrieval: precision and recall." International Journal of Indian Culture and Business Management 12, no. 2 (2016): 224. http://dx.doi.org/10.1504/ijicbm.2016.074482.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Pramono, Djoko, and Nanang Yudi Setiawan. "PEMBANGUNAN LINK PENELUSURAN KEBUTUHAN FUNGSIONAL DAN METHOD PADA KODE SUMBER DENGAN METODE PENGAMBILAN INFORMASI." JURNAL ELTEK 16, no. 2 (December 19, 2018): 151. http://dx.doi.org/10.33795/eltek.v16i2.106.

Full text
Abstract:
Traceability links between requirements documents and source code are very helpful during software development and maintenance. During maintenance, developers change the source code but often do not update the accompanying documents. Traceability links between the requirements document and the source code are expected to make it faster to find the parts of the source code that must be changed when a requirement changes. This study evaluates two information retrieval (IR) methods, LSA (Latent Semantic Analysis) and LDA (Latent Dirichlet Allocation), for finding links between functional requirements and methods in program source code. LSA uses a mathematical-statistical model to analyze the semantic structure of a text; LDA is a generative probabilistic model for collections of discrete data such as a corpus. The first step builds a bag of words from the methods in the source code and from the functional requirements document. The next step computes semantic closeness using cosine similarity. Both methods were tested on a dataset consisting of the functional requirements and source code of iTrust and GanttProject, and precision and recall were then computed. With the LDA method, an F-measure of 0.26 was obtained at a recall of 0.23 and a precision of 0.305. LDA gives better results than LSA, but the precision and recall of both methods are still low.
APA, Harvard, Vancouver, ISO, and other styles
19

Lambert, Nancy. "Online searching of polymer patents: precision and recall." Journal of Chemical Information and Modeling 31, no. 4 (November 1, 1991): 443–46. http://dx.doi.org/10.1021/ci00004a002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Huang, Hailiang, and Joel S. Bader. "Precision and recall estimates for two-hybrid screens." Bioinformatics 25, no. 3 (December 17, 2008): 372–78. http://dx.doi.org/10.1093/bioinformatics/btn640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Chadwick, M. J., H. M. Bonnici, and E. A. Maguire. "CA3 size predicts the precision of memory recall." Proceedings of the National Academy of Sciences 111, no. 29 (July 7, 2014): 10720–25. http://dx.doi.org/10.1073/pnas.1319641111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Li, Shih-Hao, and Peter B. Danzig. "Precision and recall of ranking information-filtering systems." Journal of Intelligent Information Systems 7, no. 3 (November 1996): 287–306. http://dx.doi.org/10.1007/bf00125371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

William H. Walters. "Google Scholar Search Performance: Comparative Recall and Precision." portal: Libraries and the Academy 9, no. 1 (2008): 5–24. http://dx.doi.org/10.1353/pla.0.0034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

BISWAS, ASHIS KUMER, BAOJU ZHANG, XIAOYONG WU, and JEAN X. GAO. "CNCTDISCRIMINATOR: CODING AND NONCODING TRANSCRIPT DISCRIMINATOR — AN EXCURSION THROUGH HYPOTHESIS LEARNING AND ENSEMBLE LEARNING APPROACHES." Journal of Bioinformatics and Computational Biology 11, no. 05 (October 2013): 1342002. http://dx.doi.org/10.1142/s021972001342002x.

Full text
Abstract:
The statistics of the open reading frames, the base compositions, and the properties of the predicted secondary structures have the potential to address the problem of discriminating coding and noncoding transcripts. In addition, the next-generation sequencing platform RNA-seq provides a wealth of data from which expression profiles of the transcripts can be extracted, which prompted us to add a new set of dimensions to this classification task. In this paper, we propose CNCTDiscriminator — a coding and noncoding transcript discriminating system in which we integrate these four categories of features about the transcripts. The feature integration was done using both hypothesis learning and feature-specific ensemble learning approaches. The CNCTDiscriminator model trained with composition and ORF features outperforms (precision 83.86%, recall 82.01%) three other popular methods — CPC (precision 98.31%, recall 25.95%), CPAT (precision 97.74%, recall 52.50%) and PORTRAIT (precision 84.37%, recall 73.2%) — when applied to an independent benchmark dataset. However, the CNCTDiscriminator model trained using the ensemble approach shows comparable performance (precision 89.85%, recall 71.08%).
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Yu, and Jihong Li. "Credible Intervals for Precision and Recall Based on a K-Fold Cross-Validated Beta Distribution." Neural Computation 28, no. 8 (August 2016): 1694–722. http://dx.doi.org/10.1162/neco_a_00857.

Full text
Abstract:
In typical machine learning applications such as information retrieval, precision and recall are two commonly used measures for assessing an algorithm's performance. Symmetrical confidence intervals based on K-fold cross-validated t distributions are widely used for the inference of precision and recall measures. As we confirmed through simulated experiments, however, these confidence intervals often exhibit lower degrees of confidence, which may easily lead to liberal inference results. Thus, it is crucial to construct faithful confidence (credible) intervals for precision and recall with a high degree of confidence and a short interval length. In this study, we propose two posterior credible intervals for precision and recall based on K-fold cross-validated beta distributions. The first credible interval for precision (or recall) is constructed based on the beta posterior distribution inferred by all K data sets corresponding to K confusion matrices from a K-fold cross-validation. Second, considering that each data set corresponding to a confusion matrix from a K-fold cross-validation can be used to infer a beta posterior distribution of precision (or recall), the second proposed credible interval for precision (or recall) is constructed based on the average of K beta posterior distributions. Experimental results on simulated and real data sets demonstrate that the first credible interval proposed in this study almost always resulted in degrees of confidence greater than 95%. With an acceptable degree of confidence, both of our two proposed credible intervals have shorter interval lengths than those based on a corrected K-fold cross-validated t distribution. Meanwhile, the average ranks of these two credible intervals are superior to that of the confidence interval based on a K-fold cross-validated t distribution for the degree of confidence and are superior to that of the confidence interval based on a corrected K-fold cross-validated t distribution for the interval length in all 27 cases of simulated and real data experiments. However, the confidence intervals based on the K-fold and corrected K-fold cross-validated t distributions are in the two extremes. Thus, when focusing on the reliability of the inference for precision and recall, the proposed methods are preferable, especially for the first credible interval.
APA, Harvard, Vancouver, ISO, and other styles
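The intervals proposed in this article are built from K-fold cross-validated beta distributions; the sketch below covers only the simpler single-confusion-matrix case, assuming a uniform Beta(1, 1) prior so that precision has a Beta(TP + 1, FP + 1) posterior:

    from scipy.stats import beta

    tp, fp = 45, 5                          # hypothetical counts from one confusion matrix
    posterior = beta(tp + 1, fp + 1)        # posterior over precision under a Beta(1, 1) prior

    lower, upper = posterior.ppf([0.025, 0.975])
    print(f"point estimate of precision : {tp / (tp + fp):.3f}")
    print(f"95% credible interval       : ({lower:.3f}, {upper:.3f})")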
26

Liu, Xiao. "The Improvement of Computer Information Retrieval Efficiency." Applied Mechanics and Materials 651-653 (September 2014): 1984–87. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.1984.

Full text
Abstract:
Recall and precision ratios are two important indices for evaluating the effectiveness of computer information retrieval. This paper analyzes the factors that affect recall and precision and then discusses how to improve computer information retrieval efficiency in terms of the retrieval approach, database selection, retrieval pattern, and other aspects.
APA, Harvard, Vancouver, ISO, and other styles
27

Bautista-Gomez, Leonardo, Anne Benoit, Aurélien Cavelan, Saurabh K. Raina, Yves Robert, and Hongyang Sun. "Coping with recall and precision of soft error detectors." Journal of Parallel and Distributed Computing 98 (December 2016): 8–24. http://dx.doi.org/10.1016/j.jpdc.2016.07.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Su, Louise T. "The relevance of recall and precision in user evaluation." Journal of the American Society for Information Science 45, no. 3 (April 1994): 207–17. http://dx.doi.org/10.1002/(sici)1097-4571(199404)45:3<207::aid-asi10>3.0.co;2-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ziółko, Bartosz. "Fuzzy precision and recall measures for audio signals segmentation." Fuzzy Sets and Systems 279 (November 2015): 101–11. http://dx.doi.org/10.1016/j.fss.2015.03.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Williams, Christopher K. I. "The Effect of Class Imbalance on Precision-Recall Curves." Neural Computation 33, no. 4 (April 1, 2021): 853–57. http://dx.doi.org/10.1162/neco_a_01362.

Full text
Abstract:
In this note, I study how the precision of a binary classifier depends on the ratio r of positive to negative cases in the test set, as well as the classifier's true and false-positive rates. This relationship allows prediction of how the precision-recall curve will change with r, which seems not to be well known. It also allows prediction of how Fβ and the precision gain and recall gain measures of Flach and Kull (2015) vary with r.
APA, Harvard, Vancouver, ISO, and other styles
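The dependence this note studies follows directly from the definitions: with r the ratio of positives to negatives in the test set, precision = r·TPR / (r·TPR + FPR). A tiny sketch with illustrative numbers:

    def precision_from_rates(tpr: float, fpr: float, r: float) -> float:
        """precision = r*TPR / (r*TPR + FPR), with r = positives/negatives in the test set."""
        return r * tpr / (r * tpr + fpr)

    tpr, fpr = 0.80, 0.10                     # hypothetical classifier operating point
    for r in (1.0, 0.1, 0.01):                # balanced, 10:1 and 100:1 imbalance
        print(f"r = {r:<4}  precision = {precision_from_rates(tpr, fpr, r):.3f}")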
31

Basmalah Wicaksono, Viko, Ristu Saptono, and Sari Widya Sihwi. "Analisis Perbandingan Metode Vector Space Model dan Weighted Tree Similarity dengan Cosine Similarity pada kasus Pencarian Informasi Pedoman Pengobatan Dasar di Puskesmas." Jurnal Teknologi & Informasi ITSmart 4, no. 2 (September 3, 2016): 73. http://dx.doi.org/10.20961/its.v4i2.1768.

Full text
Abstract:
A search system is one solution that helps users obtain the information they need, making the information-seeking process more efficient. A search system for the e-book of basic treatment guidelines used in community health centers (puskesmas) is needed because the guidelines contain data on many diseases. Such a search system can be built with the Vector Space Model (VSM) or with Weighted Tree Similarity (WTS). This study compares VSM with WTS to determine the better method; in addition, the Hamming Distance algorithm was added to measure its effect on execution time. The study shows that WTS is better than VSM: in the tests, the precision of WTS is higher than that of VSM, and an effective search method is one that gives the best precision even if its recall is lower. In the system tests, VSM obtained an average precision of 44.82983% and a recall of 99.08165%, while WTS obtained an average precision of 52.17332% and a recall of 98.61761%. In the expert evaluation, WTS reached an average precision of 46.675% and a recall of 73.6111%, while VSM reached a precision of 33.6737% and a recall of 86.8056%. The Hamming Distance algorithm greatly speeds up execution: with it, VSM averaged 4.512 seconds per test versus 9.185 seconds without it, and WTS averaged 6.042 seconds versus 14.421 seconds without it.
APA, Harvard, Vancouver, ISO, and other styles
32

Wardani, Ni Wayan, Gede Rasben Dantes, and Gede Indrawan. "Prediksi Customer Churn dengan Algoritma Decision Tree C4.5 Berdasarkan Segmentasi Pelanggan untuk Mempertahankan Pelanggan pada Perusahaan Retail." Jurnal RESISTOR (Rekayasa Sistem Komputer) 1, no. 1 (April 21, 2018): 16–24. http://dx.doi.org/10.31598/jurnalresistor.v1i1.219.

Full text
Abstract:
Customers are a very important asset for retail companies, which is why retail companies should plan and use a fairly clear strategy for treating customers. With a large number of customers, the problem to be faced is how to identify the characteristics of all customers and how to retain existing customers so that they do not stop buying and move to a competitor. By applying the concept of CRM, a company can identify customers through segmentation and also implement customer retention programs by predicting potential churn in each customer class. The data used come from UD. Mawar Sari. The customer segmentation process uses the RFM model to obtain customer classes; the UD. Mawar Sari customer classes are dormant, everyday, golden, and superstar. The prediction models are built with the C4.5 decision tree. The prediction models obtain the following performance: Dormant: recall 97.51%, precision 75.18%, accuracy 76.18%. Everyday: recall 100%, precision 99.04%, accuracy 99.04%. Golden: recall 100%, precision 98.84%, accuracy 98.84%. Superstar: recall 96.15%, precision 99.43%, accuracy 95.63%. From the evaluation with the confusion matrix, it can be concluded that the dormant customer class is the class of potentially churning customers.
APA, Harvard, Vancouver, ISO, and other styles
33

Meystre, Stéphane M., Youngjun Kim, Glenn T. Gobbel, Michael E. Matheny, Andrew Redd, Bruce E. Bray, and Jennifer H. Garvin. "Congestive heart failure information extraction framework for automated treatment performance measures assessment." Journal of the American Medical Informatics Association 24, e1 (July 12, 2016): e40-e46. http://dx.doi.org/10.1093/jamia/ocw097.

Full text
Abstract:
Objective: This paper describes a new congestive heart failure (CHF) treatment performance measure information extraction system – CHIEF – developed as part of the Automated Data Acquisition for Heart Failure project, a Veterans Health Administration project aiming at improving the detection of patients not receiving recommended care for CHF. Design: CHIEF is based on the Apache Unstructured Information Management Architecture framework, and uses a combination of rules, dictionaries, and machine learning methods to extract left ventricular function mentions and values, CHF medications, and documented reasons for a patient not receiving these medications. Measurements: The training and evaluation of CHIEF were based on subsets of a reference standard of various clinical notes from 1083 Veterans Health Administration patients. Domain experts manually annotated these notes to create our reference standard. Metrics used included recall, precision, and the F1-measure. Results: In general, CHIEF extracted CHF medications with high recall (>0.990) and good precision (0.960–0.978). Mentions of Left Ventricular Ejection Fraction were also extracted with high recall (0.978–0.986) and precision (0.986–0.994), and quantitative values of Left Ventricular Ejection Fraction were found with 0.910–0.945 recall and with high precision (0.939–0.976). Reasons for not prescribing CHF medications were more difficult to extract, only reaching fair accuracy with about 0.310–0.400 recall and 0.250–0.320 precision. Conclusion: This study demonstrated that applying natural language processing to unlock the rich and detailed clinical information found in clinical narrative text notes makes fast and scalable quality improvement approaches possible, eventually improving management and outpatient treatment of patients suffering from CHF.
APA, Harvard, Vancouver, ISO, and other styles
34

Cooper, Matthew J., Randall V. Martin, Alexei I. Lyapustin, and Chris A. McLinden. "Assessing snow extent data sets over North America to inform and improve trace gas retrievals from solar backscatter." Atmospheric Measurement Techniques 11, no. 5 (May 22, 2018): 2983–94. http://dx.doi.org/10.5194/amt-11-2983-2018.

Full text
Abstract:
Accurate representation of surface reflectivity is essential to tropospheric trace gas retrievals from solar backscatter observations. Surface snow cover presents a significant challenge due to its variability and thus snow-covered scenes are often omitted from retrieval data sets; however, the high reflectance of snow is potentially advantageous for trace gas retrievals. We first examine the implications of surface snow on retrievals from the upcoming TEMPO geostationary instrument for North America. We use a radiative transfer model to examine how an increase in surface reflectivity due to snow cover changes the sensitivity of satellite retrievals to NO2 in the lower troposphere. We find that a substantial fraction (> 50 %) of the TEMPO field of regard can be snow covered in January and that the average sensitivity to the tropospheric NO2 column substantially increases (doubles) when the surface is snow covered. We then evaluate seven existing satellite-derived or reanalysis snow extent products against ground station observations over North America to assess their capability of informing surface conditions for TEMPO retrievals. The Interactive Multisensor Snow and Ice Mapping System (IMS) had the best agreement with ground observations (accuracy of 93 %, precision of 87 %, recall of 83 %). Multiangle Implementation of Atmospheric Correction (MAIAC) retrievals of MODIS-observed radiances had high precision (90 % for Aqua and Terra), but underestimated the presence of snow (recall of 74 % for Aqua, 75 % for Terra). MAIAC generally outperforms the standard MODIS products (precision of 51 %, recall of 43 % for Aqua; precision of 69 %, recall of 45 % for Terra). The Near-real-time Ice and Snow Extent (NISE) product had good precision (83 %) but missed a significant number of snow-covered pixels (recall of 45 %). The Canadian Meteorological Centre (CMC) Daily Snow Depth Analysis Data set had strong performance metrics (accuracy of 91 %, precision of 79 %, recall of 82 %). We use the F-score, which balances precision and recall, to determine overall product performance (F = 85 %, 82 (82) %, 81 %, 58 %, 46 (54) % for IMS, MAIAC Aqua (Terra), CMC, NISE, MODIS Aqua (Terra), respectively) for providing snow cover information for TEMPO retrievals from solar backscatter observations. We find that using IMS to identify snow cover and enable inclusion of snow-covered scenes in clear-sky conditions across North America in January can increase both the number of observations by a factor of 2.1 and the average sensitivity to the tropospheric NO2 column by a factor of 2.7.
APA, Harvard, Vancouver, ISO, and other styles
35

Matthies, Benjamin. "Performancemaße von Business Analytics Methoden." Controlling 32, no. 4 (2020): 79–80. http://dx.doi.org/10.15358/0935-0381-2020-4-79.

Full text
Abstract:
Controllers should familiarize themselves with the meaning and limitations of the performance measures presented here. Accuracy is a simple measure of a model's correctness, but it is only informative for datasets with balanced classes. Precision and recall are two complementary metrics for extended model evaluation, but there is a tension between them: improving the completeness (recall) of the positive classifications generally also produces increasingly imprecise results, which in turn reduces precision. When interpreting the F1 score, note that precision and recall are weighted equally in its calculation. In addition to the metrics presented, there is a range of further performance-evaluation procedures, such as the ROC (receiver operating characteristic) curve. For an extended treatment of these and other metrics, see Seiter (2019, p. 148 f.).
APA, Harvard, Vancouver, ISO, and other styles
36

Isnaini, Rizki Shofak, and Jamzanah Wahyu Widayati. "EFEKTIVITAS OPAC SEBAGAI SARANA TEMU KEMBALI INFORMASI DI UPT PERPUSTAKAAN UNIVERSITAS MUHAMMADIYAH MAGELANG (UNIMMA)." Fihris: Jurnal Ilmu Perpustakaan dan Informasi 16, no. 1 (June 30, 2021): 80. http://dx.doi.org/10.14421/fhrs.2021.161.80-95.

Full text
Abstract:
Evaluating an information retrieval tool is important for finding out how well the retrieval system works. This study evaluates the OPAC of Muhammadiyah University of Magelang (UNIMMA) by determining the effectiveness of its information retrieval system, judged by the relevance of the records displayed to the records requested by the user. This is descriptive quantitative research that calculates recall and precision values, with data collected through literature studies and documentation. Based on the data collected, it can be concluded that OPAC UNIMMA is in the effective category with a high level of precision, lying in the value range 0.68-1.00 with recall and precision values of 0.77 (77%) and 0.84 (84%), respectively. However, observations per class number show that some class numbers have a higher recall value than precision, namely class numbers 200, 800, and 900.
APA, Harvard, Vancouver, ISO, and other styles
37

Raghavan, V. V., P. Bollmann, and G. S. Jung. "Retrieval system evaluation using recall and precision: problems and answers." ACM SIGIR Forum 23, SI (June 25, 1989): 59–68. http://dx.doi.org/10.1145/75335.75342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Chipperfield, James, Noel Hansen, and Peter Rossiter. "Estimating Precision and Recall for Deterministic and Probabilistic Record Linkage." International Statistical Review 86, no. 2 (February 1, 2018): 219–36. http://dx.doi.org/10.1111/insr.12246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Keilwagen, Jens, Ivo Grosse, and Jan Grau. "Area under Precision-Recall Curves for Weighted and Unweighted Data." PLoS ONE 9, no. 3 (March 20, 2014): e92209. http://dx.doi.org/10.1371/journal.pone.0092209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bollmann, Peter, Vijay V. Raghavan, Gwang S. Jung, and Lih C. Shu. "On probabilistic notions of precision as a function of recall." Information Processing & Management 28, no. 3 (January 1992): 291–315. http://dx.doi.org/10.1016/0306-4573(92)90077-d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Zhongkai, and Howard D. Bondell. "Binormal Precision–Recall Curves for Optimal Classification of Imbalanced Data." Statistics in Biosciences 11, no. 1 (February 11, 2019): 141–61. http://dx.doi.org/10.1007/s12561-019-09231-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Zahedi, M., and A. Ghanbari Sorkhi. "Improving Text Classification Performance Using PCA and Recall-Precision Criteria." Arabian Journal for Science and Engineering 38, no. 8 (March 9, 2013): 2095–102. http://dx.doi.org/10.1007/s13369-013-0569-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Mishra, Bhavana Dalvi, Niket Tandon, and Peter Clark. "Domain-Targeted, High Precision Knowledge Extraction." Transactions of the Association for Computational Linguistics 5 (December 2017): 233–46. http://dx.doi.org/10.1162/tacl_a_00058.

Full text
Abstract:
Our goal is to construct a domain-targeted, high precision knowledge base (KB), containing general (subject,predicate,object) statements about the world, in support of a downstream question-answering (QA) application. Despite recent advances in information extraction (IE) techniques, no suitable resource for our task already exists; existing resources are either too noisy, too named-entity centric, or too incomplete, and typically have not been constructed with a clear scope or purpose. To address these, we have created a domain-targeted, high precision knowledge extraction pipeline, leveraging Open IE, crowdsourcing, and a novel canonical schema learning algorithm (called CASI), that produces high precision knowledge targeted to a particular domain - in our case, elementary science. To measure the KB’s coverage of the target domain’s knowledge (its “comprehensiveness” with respect to science) we measure recall with respect to an independent corpus of domain text, and show that our pipeline produces output with over 80% precision and 23% recall with respect to that target, a substantially higher coverage of tuple-expressible science knowledge than other comparable resources. We have made the KB publicly available.
APA, Harvard, Vancouver, ISO, and other styles
44

He, Jing Fang, and Jing Ye Peng. "A Shape Retrieval Study Based on Geometric Signature." Advanced Materials Research 411 (November 2011): 597–601. http://dx.doi.org/10.4028/www.scientific.net/amr.411.597.

Full text
Abstract:
This paper discusses a shape retrieval study based on geometric signature. We apply a variation signature method to extract the geometric feature of shapes. Then, we compare the performance of geometric signature and Fourier descriptor. The geometric signature gets a recall of 74.498%, precision of 71.519%, and F-measure of 72.978% while the Fourier descriptor gets a recall of 18.326%, precision of 17.593%, and F-measure of 17.952%. The proposed method performs better.
APA, Harvard, Vancouver, ISO, and other styles
45

Pradhan, Biswajeet, Husam A. H. Al-Najjar, Maher Ibrahim Sameen, Ivor Tsang, and Abdullah M. Alamri. "Unseen Land Cover Classification from High-Resolution Orthophotos Using Integration of Zero-Shot Learning and Convolutional Neural Networks." Remote Sensing 12, no. 10 (May 23, 2020): 1676. http://dx.doi.org/10.3390/rs12101676.

Full text
Abstract:
Zero-shot learning (ZSL) is an approach to classify objects unseen during the training phase and shown to be useful for real-world applications, especially when there is a lack of sufficient training data. Only a limited amount of works has been carried out on ZSL, especially in the field of remote sensing. This research investigates the use of a convolutional neural network (CNN) as a feature extraction and classification method for land cover mapping using high-resolution orthophotos. In the feature extraction phase, we used a CNN model with a single convolutional layer to extract discriminative features. In the second phase, we used class attributes learned from the Word2Vec model (pre-trained by Google News) to train a second CNN model that performed class signature prediction by using both the features extracted by the first CNN and class attributes during training and only the features during prediction. We trained and tested our models on datasets collected over two subareas in the Cameron Highlands (training dataset, first test dataset) and Ipoh (second test dataset) in Malaysia. Several experiments have been conducted on the feature extraction and classification models regarding the main parameters, such as the network’s layers and depth, number of filters, and the impact of Gaussian noise. As a result, the best models were selected using various accuracy metrics such as top-k categorical accuracy for k = [1,2,3], Recall, Precision, and F1-score. The best model for feature extraction achieved 0.953 F1-score, 0.941 precision, 0.882 recall for the training dataset and 0.904 F1-score, 0.869 precision, 0.949 recall for the first test dataset, and 0.898 F1-score, 0.870 precision, 0.838 recall for the second test dataset. The best model for classification achieved an average of 0.778 top-one, 0.890 top-two and 0.942 top-three accuracy, 0.798 F1-score, 0.766 recall and 0.838 precision for the first test dataset and 0.737 top-one, 0.906 top-two, 0.924 top-three, 0.729 F1-score, 0.676 recall and 0.790 precision for the second test dataset. The results demonstrated that the proposed ZSL is a promising tool for land cover mapping based on high-resolution photos.
APA, Harvard, Vancouver, ISO, and other styles
46

Kaur, Ramanpreet. "An Environment for detection of Bugs through SVM." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 14, no. 8 (June 2, 2015): 6009–13. http://dx.doi.org/10.24297/ijct.v14i8.1856.

Full text
Abstract:
Mining techniques find hidden patterns in the data stored in repositories and turn them into useful information and knowledge. Most open-source software development projects include an open bug repository to which users of the software have full access. This paper studies the training data size for bug priority classification using SVM and calculates the precision and recall of different feature categories for each class. Bug reports with status NEW, UNCONFIRMED, or ASSIGNED are not included in the training set, because the priority level of these reports may not be authentic. A triager's task is to manage the bug repository so that it contains only real bugs and important bugs are addressed quickly. Precision and recall are used to measure the accuracy of the classifier; bug priority classification requires high precision and recall, especially for the higher priorities.
APA, Harvard, Vancouver, ISO, and other styles
47

Al Rivan, Muhammad Ezar, and Gabriela Repca Sung. "Identifikasi Mutu Buah Pepaya California (Carica Papaya L.) Menggunakan Metode Jaringan Syaraf Tiruan." Jurnal Sisfokom (Sistem Informasi dan Komputer) 10, no. 1 (April 20, 2021): 113–19. http://dx.doi.org/10.32736/sisfokom.v10i1.1105.

Full text
Abstract:
Papaya is a fruit that grows in tropical areas, and one of the most popular varieties is the California papaya. The quality of California papaya fruit can be identified from its color, defects, and size, which are extracted from an image of the fruit. The dataset used in this research consists of 150 images of California papaya covering three quality classes: good, fair, and low. The papayas are identified using a backpropagation neural network, comparing 17 training functions, each with three different numbers of neurons in the hidden layer. The best results were obtained with the trainrp training function: with 10 neurons, 81.33% accuracy, 73.37% precision, and 72% recall; with 20 neurons, 82.67% accuracy, 75.24% precision, and 74% recall; and with 25 neurons, 80.89% accuracy, 74.42% precision, and 71.33% recall.
APA, Harvard, Vancouver, ISO, and other styles
48

NAIR, PRIYA C., and Dr T. Jebarajan. "Enhanced LTrP For Image Retrieval." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 12, no. 9 (March 17, 2014): 3912–20. http://dx.doi.org/10.24297/ijct.v12i9.2832.

Full text
Abstract:
Local Tetra Pattern (LTrP) is an image retrieval and indexing algorithm for content-based image retrieval (CBIR) that significantly improved the precision and recall rates of retrieved images. Enhanced LTrP for Image Retrieval (ELIR) proposes a novel image-retrieval method that adds the features of coarseness, contrast, directionality, and busyness to LTrP. The experimental results show that retrieval precision and recall improve over using LTrP alone.
APA, Harvard, Vancouver, ISO, and other styles
49

Miura, Hideharu, Shuichi Ozawa, Tsubasa Enosaki, Masahiro Hayata, Kiyoshi Yamada, and Yasushi Nagata. "Gantry angle classification with a fluence map in intensity-modulated radiotherapy for prostate cases using machine learning." Polish Journal of Medical Physics and Engineering 24, no. 4 (December 1, 2018): 165–69. http://dx.doi.org/10.2478/pjmpe-2018-0023.

Full text
Abstract:
We investigated the gantry-angle classifier performance with a fluence map using three machine-learning algorithms, and compared it with human performance. Eighty prostate cases were investigated using a seven-field intensity-modulated radiotherapy (IMRT) plan with beam angles of 0°, 50°, 100°, 155°, 205°, 260°, and 310°. The k-nearest neighbor (k-NN), logistic regression (LR), and support vector machine (SVM) algorithms were used. In the observer test, three radiotherapists assessed the gantry angle classification in a blind manner. The precision and recall rates were calculated for the machine learning and observer tests. The average precision rates of the k-NN and LR algorithms were 94.8% and 97.9%, respectively. The average recall rates of the k-NN and LR algorithms were 94.3% and 97.9%, respectively. The SVM had 100% precision and recall rates. The gantry angles of 0°, 155°, and 205° had an accuracy of 100% in all algorithms. In the observer test, the average precision and recall rates were 82.6% and 82.6%, respectively. All observers could easily classify the gantry angles of 0°, 155°, and 205° with a high degree of accuracy. Misclassifications occurred for gantry angles of 50°, 100°, 260°, and 310°. Machine learning could classify gantry angles for prostate IMRT better than human beings. In particular, the SVM algorithm achieved a perfect classification of 100%.
APA, Harvard, Vancouver, ISO, and other styles
50

Christensen, Cade, Torrey Wagner, and Brent Langhals. "Year-Independent Prediction of Food Insecurity Using Classical and Neural Network Machine Learning Methods." AI 2, no. 2 (May 23, 2021): 244–60. http://dx.doi.org/10.3390/ai2020015.

Full text
Abstract:
Current food crisis predictions are developed by the Famine Early Warning System Network, but they fail to classify the majority of food crisis outbreaks with model metrics of recall (0.23), precision (0.42), and f1 (0.30). In this work, using a World Bank dataset, classical and neural network (NN) machine learning algorithms were developed to predict food crises in 21 countries. The best classical logistic regression algorithm achieved a high level of significance (p < 0.001) and precision (0.75) but was deficient in recall (0.20) and f1 (0.32). Of particular interest, the classical algorithm indicated that the vegetation index and the food price index were both positively correlated with food crises. A novel method for performing an iterative multidimensional hyperparameter search is presented, which resulted in significantly improved performance when applied to this dataset. Four iterations were conducted, which resulted in excellent 0.96 for metrics of precision, recall, and f1. Due to this strong performance, the food crisis year was removed from the dataset to prevent immediate extrapolation when used on future data, and the modeling process was repeated. The best “no year” model metrics remained strong, achieving ≥0.92 for recall, precision, and f1 while meeting a 10% f1 overfitting threshold on the test (0.84) and holdout (0.83) datasets. The year-agnostic neural network model represents a novel approach to classify food crises and outperforms current food crisis prediction efforts.
APA, Harvard, Vancouver, ISO, and other styles