Academic literature on the topic 'Classifiers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Classifiers.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Classifiers"

1

Amerineni, Rajesh, Resh S. Gupta, and Lalit Gupta. "Multimodal Object Classification Models Inspired by Multisensory Integration in the Brain." Brain Sciences 9, no. 1 (2019): 3. http://dx.doi.org/10.3390/brainsci9010003.

Full text
Abstract:
Two multimodal classification models aimed at enhancing object classification through the integration of semantically congruent unimodal stimuli are introduced. The feature-integrating model, inspired by multisensory integration in the subcortical superior colliculus, combines unimodal features which are subsequently classified by a multimodal classifier. The decision-integrating model, inspired by integration in primary cortical areas, classifies unimodal stimuli independently using unimodal classifiers and classifies the combined decisions using a multimodal classifier.
APA, Harvard, Vancouver, ISO, and other styles
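
The two fusion strategies described in the abstract above (feature integration versus decision integration) can be illustrated with a brief sketch. Everything below, including the synthetic two-modality data, the logistic-regression models, and all parameter values, is an illustrative assumption rather than the setup from the cited paper.

```python
# Hedged sketch of the two fusion strategies, with placeholder data and models:
# feature integration concatenates unimodal features before a single classifier;
# decision integration trains one classifier per modality and then classifies
# the stacked unimodal decision scores.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=400)                                   # class labels
X_audio = rng.normal(loc=y[:, None], scale=1.0, size=(400, 10))    # modality 1
X_visual = rng.normal(loc=y[:, None], scale=1.0, size=(400, 15))   # modality 2
idx_train, idx_test = train_test_split(np.arange(400), random_state=0)

# Feature-integrating model: concatenated features, one multimodal classifier.
X_fused = np.hstack([X_audio, X_visual])
feat_model = LogisticRegression(max_iter=1000).fit(X_fused[idx_train], y[idx_train])
print("feature integration:", feat_model.score(X_fused[idx_test], y[idx_test]))

# Decision-integrating model: unimodal classifiers first, then a classifier
# over their combined decision scores.
uni_a = LogisticRegression(max_iter=1000).fit(X_audio[idx_train], y[idx_train])
uni_v = LogisticRegression(max_iter=1000).fit(X_visual[idx_train], y[idx_train])
decisions = np.column_stack([uni_a.predict_proba(X_audio)[:, 1],
                             uni_v.predict_proba(X_visual)[:, 1]])
dec_model = LogisticRegression().fit(decisions[idx_train], y[idx_train])
print("decision integration:", dec_model.score(decisions[idx_test], y[idx_test]))
```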
2

Li, Gang, Mengdi Shen, Meixuan Li, and Jingyi Cheng. "Personal Credit Default Discrimination Model Based on Super Learner Ensemble." Mathematical Problems in Engineering 2021 (March 31, 2021): 1–16. http://dx.doi.org/10.1155/2021/5586120.

Full text
Abstract:
Assessing the default of customers is an essential basis for personal credit issuance. This paper considers developing a personal credit default discrimination model based on Super Learner heterogeneous ensemble to improve the accuracy and robustness of default discrimination. First, we select six kinds of single classifiers such as logistic regression, SVM, and three kinds of homogeneous ensemble classifiers such as random forest to build a base classifier candidate library for Super Learner. Then, we use the ten-fold cross-validation method to exercise the base classifier to improve the base…
APA, Harvard, Vancouver, ISO, and other styles
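
As a rough illustration of the cross-validated stacking idea behind a Super Learner, here is a minimal sketch using scikit-learn's StackingClassifier. The dataset, base learners, meta-learner, and hyperparameters are placeholders and do not reproduce the cited paper's configuration.

```python
# Hedged sketch of a cross-validated stacking ("Super Learner"-style) ensemble.
# The data, base learners, meta-learner, and hyperparameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base classifier "candidate library"; their out-of-fold predictions, obtained
# via internal cross-validation, are what the meta-learner is trained on.
base_learners = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("svm", SVC(probability=True)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
]
super_learner = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=10,                      # ten-fold cross-validation, as in the abstract
    stack_method="predict_proba",
)
super_learner.fit(X_train, y_train)
print("held-out accuracy:", super_learner.score(X_test, y_test))
```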
3

Song, Chongya, Alexander Pons, and Kang Yen. "Sieve: An Ensemble Algorithm Using Global Consensus for Binary Classification." AI 1, no. 2 (2020): 242–62. http://dx.doi.org/10.3390/ai1020016.

Full text
Abstract:
In the field of machine learning, an ensemble approach is often utilized as an effective means of improving on the accuracy of multiple weak base classifiers. A concern associated with these ensemble algorithms is that they can suffer from the Curse of Conflict, where a classifier’s true prediction is negated by another classifier’s false prediction during the consensus period. Another concern of the ensemble technique is that it cannot effectively mitigate the problem of Imbalanced Classification, where an ensemble classifier usually presents a similar magnitude of bias to the same class as…
APA, Harvard, Vancouver, ISO, and other styles
4

Gaikwad, D. P. "Intrusion Detection System Using Ensemble of Rule Learners and First Search Algorithm as Feature Selectors." International Journal of Computer Network and Information Security 13, no. 4 (2021): 26–34. http://dx.doi.org/10.5815/ijcnis.2021.04.03.

Full text
Abstract:
Recently, use of the Internet for digital communication has increased, with a lot of sensitive information shared between computers and mobile devices. For secure communication, data or information must be protected from adversaries. There are many security methods, such as encryption, firewalls, and access control. Intrusion detection systems are mainly used to detect internal attacks in an organization. Machine learning techniques are mostly used to implement intrusion detection systems. Ensemble methods of machine learning give high accuracy by combining moderately accurate classifiers.
APA, Harvard, Vancouver, ISO, and other styles
5

Assumpção Silva, Ronan, Alceu S. Britto, Fabricio Enembreck, Robert Sabourin, and Luiz S. Oliveira. "Selecting and Combining Classifiers Based on Centrality Measures." International Journal on Artificial Intelligence Tools 29, nos. 03–04 (2020): 2060004. http://dx.doi.org/10.1142/s0218213020600040.

Full text
Abstract:
Centrality measures have been helping to explain the behavior of objects, given their relations, in a wide variety of problems, from sociology to chemistry. This work considers these measures to assess the importance of every classifier belonging to an ensemble of classifiers, aiming to improve a Multiple Classifier System (MCS). Assessing the classifiers' importance by employing centrality measures inspired two different approaches: one for selecting classifiers and another for fusion. The selection approach, called Centrality Based Selection (CBS), adopts a trade-off between the classifier’s…
APA, Harvard, Vancouver, ISO, and other styles
6

Kumar, Amit, and Anand Shanker Tewari. "Risk Identification of Diabetic Macular Edema Using E-Adoption of Emerging Technology." International Journal of E-Adoption 14, no. 3 (2022): 1–20. http://dx.doi.org/10.4018/ijea.310000.

Full text
Abstract:
The accumulation of blood leaks on the retina is known as diabetic macular edema (DME), which can result in irreversible blindness. Early diagnosis and therapy can stop DME. This study presents an e-adoption of emerging technology, the RadioDense model, for detecting and classifying DME from retinal fundus images. The proposed model employs a modified version of DenseNet121, radiomics features, and a gradient boosting classifier. The authors evaluated many classifiers on the concatenated features. The efficacy of the classifier is determined by comparing each classifier's accuracy…
APA, Harvard, Vancouver, ISO, and other styles
7

Chuah, Joshua, Uwe Kruger, Ge Wang, Pingkun Yan, and Juergen Hahn. "Framework for Testing Robustness of Machine Learning-Based Classifiers." Journal of Personalized Medicine 12, no. 8 (2022): 1314. http://dx.doi.org/10.3390/jpm12081314.

Full text
Abstract:
There has been a rapid increase in the number of artificial intelligence (AI)/machine learning (ML)-based biomarker diagnostic classifiers in recent years. However, relatively little work has focused on assessing the robustness of these biomarkers, i.e., investigating the uncertainty of the AI/ML models that these biomarkers are based upon. This paper addresses this issue by proposing a framework to evaluate the already-developed classifiers with regard to their robustness by focusing on the variability of the classifiers’ performance and changes in the classifiers’ parameter values using…
APA, Harvard, Vancouver, ISO, and other styles
8

Zaidi, Ahmad Zairi, and Chun Yong Chong. "A Dynamic Selection Method for Touch-Based Continuous Authentication On Mobile Devices." Applied Mathematics and Computational Intelligence (AMCI) 13, no. 3 (2024): 26–65. http://dx.doi.org/10.58915/amci.v13i3.554.

Full text
Abstract:
Touch biometrics is one of the promising modalities for realising continuous authentication (CA) on mobile devices by distinguishing between touch strokes performed by legitimate and illegitimate users. While the benefit of the scheme is promising, the effectiveness of different classification methods is not thoroughly understood, and little consideration has been given to the dynamic selection of classifiers. In this paper, we propose a dynamic selection method to deal with the security and usability needs of touch-based CA. Instead of classifying all touch samples using the same classifier, the…
APA, Harvard, Vancouver, ISO, and other styles
9

Paillassa, M., E. Bertin, and H. Bouy. "MAXIMASK and MAXITRACK: Two new tools for identifying contaminants in astronomical images using convolutional neural networks." Astronomy & Astrophysics 634 (February 2020): A48. http://dx.doi.org/10.1051/0004-6361/201936345.

Full text
Abstract:
In this work, we propose two convolutional neural network classifiers for detecting contaminants in astronomical images. Once trained, our classifiers are able to identify various contaminants, such as cosmic rays, hot and bad pixels, persistence effects, satellite or plane trails, residual fringe patterns, nebulous features, saturated pixels, diffraction spikes, and tracking errors in images. They encompass a broad range of ambient conditions, such as seeing, image sampling, detector type, optics, and stellar density. The first classifier, MAXIMASK, performs semantic segmentation and generates…
APA, Harvard, Vancouver, ISO, and other styles
10

Zou, Jiangbo, Xiaokang Fu, Lingling Guo, Chunhua Ju, and Jingjing Chen. "Creating Ensemble Classifiers with Information Entropy Diversity Measure." Security and Communication Networks 2021 (May 23, 2021): 1–11. http://dx.doi.org/10.1155/2021/9953509.

Full text
Abstract:
Ensemble classifiers improve the classification accuracy by incorporating the decisions made by their component classifiers. Basically, there are two steps to create an ensemble classifier: one is to generate base classifiers and the other is to align the base classifiers to achieve maximum accuracy integrally. One of the major problems in creating ensemble classifiers is the classification accuracy and diversity of the component classifiers. In this paper, we propose an ensemble classifier generating algorithm to improve the accuracy of an ensemble classification and to maximize the diversity…
APA, Harvard, Vancouver, ISO, and other styles
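
For readers unfamiliar with diversity measures, the sketch below computes the standard non-pairwise entropy diversity measure from the ensemble literature (Kuncheva and Whitaker's E). The information-entropy measure proposed in the cited paper may differ in its details, and the toy vote matrix is invented for illustration.

```python
# Hedged sketch: the non-pairwise entropy diversity measure E from the ensemble
# literature, computed over a toy matrix of correct/incorrect base-classifier
# votes. The cited paper's information-entropy measure may be defined differently.
import numpy as np

def entropy_diversity(correct: np.ndarray) -> float:
    """correct: (n_samples, n_classifiers) boolean matrix, True where a base
    classifier got the sample right. Returns a value in [0, 1]; higher means
    the base classifiers disagree more (greater diversity)."""
    n_samples, n_classifiers = correct.shape
    votes = correct.sum(axis=1)                    # correct votes per sample
    denom = n_classifiers - int(np.ceil(n_classifiers / 2))
    return float(np.mean(np.minimum(votes, n_classifiers - votes)) / denom)

# Toy example: three base classifiers evaluated on five samples.
votes = np.array([[1, 1, 0],
                  [1, 0, 0],
                  [0, 1, 1],
                  [1, 1, 1],
                  [0, 0, 1]], dtype=bool)
print(entropy_diversity(votes))   # 0.8 -> fairly diverse ensemble
```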
More sources

Dissertations / Theses on the topic "Classifiers"

1

Cornforth, David. "Classifiers for machine intelligence." Thesis, University of Nottingham, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359599.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bermejo, Sánchez Sergio. "Learning with nearest neighbour classifiers." Doctoral thesis, Universitat Politècnica de Catalunya, 2000. http://hdl.handle.net/10803/6323.

Full text
Abstract:
Extraordinary doctoral award (ex aequo) in the field of Electronics and Telecommunications, 1999–2000 call. Nearest Neighbour (NN) classifiers are one of the most celebrated algorithms in machine learning. In recent years, interest in these methods has flourished again in several fields (including statistics, machine learning and pattern recognition) since, in spite of their simplicity, they prove to be powerful non-parametric classification systems in real-world problems. The present work is mainly devoted to the development of new learning algorithms for these classifiers…
APA, Harvard, Vancouver, ISO, and other styles
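
The basic rule underlying nearest neighbour classifiers is simple enough to sketch in a few lines. The toy data below are invented, and real NN systems of the kind studied in the thesis add refinements such as distance weighting, prototype editing, and metric learning.

```python
# Hedged sketch of a 1-nearest-neighbour classifier on invented toy data; the
# thesis studies refinements (prototype learning, editing, soft assignments).
import numpy as np

def nn_predict(X_train, y_train, X_query):
    # Euclidean distance from every query point to every training point.
    dists = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[np.argmin(dists, axis=1)]   # label of the closest neighbour

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [6.0, 5.5]])
y_train = np.array([0, 0, 1, 1])
queries = np.array([[0.5, 0.2], [5.4, 5.1]])
print(nn_predict(X_train, y_train, queries))   # expected: [0 1]
```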
3

Szeto, Ka-sinn Kitty (司徒嘉善). "The acquisition of Cantonese classifiers." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31219913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Billing, Jeffrey J. (Jeffrey Joel) 1979. "Learning classifiers from medical data." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8068.

Full text
Abstract:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (leaf 32). The goal of this thesis was to use machine-learning techniques to discover classifiers from a database of medical data. Through the use of two software programs, C5.0 and SVMLight, we analyzed a database of 150 patients who had been operated on by Dr. David Rattner of the Massachusetts General Hospital. C5.0 is an algorithm that learns decision trees from data while SVMLight learns support vector machines from the data…
APA, Harvard, Vancouver, ISO, and other styles
5

Tsampouka, Petroula. "Perceptron-like large margin classifiers." Thesis, University of Southampton, 2007. https://eprints.soton.ac.uk/264242/.

Full text
Abstract:
We address the problem of binary linear classification with emphasis on algorithms that lead to separation of the data with large margins. We motivate large margin classification from statistical learning theory and review two broad categories of large margin classifiers, namely Support Vector Machines which operate in a batch setting and Perceptron-like algorithms which operate in an incremental setting and are driven by their mistakes. We subsequently examine in detail the class of Perceptron-like large margin classifiers. The algorithms belonging to this category are further classified…
APA, Harvard, Vancouver, ISO, and other styles
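
A minimal sketch of the simplest member of this family, a perceptron that updates whenever a training sample's functional margin falls below a fixed threshold, may help fix ideas. The data, learning rate, and threshold are illustrative assumptions; the thesis analyses more refined variants.

```python
# Hedged sketch of a perceptron with a fixed margin threshold: an update is made
# whenever a sample's functional margin y * <w, x> falls below b. Data, learning
# rate, and threshold are illustrative assumptions.
import numpy as np

def margin_perceptron(X, y, b=1.0, eta=0.1, epochs=100):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):            # labels y are in {-1, +1}
            if yi * np.dot(w, xi) <= b:     # margin violated -> update
                w += eta * yi * xi
                updated = True
        if not updated:                     # every sample clears the margin
            break
    return w

X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -2.0], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])
print(margin_perceptron(X, y))
```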
6

Klautau, Aldebaro. "Speech recognition using discriminative classifiers /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3091208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Codecasa, Daniele. "Continuous time Bayesian network classifiers." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/80691.

Full text
Abstract:
Streaming data are relevant to finance, computer science, and engineering, while they are becoming increasingly important to medicine and biology. Continuous time Bayesian networks are designed for analyzing efficiently multivariate streaming data, exploiting the conditional independencies in continuous time homogeneous Markov processes. Continuous time Bayesian network classifiers are a specialization of continuous time Bayesian networks designed for multivariate streaming data classification when time duration of events matters and the class occurs in the future.
APA, Harvard, Vancouver, ISO, and other styles
8

Eldud, Omer Ahmed Abdelkarim. "Prediction of protein secondary structure using binary classificationtrees, naive Bayes classifiers and the Logistic Regression Classifier." Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/d1019985.

Full text
Abstract:
The secondary structure of proteins is predicted using various binary classifiers. The data are adopted from the RS126 database. The original data consist of protein primary and secondary structure sequences encoded using alphabetic letters; these are encoded into unary vectors comprising ones and zeros only. Different binary classifiers, namely naive Bayes, logistic regression and classification trees using hold-out and 5-fold cross validation, are trained using the encoded data. For each of the classifiers three classification tasks are considered, namely helix…
APA, Harvard, Vancouver, ISO, and other styles
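
The pipeline sketched in the abstract above (unary/one-hot encoding of residue windows, then several binary classifiers compared under cross-validation) can be mocked up as follows. The sequences, labels, window length, and classifier settings are placeholders, not the RS126 data or the thesis's exact setup.

```python
# Hedged sketch of the pipeline described above, on invented toy data: one-hot
# ("unary") encoding of residue windows, then naive Bayes, logistic regression,
# and a decision tree compared with 5-fold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot_window(window: str) -> np.ndarray:
    vec = np.zeros(len(window) * len(AMINO_ACIDS))
    for i, aa in enumerate(window):
        vec[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
    return vec

# Toy residue windows with random binary "helix vs. not-helix" labels.
rng = np.random.default_rng(0)
windows = ["".join(rng.choice(list(AMINO_ACIDS), 5)) for _ in range(200)]
X = np.array([one_hot_window(w) for w in windows])
y = rng.integers(0, 2, size=200)

for name, clf in [("naive Bayes", BernoulliNB()),
                  ("logistic regression", LogisticRegression(max_iter=1000)),
                  ("classification tree", DecisionTreeClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```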
9

Mak, David Lai-woon. "The acquisition of classifiers in Cantonese." Online version, 1991. http://ethos.bl.uk/OrderDetails.do?did=1&uin=uk.bl.ethos.293547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Haddad, Nicholas K. "Performance analysis of active sonar classifiers." Ohio: Ohio University, 1990. http://www.ohiolink.edu/etd/view.cgi?ohiou1173206177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Classifiers"

1

Woźniak, Michał. Hybrid Classifiers. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-40997-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kuncheva, Ludmila I. Combining Pattern Classifiers. John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118914564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

American Institute of Chemical Engineers, Equipment Testing Procedures Committee, ed. Particle size classifiers. 2nd ed. American Institute of Chemical Engineers, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Raudys, Šarūnas. Statistical and Neural Classifiers. Springer London, 2001. http://dx.doi.org/10.1007/978-1-4471-0359-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mache, Avazeh. Numeral classifiers in Persian. LINCOM Europa, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Oviedo, Alejandro. Classifiers in Venezuelan Sign Language. Signum, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

James, Alex Pappachen, ed. Deep Learning Classifiers with Memristive Networks. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-14524-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bachalo, W. D. Mass flux measurements of a high number density spray system using the phase Doppler particle analyzer. AIAA, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ashur, Cherry, ed. Determinatives in Assyriology. Ashur Cherry, York University, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hanna, William J. A linguistic analysis of classifiers in the Tai Lue language of Chiang Kham District, Prayao Province. Payap Research and Development Institute, Payap University, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Classifiers"

1

del Gobbo, Francesca. "Classifiers." In The Handbook of Chinese Linguistics. John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118584552.ch2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cheung, Candice Chi-Hang. "Classifiers." In Parts of Speech in Mandarin. Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0398-1_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Damarla, Thyagaraju. "Classifiers." In Battlefield Acoustics. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16036-8_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ross, Claudia, Jing-Heng Sheng Ma, Pei-Chia Chen, Baozhang He, and Meng Yeh. "Classifiers." In Modern Mandarin Chinese Grammar, 3rd ed. Routledge, 2023. http://dx.doi.org/10.4324/9781003335078-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ross, Claudia, Jing-Heng Sheng Ma, Baozhang He, Pei-Chia Chen, and Meng Yeh. "Classifiers." In Modern Mandarin Chinese Grammar Workbook, 3rd ed. Routledge, 2023. http://dx.doi.org/10.4324/9781003334521-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Woźniak, Michał. "Introduction." In Hybrid Classifiers. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-40997-4_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Woźniak, Michał. "Data and Knowledge Hybridization." In Hybrid Classifiers. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-40997-4_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Woźniak, Michał. "Classifier Hybridization." In Hybrid Classifiers. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-40997-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Woźniak, Michał. "Chosen Applications of Hybrid Classifiers." In Hybrid Classifiers. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-40997-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Woźniak, Michał. "Conclusions." In Hybrid Classifiers. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-40997-4_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Classifiers"

1

Chockler, Hana, and Joseph Y. Halpern. "Explaining Image Classifiers." In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/25.

Full text
Abstract:
We focus on explaining image classifiers, taking the work of Mothilal et al. 2021 (MMTS) as our point of departure. We observe that, although MMTS claim to be using the definition of explanation proposed by Halpern 2016, they do not quite do so. Roughly speaking, Halpern’s definition has a necessity clause and a sufficiency clause. MMTS replace the necessity clause by a requirement that, as we show, implies it. Halpern’s definition also allows agents to restrict the set of options considered. While these differences may seem minor, as we show, they can have a nontrivial impact on explanations.
APA, Harvard, Vancouver, ISO, and other styles
2

Mickus, Timothee, Stig-Arne Grönroos, and Joseph Attieh. "Isotropy, Clusters, and Classifiers." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-short.7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shih, Andy, Arthur Choi, and Adnan Darwiche. "A Symbolic Approach to Explaining Bayesian Network Classifiers." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/708.

Full text
Abstract:
We propose an approach for explaining Bayesian network classifiers, which is based on compiling such classifiers into decision functions that have a tractable and symbolic form. We introduce two types of explanations for why a classifier may have classified an instance positively or negatively and suggest algorithms for computing these explanations. The first type of explanation identifies a minimal set of the currently active features that is responsible for the current classification, while the second type of explanation identifies a minimal set of features whose current state (active or not)…
APA, Harvard, Vancouver, ISO, and other styles
4

Sharma, Saurabh, Yongqin Xian, Ning Yu, and Ambuj Singh. "Learning Prototype Classifiers for Long-Tailed Recognition." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/151.

Full text
Abstract:
The problem of long-tailed recognition (LTR) has received attention in recent years due to the fundamental power-law distribution of objects in the real-world. Most recent works in LTR use softmax classifiers that are biased in that they correlate classifier norm with the amount of training data for a given class. In this work, we show that learning prototype classifiers addresses the biased softmax problem in LTR. Prototype classifiers can deliver promising results simply using Nearest-Class-Mean (NCM), a special case where prototypes are empirical centroids. We go one step further and propose…
APA, Harvard, Vancouver, ISO, and other styles
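
The Nearest-Class-Mean special case mentioned in the abstract above is easy to sketch: represent each class by the centroid of its training features and assign queries to the nearest centroid. The toy data below are invented, and the cited paper goes beyond these empirical centroids by learning the prototypes.

```python
# Hedged sketch of a Nearest-Class-Mean (NCM) classifier on invented toy data;
# the cited paper learns prototypes rather than using empirical centroids.
import numpy as np

def fit_ncm(X, y):
    classes = np.unique(y)
    prototypes = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, prototypes

def predict_ncm(classes, prototypes, X_query):
    dists = np.linalg.norm(X_query[:, None, :] - prototypes[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]   # nearest class centroid

X = np.array([[0.0, 0.1], [0.2, 0.0], [3.0, 3.1], [2.9, 3.0], [3.1, 2.8]])
y = np.array([0, 0, 1, 1, 1])
classes, prototypes = fit_ncm(X, y)
print(predict_ncm(classes, prototypes, np.array([[0.1, 0.1], [3.0, 3.0]])))  # [0 1]
```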
5

Li, Xianneng, Wen He, and Kotaro Hirasawa. "Generalized classifier system: Evolving classifiers with cyclic conditions." In 2014 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2014. http://dx.doi.org/10.1109/cec.2014.6900457.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Caili, Takato Tatsumi, Hiroyuki Sato, Tim Kovacs, and Keiki Takadama. "Classifier generalization for comprehensive classifiers subsumption in XCS." In GECCO '18: Genetic and Evolutionary Computation Conference. ACM, 2018. http://dx.doi.org/10.1145/3205651.3208260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Khosravi, Pasha, Yitao Liang, YooJung Choi, and Guy Van den Broeck. "What to Expect of Classifiers? Reasoning about Logistic Regression with Missing Features." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/377.

Full text
Abstract:
While discriminative classifiers often yield strong predictive performance, missing feature values at prediction time can still be a challenge. Classifiers may not behave as expected under certain ways of substituting the missing values, since they inherently make assumptions about the data distribution they were trained on. In this paper, we propose a novel framework that classifies examples with missing features by computing the expected prediction with respect to a feature distribution. Moreover, we use geometric programming to learn a naive Bayes distribution that embeds a given logistic regression…
APA, Harvard, Vancouver, ISO, and other styles
8

Rahman, Md Sharifur, and Pratheepan Yogarajah. "Evaluating the Performance of Common Machine Learning Classifiers using various Validation Methods." In 12th International Conference on Artificial Intelligence, Soft Computing and Applications. Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.122304.

Full text
Abstract:
The selection of the proper classifier and the implementation of the proper training strategy have the most impact on the performance of machine learning classifiers. The amount and distribution of data used for training and validation is another crucial aspect of classifier performance. The goal of this study was to identify the optimal combination of classifiers and validation strategies for achieving the highest accuracy rate while testing models with a small dataset. To that end, five primary classifiers were examined with varying proportions of training data and validation procedures.
APA, Harvard, Vancouver, ISO, and other styles
9

Raal, David. "Classifiers." In the ACM international conference companion. ACM Press, 2011. http://dx.doi.org/10.1145/2048147.2048188.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ramos-Jiménez, Gonzalo, José del Campo-Ávila, and Rafael Morales-Bueno. "Hybridizing Ensemble Classifiers with Individual Classifiers." In 2009 Ninth International Conference on Intelligent Systems Design and Applications. IEEE, 2009. http://dx.doi.org/10.1109/isda.2009.148.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Classifiers"

1

KAB LABS INC SAN DIEGO CA. Feature Set Evaluation for Classifiers. Defense Technical Information Center, 1989. http://dx.doi.org/10.21236/ada226903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

KAB LABS INC SAN DIEGO CA. Feature Set Evaluation for Classifiers. Defense Technical Information Center, 1989. http://dx.doi.org/10.21236/ada226905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hoang, Vanessa, and Mateusz Monterial. Optimizing Classifiers for Radionuclide Identification. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1769169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Vondrick, Carl, Hamed Pirsiavash, Aude Oliva, and Antonio Torralba. Acquiring Visual Classifiers from Human Imagination. Defense Technical Information Center, 2014. http://dx.doi.org/10.21236/ada612443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Perone, Michael P., and Nathan Intrator. Unsupervised Splitting Rules for Neural Tree Classifiers. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada264961.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gertz, E. M., and J. D. Griffin. Support vector machine classifiers for large data sets. Office of Scientific and Technical Information (OSTI), 2006. http://dx.doi.org/10.2172/881587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dakin, Gordon, and Sankar Virdhagriswaran. Misleading Information Detection Through Probabilistic Decision Tree Classifiers. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada406823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Punyakanok, Vasin, Dan Roth, Wen-tau Yih, and Dav Zimak. Semantic Role Labeling Via Generalized Inference Over Classifiers. Defense Technical Information Center, 2004. http://dx.doi.org/10.21236/ada457895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Debroux, Patrick. Analysis Methodology of Image Classifiers in Stressed Environments. DEVCOM Analysis Center, 2022. http://dx.doi.org/10.21236/ad1180069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ostendorf, M., L. Atlas, R. Fish, O. Cetin, S. Sukittanon, and G. D. Bernard. Joint Use of Dynamical Classifiers and Ambiguity Plane Features. Defense Technical Information Center, 2001. http://dx.doi.org/10.21236/ada436824.

Full text
APA, Harvard, Vancouver, ISO, and other styles