Academic literature on the topic 'Feature selection'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Feature selection.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of an academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Feature selection"
Huber, Florian, and Volker Steinhage. "Conditional Feature Selection: Evaluating Model Averaging When Selecting Features with Shapley Values." Geomatics 4, no. 3 (August 8, 2024): 286–310. http://dx.doi.org/10.3390/geomatics4030016.
Usha, P., and J. G. R. Sathiaseelan. "Enhanced Filtrate Feature Selection Algorithm for Feature Subset Generation." Indian Journal Of Science And Technology 17, no. 29 (July 31, 2024): 3002–11. http://dx.doi.org/10.17485/ijst/v17i29.2127.
Wu, Xindong, Kui Yu, Wei Ding, Hao Wang, and Xingquan Zhu. "Online Feature Selection with Streaming Features." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 5 (May 2013): 1178–92. http://dx.doi.org/10.1109/tpami.2012.197.
Li, Jundong, Kewei Cheng, Suhang Wang, Fred Morstatter, Robert P. Trevino, Jiliang Tang, and Huan Liu. "Feature Selection." ACM Computing Surveys 50, no. 6 (January 12, 2018): 1–45. http://dx.doi.org/10.1145/3136625.
Sutherland, Stuart. "Feature selection." Nature 392, no. 6674 (March 1998): 350. http://dx.doi.org/10.1038/32817.
Patel, Damodar, and Amit Kumar Saxena. "Feature Selection in High Dimension Datasets using Incremental Feature Clustering." Indian Journal Of Science And Technology 17, no. 32 (August 24, 2024): 3318–26. http://dx.doi.org/10.17485/ijst/v17i32.2077.
Wang, Gang, Yang Zhao, Jiasi Zhang, and Yongjie Ning. "A Novel End-To-End Feature Selection and Diagnosis Method for Rotating Machinery." Sensors 21, no. 6 (March 15, 2021): 2056. http://dx.doi.org/10.3390/s21062056.
Fahrudy, Dony, and Shofwatul 'Uyun. "Classification of Student Graduation using Naïve Bayes by Comparing between Random Oversampling and Feature Selections of Information Gain and Forward Selection." JOIV : International Journal on Informatics Visualization 6, no. 4 (December 31, 2022): 798. http://dx.doi.org/10.30630/joiv.6.4.982.
Kar Hoou, Hui, Ooi Ching Sheng, Lim Meng Hee, and Leong Mohd Salman. "Feature selection tree for automated machinery fault diagnosis." MATEC Web of Conferences 255 (2019): 02004. http://dx.doi.org/10.1051/matecconf/201925502004.
Heriyanto, Heriyanto, and Dyah Ayu Irawati. "Comparison of Mel Frequency Cepstral Coefficient (MFCC) Feature Extraction, With and Without Framing Feature Selection, to Test the Shahada Recitation." RSF Conference Series: Engineering and Technology 1, no. 1 (December 23, 2021): 335–54. http://dx.doi.org/10.31098/cset.v1i1.395.
Full textDissertations / Theses on the topic "Feature selection"
Zheng, Ling. "Feature grouping-based feature selection." Thesis, Aberystwyth University, 2017. http://hdl.handle.net/2160/41e7b226-d8e1-481f-9c48-4983f64b0a92.
Dreyer, Sigve. "Evolutionary Feature Selection." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-24225.
Doquet, Guillaume. "Agnostic Feature Selection." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS486.
Full textWith the advent of Big Data, databases whose size far exceed the human scale are becoming increasingly common. The resulting overabundance of monitored variables (friends on a social network, movies watched, nucleotides coding the DNA, monetary transactions...) has motivated the development of Dimensionality Reduction (DR) techniques. A DR algorithm such as Principal Component Analysis (PCA) or an AutoEncoder typically combines the original variables into new features fewer in number, such that most of the information in the dataset is conveyed by the extracted feature set.A particular subcategory of DR is formed by Feature Selection (FS) methods, which directly retain the most important initial variables. How to select the best candidates is a hot topic at the crossroad of statistics and Machine Learning. Feature importance is usually inferred in a supervised context, where variables are ranked according to their usefulness for predicting a specific target feature.The present thesis focuses on the unsupervised context in FS, i.e. the challenging situation where no prediction goal is available to help assess feature relevance. Instead, unsupervised FS algorithms usually build an artificial classification goal and rank features based on their helpfulness for predicting this new target, thus falling back on the supervised context. Additionally, the efficiency of unsupervised FS approaches is typically also assessed in a supervised setting.In this work, we propose an alternate model combining unsupervised FS with data compression. Our Agnostic Feature Selection (AgnoS) algorithm does not rely on creating an artificial target and aims to retain a feature subset sufficient to recover the whole original dataset, rather than a specific variable. As a result, AgnoS does not suffer from the selection bias inherent to clustering-based techniques.The second contribution of this work( Agnostic Feature Selection, G. Doquet & M. Sebag, ECML PKDD 2019) is to establish both the brittleness of the standard supervised evaluation of unsupervised FS, and the stability of the new proposed AgnoS
Sima, Chao. "Small sample feature selection." Texas A&M University, 2003. http://hdl.handle.net/1969.1/5796.
Coelho, Frederico Gualberto Ferreira. "Semi-supervised feature selection." Universidade Federal de Minas Gerais, 2013. http://hdl.handle.net/1843/BUOS-97NJ9S.
Full textComo a aquisição de dados tem se tornado relativamente mais fácil e barata, o conjunto de dados tem adquirido dimensões extremamente grandes, tanto em relação ao número de variáveis, bem como em relação ao número de instâncias. Contudo, o mesmo não ocorre com os rótulos de cada instância. O custo para se obter estes rótulos é, via de regra, muito alto, e por causa disto, dados não rotulados são a grande maioria, principalmente quando comparados com a quanti-dade de dados rotulados. A utilização destes dados requer cuidados especiais uma vez que vários problemas surgem com o aumento da dimensionalidade e com a escassez de rótulos. Reduzir a dimensão dos dados é então uma necessidade primordial. Em meio às suas características mais relevantes, usualmente encontramos variáveis redundantes e mesmo irrelevantes, que podem e devem ser eliminadas. Na procura destas variáveis, ao desprezar os dados não rotulados, implementando-se apenas estratégias supervisionadas, abrimos mão de informações estruturais que podem ser úteis. Da mesma forma, desprezar os dados rotulados implementando-se apenas métodos não supervisionados é igualmente disperdício de informação. Neste contexto, a aplicação de uma abordagem semi-supervisionada é bastante apropriada, onde pode-se tentar aproveitar o que cada tipo de dado tem de melhor a oferecer. Estamos trabalhando no problema de seleção de características semi-supervisionada através de duas abordagens distintas, mas que podem, eventualmente se complementarem mais à frente. O problema pode ser abordado num contexto de agrupamento de características, agrupando variáveis semelhantes e desprezando as irrelevantes. Por outro lado, podemos abordar o problema através de uma metodologia multiobjetiva, uma vez que temos argumentos estabelecendo claramente esta sua natureza multiobjetiva. Na primeira abordagem, uma medida de semelhança capaz de levar em consideração tanto os dados rotulados como os não rotulados, baseado na informação mútua, está sendo desenvolvida, bem como, um critério, baseado nesta medida, para agrupamento e eliminação de variáveis. Também o princípio da homogeneidade entre os rótulos e os clusters de dados é explorado e dois métodos semissupervisionados de seleção de características são desenvolvidos. Finalmente um estimador de informaçã mútua para um conjunto misto de variáveis discretas e contínuas é desenvolvido e constitue uma contribuição secundária do trabalho. Na segunda abordagem, a proposta é tentar resolver o problema de seleção de características e de aproximação de funções ao mesmo tempo. O método proposto inclue a consideração de normas diferentes para cada camada de uma rede MLP, pelo treinamento independente de cada camada e pela definição de funções objetivo que sejam capazes de maximizar algum índice de relevância das variáveis.
Garnes, Øystein Løhre. "Feature Selection for Text Categorisation." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9017.
Text categorization is the task of discovering the category or class that text documents belong to, in other words spotting the correct topic for text documents. While many machine learning schemes for building automatic classifiers exist today, these are typically resource demanding and do not always achieve the best results when given the whole contents of the documents. A popular solution to these problems is called feature selection. The features (e.g. terms) in a document collection are given weights based on a simple scheme and then ranked by these weights. Next, each document is represented using only the top-ranked features, typically only a few percent of the features. The classifier is then built in considerably less time, and accuracy might even improve. In situations where documents can belong to one of a series of categories, one can either build a multi-class classifier and use one feature set for all categories, or split the problem into a series of binary categorization tasks (deciding whether documents belong to a category or not) and create one ranked feature subset for each category/classifier. Many feature selection metrics have been suggested over the last decades, including supervised methods that make use of a manually pre-categorized set of training documents, and unsupervised methods that need only training documents of the same type or collection as the one to be categorized. While many of these look promising, there has been a lack of large-scale comparison experiments. Also, several methods have been proposed in the last two years. Moreover, most evaluations are conducted on a set of binary tasks instead of a multi-class task, as this often gives better results, although multi-class categorization with a joint feature set is often used in operational environments. In this report, we present results from the comparison of 16 feature selection methods (in addition to random selection) using various feature set sizes. Of these, 5 were unsupervised and 11 were supervised. All methods are tested on both a Naive Bayes (NB) classifier and a Support Vector Machine (SVM) classifier. We conducted multi-class experiments using a collection with 20 non-overlapping categories, and each feature selection method produced feature sets common to all the categories. We also combined feature selection methods and evaluated their joint efforts. We found that the classical supervised methods had the best performance, including Chi Square, Information Gain and Mutual Information. The Chi Square variant GSS coefficient was also among the top performers. Odds Ratio showed excellent performance for NB, but not for SVM. The three unsupervised methods Collection Frequency, Collection Frequency Inverse Document Frequency and Term Frequency Document Frequency all showed performances close to the best group. The Bi-Normal Separation metric produced excellent results for the smallest feature subsets. The weirdness factor performed several times better than random selection, but was not among the top-performing group. Some combination experiments achieved better results than each method alone, but the majority did not. The top performers Chi Square and GSS coefficient classified more documents when used together than alone. Four of the five combinations that showed an increase in performance included the BNS metric.
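The rank-and-truncate procedure the abstract describes maps directly onto standard library components. The sketch below keeps the top chi-square scored terms before training a multi-class Naive Bayes classifier; the 20 Newsgroups corpus and the cut-off of 2,000 features are placeholder choices for illustration, not the thesis's experimental setup.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Bag-of-words -> keep the 2,000 highest chi-square terms -> multi-class NB.
text_clf = make_pipeline(
    CountVectorizer(stop_words="english"),
    SelectKBest(chi2, k=2000),
    MultinomialNB(),
)

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")
text_clf.fit(train.data, train.target)
print("accuracy:", text_clf.score(test.data, test.target))
```

Swapping chi2 for mutual_info_classif gives a Mutual Information ranking, and wrapping the classifier in a one-vs-rest scheme with per-category selectors corresponds to the binary-task setting the abstract contrasts with the multi-class one.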
Pradhananga, Nripendra. "Effective Linear-Time Feature Selection." The University of Waikato, 2007. http://hdl.handle.net/10289/2315.
Cheng, Iunniang. "Hybrid Methods for Feature Selection." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1244.
Full textAthanasakis, D. "Feature selection in computational biology." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1432346/.
Full textSarkar, Saurabh. "Feature Selection with Missing Data." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1378194989.
Full textBooks on the topic "Feature selection"
Liu, Huan, and Hiroshi Motoda, eds. Feature Extraction, Construction and Selection. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5725-8.
Saunders, Craig, Marko Grobelnik, Steve Gunn, and John Shawe-Taylor, eds. Subspace, Latent Structure and Feature Selection. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11752790.
Bolón-Canedo, Verónica, Noelia Sánchez-Maroño, and Amparo Alonso-Betanzos. Feature Selection for High-Dimensional Data. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21858-8.
Wan, Cen. Hierarchical Feature Selection for Knowledge Discovery. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-97919-9.
Full text1958-, Liu Huan, ed. Spectral feature selection for data mining. Boca Raton, FL: CRC Press, 2012.
Stańczyk, Urszula, and Lakhmi C. Jain, eds. Feature Selection for Data and Pattern Recognition. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-45620-0.
Bolón-Canedo, Verónica, and Amparo Alonso-Betanzos. Recent Advances in Ensembles for Feature Selection. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-90080-3.
Full textLu, Rui. Feature Selection for High Dimensional Causal Inference. [New York, N.Y.?]: [publisher not identified], 2020.
Liu, Huan, and Hiroshi Motoda. Feature Selection for Knowledge Discovery and Data Mining. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5689-3.
Book chapters on the topic "Feature selection"
Verma, Nishchal K., and Al Salour. "Feature Selection." In Studies in Systems, Decision and Control, 175–200. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0512-6_5.
De Silva, Anthony Mihirana, and Philip H. W. Leong. "Feature Selection." In SpringerBriefs in Applied Sciences and Technology, 13–24. Singapore: Springer Singapore, 2015. http://dx.doi.org/10.1007/978-981-287-411-5_2.
Bolón-Canedo, Verónica, and Amparo Alonso-Betanzos. "Feature Selection." In Intelligent Systems Reference Library, 13–37. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-90080-3_2.
García, Salvador, Julián Luengo, and Francisco Herrera. "Feature Selection." In Intelligent Systems Reference Library, 163–93. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10247-4_7.
Brank, Janez, Dunja Mladenić, Marko Grobelnik, Huan Liu, Peter A. Flach, Gemma C. Garriga, and Hannu Toivonen. "Feature Selection." In Encyclopedia of Machine Learning, 402–6. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_306.
Sun, Chenglei. "Feature Selection." In Encyclopedia of Systems Biology, 737. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_431.
Cornejo, Roger. "Feature Selection." In Dynamic Oracle Performance Analytics, 79–89. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4137-0_5.
Elgendi, Mohamed. "Feature Selection." In PPG Signal Analysis, 165–93. Boca Raton, FL: CRC Press, 2020. http://dx.doi.org/10.1201/9780429449581-8.
Wright, Marvin N. "Feature Selection." In Applied Machine Learning Using mlr3 in R, 146–60. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003402848-6.
Ros, Frederic, and Rabia Riad. "Feature selection." In Unsupervised and Semi-Supervised Learning, 27–44. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48743-9_3.
Conference papers on the topic "Feature selection"
Duan, Xuan, Songbai Liu, Junkai Ji, Lingjie Li, Qiuzhen Lin, and Kay Chen Tan. "Evolutionary Multiobjective Feature Selection Assisted by Unselected Features." In 2024 IEEE Congress on Evolutionary Computation (CEC), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/cec60901.2024.10611992.
Wang, Juanyan, and Mustafa Bilgic. "Context-Aware Feature Selection and Classification." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/480.
Chaudhary, Seema, Sangeeta Kakarwal, and Chitra Gaikwad. "Feature Selection." In DSMLAI '21': International Conference on Data Science, Machine Learning and Artificial Intelligence. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3484824.3484881.
Li, Haiguang, Xindong Wu, Zhao Li, and Wei Ding. "Group Feature Selection with Streaming Features." In 2013 IEEE International Conference on Data Mining (ICDM). IEEE, 2013. http://dx.doi.org/10.1109/icdm.2013.137.
Wu, Junfang, and Chao Li. "Feature Selection Based on Features Unit." In 2017 4th International Conference on Information Science and Control Engineering (ICISCE). IEEE, 2017. http://dx.doi.org/10.1109/icisce.2017.76.
Nisar, Shibli, and Muhammad Tariq. "Intelligent feature selection using hybrid based feature selection method." In 2016 Sixth International Conference on Innovative Computing Technology (INTECH). IEEE, 2016. http://dx.doi.org/10.1109/intech.2016.7845025.
Li, Jirong. "Feature Selection Based on Correlation between Fuzzy Features and Optimal Fuzzy-Valued Feature Subset Selection." In 2008 Fourth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP). IEEE, 2008. http://dx.doi.org/10.1109/iih-msp.2008.292.
Joshi, Alok A., Peter H. Meckl, Galen B. King, and Kristofer Jennings. "Information-Theoretic Sensor Subset Selection: Application to Signal-Based Fault Isolation in Diesel Engines." In ASME 2006 International Mechanical Engineering Congress and Exposition. ASMEDC, 2006. http://dx.doi.org/10.1115/imece2006-15903.
Fei, Hongliang, Brian Quanz, and Jun Huan. "Regularization and feature selection for networked features." In the 19th ACM international conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1871437.1871756.
Xiao, Di, and Junfeng Zhang. "Importance Degree of Features and Feature Selection." In 2009 Sixth International Conference on Fuzzy Systems and Knowledge Discovery. IEEE, 2009. http://dx.doi.org/10.1109/fskd.2009.625.
Full textReports on the topic "Feature selection"
Sisto, A., and C. Kamath. Ensemble Feature Selection in Scientific Data Analysis. Office of Scientific and Technical Information (OSTI), September 2013. http://dx.doi.org/10.2172/1097710.
Seo, Young-Woo, Anupriya Ankolekar, and Katia Sycara. Feature Selection for Extracting Semantically Rich Words. Fort Belvoir, VA: Defense Technical Information Center, March 2004. http://dx.doi.org/10.21236/ada597268.
Plinski, M. J. License Application Design Selection Feature Report: Ceramic Coatings. Office of Scientific and Technical Information (OSTI), March 1999. http://dx.doi.org/10.2172/762894.
Massari, J. R. License Application Design Selection Feature Report: Additives and Fillers Design Feature 19. Office of Scientific and Technical Information (OSTI), March 1999. http://dx.doi.org/10.2172/762917.
Tang, J. S. License Application Design Selection Feature Report: Waste Package Self Shielding Design Feature 13. Office of Scientific and Technical Information (OSTI), March 2000. http://dx.doi.org/10.2172/752783.
Silapachote, Piyanuch, Deepak R. Karuppiah, and Allen R. Hanson. Feature Selection Using Adaboost for Face Expression Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada438800.
Chen, Maximillian Gene, Aleksander Bapst, Kirk Busche, Minh Do, Laura E. Matzen, Laura A. McNamara, and Raymond Yeh. Feature Selection and Inferential Procedures for Video Data. Office of Scientific and Technical Information (OSTI), September 2017. http://dx.doi.org/10.2172/1494165.
Pirozzo, David M., Philip A. Frederick, Shawn Hunt, Bernard Theisen, and Mike Del Rose. Spectrally Queued Feature Selection for Robotic Visual Odometery. Fort Belvoir, VA: Defense Technical Information Center, November 2010. http://dx.doi.org/10.21236/ada535663.
Bennett, Scott M. License Application Design Selection Feature Report: Canistered Assemblies. Office of Scientific and Technical Information (OSTI), March 1999. http://dx.doi.org/10.2172/759933.
Nitti, D. A. License Application Design Selection Feature Report: Rod Consolidation. Office of Scientific and Technical Information (OSTI), June 1999. http://dx.doi.org/10.2172/762903.