Academic literature on the topic 'Hybrid deep-STTF feature learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Hybrid deep-STTF feature learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Hybrid deep-STTF feature learning"

1

Keskar, Margesh, and Dhananjay D. Maktedar. "Hybrid deep-spatio textural feature model for medicinal plant disease classification." Indonesian Journal of Electrical Engineering and Computer Science 30, no. 1 (2023): 356–65. https://doi.org/10.11591/ijeecs.v30.i1.pp356-365.

Full text
Abstract:
The high-pace rise in the demands of medicinal plants towards pharmaceutical significances as well as the different ayurvedic or herbal remedials have forced agro-industries However, rising plant disease cases have limited the cumulative growth and hence both volumetric production as well as quality of medicine. In this paper a first of its kind evolutionary computing driven ROI-specific hybrid deep-spatio temporal textural feature learning model is developed for medicinal plant disease detection (HDST-MPD). To alleviate any possible class-imbalance problem, HDST-MPD model at first applied firefly heuristic driven fuzzy C-means clustering to retrieve ROI-specific RGB regions. Subsequently, to exploit maximum possible deep spatiotemporal textural features, it applied gray-level co-occurrence matrix (GLCM) and AlexNet transferable network. Here, the use of multiple GLCM features helped in exploiting textural feature distribution, while AlexNet deep model yielded high-dimensional features. These deep-spatio temporal textural feature (deep-STTF) features were fused together to yield a composite vector, which was trained over random forest ensemble to perform two-class classification to classify each sample medicinal image as normal or diseased. Depth performance assessment confirmed that the proposed model yields accuracy of 98.97%, precision 99.42%, recall 98.89%, F-measure 99.15%, and equal error rate of 1.03%, signifying its robustness towards real-time medicinal plant disease detection and classification.
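For readers who want to see how such a fusion step fits together in practice, below is a minimal, illustrative Python sketch of a deep-STTF-style pipeline: GLCM texture statistics concatenated with AlexNet deep features and classified by a random forest. This is not the authors' code; the ROI extraction via firefly-driven fuzzy C-means is omitted, and the GLCM property set, preprocessing, and hyperparameters are assumptions.

```python
# Illustrative deep-STTF-style feature fusion (assumed details, not the authors' implementation).
import numpy as np
import torch
from torchvision import models, transforms
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(gray_img_uint8):
    """Textural statistics from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(gray_img_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

alexnet = models.alexnet(weights="DEFAULT").eval()
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def alexnet_features(rgb_img):
    """4096-d deep features from AlexNet's penultimate fully connected layer."""
    with torch.no_grad():
        x = to_tensor(rgb_img).unsqueeze(0)
        f = alexnet.avgpool(alexnet.features(x)).flatten(1)
        return alexnet.classifier[:-1](f).squeeze(0).numpy()

def fused_vector(rgb_img, gray_img_uint8):
    """Composite deep-STTF vector: texture statistics + deep CNN features."""
    return np.hstack([glcm_features(gray_img_uint8), alexnet_features(rgb_img)])

def fit_deep_sttf_classifier(X, y):
    """Two-class classifier (normal vs. diseased) over the fused vectors."""
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```

The design point mirrored here is that handcrafted texture statistics and high-dimensional transferable CNN features are simply concatenated into one composite vector before the ensemble classifier sees them.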
APA, Harvard, Vancouver, ISO, and other styles
2

Keskar, Margesh, and Dhananjay D. Maktedar. "Hybrid deep-spatio textural feature model for medicinal plant disease classification." Indonesian Journal of Electrical Engineering and Computer Science 30, no. 1 (2023): 356. http://dx.doi.org/10.11591/ijeecs.v30.i1.pp356-365.

Full text
Abstract:
The high-pace rise in the demands of medicinal plants towards pharmaceutical significances as well as the different ayurvedic or herbal remedials have forced agro-industries However, rising plant disease cases have limited the cumulative growth and hence both volumetric production as well as quality of medicine. In this paper a first of its kind evolutionary computing driven ROI-specific hybrid deep-spatio temporal textural feature learning model is developed for medicinal plant disease detection (HDST-MPD). To alleviate any possible class-imbalance problem, HDST-MPD model at first applied firefly heuristic driven fuzzy C-means clustering to retrieve ROI-specific RGB regions. Subsequently, to exploit maximum possible deep spatiotemporal textural features, it applied gray-level co-occurrence matrix (GLCM) and AlexNet transferable network. Here, the use of multiple GLCM features helped in exploiting textural feature distribution, while AlexNet deep model yielded high-dimensional features. These deep-spatio temporal textural feature (deep-STTF) features were fused together to yield a composite vector, which was trained over random forest ensemble to perform two-class classification to classify each sample medicinal image as normal or diseased. Depth performance assessment confirmed that the proposed model yields accuracy of 98.97%, precision 99.42%, recall 98.89%, F-measure 99.15%, and equal error rate of 1.03%, signifying its robustness towards real-time medicinal plant disease detection and classification.
APA, Harvard, Vancouver, ISO, and other styles
3

Devaraj Verma C., Shruthishree S. H., and Harshvardhan Tiwari. "AlexResNet+: A Deep Hybrid Featured Machine Learning Model for Breast Cancer Tissue Classification." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (2021): 2420–38. http://dx.doi.org/10.17762/turcomat.v12i6.5686.

Full text
Abstract:
The exponential rise in cancer diseases, primarily the breast cancer has alarmed academia-industry to achieve more efficient and reliable breast cancer tissue identification and classification. Unlike classical machine learning approaches which merely focus on enhancing classification efficiency, in this paper the emphasis was made on extracting multiple deep features towards breast cancer diagnosis. To achieve it, in this paper A Deep Hybrid Featured Machine Learning Model for Breast Cancer Tissue Classification named, AlexResNet+ was developed. We used two well known and most efficient deep learning models, AlexNet and shorted ResNet50 deep learning concepts for deep feature extraction. To retain high dimensional deep features while retaining optimal computational efficiency, we applied AlexNet with five convolutional layers, and three fully connected layers, while ResNet50 was applied with modified layered architectures. Retrieving the distinct deep features from AlexNet and ResNet deep learning models, we obtained the amalgamated feature set which were applied as input for support vector machine with radial basis function (SVM-RBF) for two-class classification. To assess efficacy of the different feature set, performances were obtained for AlexNet, shorted ResNet50 and hybrid features distinctly. The simulation results over DDMS mammogram breast cancer tissue images revealed that the proposed hybrid deep features (AlexResNet+) based model exhibits the highest classification accuracy of 95.87%, precision 0.9760, sensitivity 1.0, specificity 0.9621, F-Measure 0.9878 and AUC of 0.960.
APA, Harvard, Vancouver, ISO, and other styles
4

Setiadi, De Rosal Ignatius Moses, Ajib Susanto, Kristiawan Nugroho, Ahmad Rofiqul Muslikh, Arnold Adimabua Ojugo, and Hong-Seng Gan. "Rice Yield Forecasting Using Hybrid Quantum Deep Learning Model." Computers 13, no. 8 (2024): 191. http://dx.doi.org/10.3390/computers13080191.

Full text
Abstract:
In recent advancements in agricultural technology, quantum mechanics and deep learning integration have shown promising potential to revolutionize rice yield forecasting methods. This research introduces a novel Hybrid Quantum Deep Learning model that leverages the intricate processing capabilities of quantum computing combined with the robust pattern recognition prowess of deep learning algorithms such as Extreme Gradient Boosting (XGBoost) and Bidirectional Long Short-Term Memory (Bi-LSTM). Bi-LSTM networks are used for temporal feature extraction and quantum circuits for quantum feature processing. Quantum circuits leverage quantum superposition and entanglement to enhance data representation by capturing intricate feature interactions. These enriched quantum features are combined with the temporal features extracted by Bi-LSTM and fed into an XGBoost regressor. By synthesizing quantum feature processing and classical machine learning techniques, our model aims to improve prediction accuracy significantly. Based on measurements of mean square error (MSE), the coefficient of determination (R²), and mean average error (MAE), the results are 1.191621 × 10⁻⁵, 0.999929482, and 0.001392724, respectively. This value is so close to perfect that it helps make essential decisions in global agricultural planning and management.
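As a rough, hedged illustration of how such hybrid pieces can be wired together, the sketch below pairs Bi-LSTM temporal features (Keras) with a small angle-embedding circuit (PennyLane) standing in for the quantum feature processing, and feeds the fused vector to an XGBoost regressor. The circuit, layer sizes, epochs, and the PCA reduction to the qubit count are assumptions, not the published model; tensorflow, pennylane, and xgboost are assumed dependencies.

```python
# Hypothetical hybrid quantum-classical regression sketch (assumed architecture).
import numpy as np
import pennylane as qml
from tensorflow import keras
from sklearn.decomposition import PCA
from xgboost import XGBRegressor

T, F, N_QUBITS = 12, 4, 4                     # timesteps, features per step, qubits (assumed)

dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev)
def quantum_map(x):
    """Angle-embed a small feature vector and read out Pauli-Z expectations."""
    qml.AngleEmbedding(x, wires=range(N_QUBITS))
    for i in range(N_QUBITS - 1):
        qml.CNOT(wires=[i, i + 1])            # simple entangling chain
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

def build_bilstm():
    inp = keras.Input(shape=(T, F))
    h = keras.layers.Bidirectional(keras.layers.LSTM(16))(inp)   # temporal features
    out = keras.layers.Dense(1)(h)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model, keras.Model(inp, h)

def hybrid_forecast(X_seq, y):
    model, encoder = build_bilstm()
    model.fit(X_seq, y, epochs=20, verbose=0)
    temporal = encoder.predict(X_seq, verbose=0)                  # (n, 32) temporal features
    reduced = PCA(n_components=N_QUBITS).fit_transform(temporal)  # shrink to qubit count
    quantum = np.array([quantum_map(row) for row in reduced])     # (n, 4) quantum features
    fused = np.hstack([temporal, quantum])
    return XGBRegressor(n_estimators=300).fit(fused, y)
```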
APA, Harvard, Vancouver, ISO, and other styles
5

Yogesh, N., Purohit Shrinivasacharya, Nagaraj Naik, and B. M. Vikranth. "Chronic kidney Disease Classification through Hybrid Feature Selection and Ensemble Deep Learning." International Journal of Statistics in Medical Research 14 (March 3, 2025): 109–17. https://doi.org/10.6000/1929-6029.2025.14.11.

Full text
Abstract:
Diagnosing and treating at-risk patients for chronic kidney disease (CKD) relies heavily on accurately classifying the disease. The use of deep learning models in healthcare research is receiving much interest due to recent developments in the field. CKD has many features; however, only some features contribute weightage for the classification task. Therefore, it is required to eliminate the irrelevant feature before applying the classification task. This paper proposed a hybrid feature selection method by combining the two feature selection techniques: the Boruta and the Recursive Feature Elimination (RFE) method. The features are ranked according to their importance for CKD classification using the Boruta algorithm and refined feature set using the RFE, which recursively eliminates the least important features. The hybrid feature selection method removes the feature with a low recursive score. Later, selected features are given input to the proposed ensemble deep learning method for classification. The experimental ensemble deep learning model with feature selection is compared to Support Vector Machine (SVM), Logistic Regression (LR), and Random Forest (RF) models with and without feature selection. When feature selection is used, the ensemble model improves accuracy by 2%. Experimental results found that these features, age, pus cell clumps, bacteria, and coronary artery disease, do not contribute much to accurate classification tasks. Accuracy, precision, and recall are used to evaluate the ensemble deep learning model.
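A minimal sketch of the hybrid selection idea described above, assuming the boruta package and scikit-learn: features confirmed by Boruta are intersected with the RFE-refined subset, and the reduced set feeds a classifier ensemble. The thresholds, estimators, and the soft-voting ensemble (a classical stand-in for the paper's ensemble deep learning model) are illustrative choices.

```python
# Illustrative Boruta + RFE hybrid feature selection (assumed parameters).
from boruta import BorutaPy
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

def hybrid_select(X, y, n_keep=12):
    """Keep features confirmed by Boruta AND retained by RFE (X, y as numpy arrays)."""
    rf = RandomForestClassifier(n_estimators=200, max_depth=5, n_jobs=-1)
    boruta = BorutaPy(rf, n_estimators="auto", random_state=0)
    boruta.fit(X, y)
    rfe = RFE(RandomForestClassifier(n_estimators=200), n_features_to_select=n_keep)
    rfe.fit(X, y)
    return boruta.support_ & rfe.support_     # boolean mask over the original features

def fit_ensemble(X, y, mask):
    """Soft-voting ensemble on the reduced feature set (stand-in for the deep ensemble)."""
    clf = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(n_estimators=200)),
                    ("gb", GradientBoostingClassifier())],
        voting="soft")
    return clf.fit(X[:, mask], y)
```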
APA, Harvard, Vancouver, ISO, and other styles
6

Hase, Sudeep K., and Rashmi Soni. "Hybrid Feature Selection on Social Media Dataset for Sentiment Classification using Deep Learning Techniques." Communications on Applied Nonlinear Analysis 32, no. 9s (2025): 1899–918. https://doi.org/10.52783/cana.v32.4362.

Full text
Abstract:
Sentiment classification involves determining the sentiment expressed in text, such as positive, negative, or neutral, but social media data presents challenges due to its high dimensionality, noise, and unstructured nature. This study proposes a novel sentiment classification approach by combining hybrid feature selection methods with deep learning techniques. Social media platforms generate vast amounts of data daily, which is often noisy, redundant, and irrelevant for sentiment analysis. Hybrid feature selection techniques, which integrate filter and wrapper-based methods, assist in reducing the feature space while retaining the most informative features. By applying deep learning models, such as recurrent neural networks (RNNs) and long short-term memory (LSTM) networks, classification performance can be substantially enhanced. The proposed framework uses hybrid feature selection to eliminate noisy and irrelevant features, thereby improving the model's generalization capabilities. Experimental results reveal that the combination of hybrid feature selection and deep learning techniques not only boosts sentiment classification accuracy but also decreases computational overhead. This study highlights the effectiveness of merging traditional feature selection methods with modern deep learning models to better address the complexities of social media datasets and deliver more precise sentiment analysis. The result achieved by the proposed model is 98.50% on the social media dataset, which is higher than conventional approaches.
APA, Harvard, Vancouver, ISO, and other styles
7

Saproo, Dimple, Aparna N. Mahajan, Seema Narwal, and Niranjan Yadav. "Deep Feature Extraction and Classification of Diabetic Retinopathy Images using a Hybrid Approach." Engineering, Technology & Applied Science Research 15, no. 2 (2025): 21475–81. https://doi.org/10.48084/etasr.10188.

Full text
Abstract:
Hybrid approaches have improved sensitivity, accuracy, and specificity in Diabetic Retinopathy (DR) classification. Deep feature sets provide a more holistic analysis of the retinal images, resulting in better detection of premature signs of DR. Hybrid strategies for classifying DR images combine the strengths of extracted deep features using pre-trained networks and Machine Learning (ML)-based classifiers to improve classification accuracy, robustness, and efficiency. Perfect pre-trained networks VGG19, ResNet101, and Shuffle Net were considered in this work. The networks were trained using a transfer learning approach, the pre-trained networks were chosen according to their classification accuracy in conjunction with the Softmax layer. Enhanced characteristics were extracted from the pre-trained networks' last layer and were fed to the machine learning-based classifier. The feature reduction and selection methods are essential for accomplishing the desired classification accuracy. ML-based kNN classifier was used to classify DR and a PCA-based feature reduction approach was utilized to obtain optimized deep feature sets. The extensive experiments revealed that ResNet101-based deep feature extraction and the kNN classifier delivered enhanced classification accuracy. It is concluded that combining deep features and the ML-based classifier employing a hybrid method enhances accuracy, robustness, and efficiency. The hybrid approach is a powerful tool for the premature identification of DR abnormalities. The PCA-kNN classification algorithm, which employs features obtained from the ResNet101, attained a peak classification accuracy of 98.9%.
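A minimal sketch of the ResNet101 → PCA → kNN hybrid described above, assuming torchvision and scikit-learn; the transfer-learning fine-tuning step is omitted, and the PCA dimensionality and the value of k are illustrative, not the paper's tuned settings.

```python
# Illustrative deep-feature + PCA + kNN pipeline (assumed hyperparameters).
import torch
from torchvision import models, transforms
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

resnet = models.resnet101(weights="DEFAULT")
resnet.fc = torch.nn.Identity()          # expose the 2048-d global-pooled features
resnet.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def deep_features(pil_images):
    """Extract 2048-d ResNet101 features for a list of retinal images."""
    with torch.no_grad():
        batch = torch.stack([preprocess(img) for img in pil_images])
        return resnet(batch).numpy()     # shape (n, 2048)

def fit_pca_knn(features, labels, n_components=64, k=5):
    """PCA-reduced deep features classified with kNN."""
    return make_pipeline(PCA(n_components=n_components),
                         KNeighborsClassifier(n_neighbors=k)).fit(features, labels)
```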
APA, Harvard, Vancouver, ISO, and other styles
8

Chu, Yinghao, Chen Huang, Xiaodan Xie, Bohai Tan, Shyam Kamal, and Xiaogang Xiong. "Multilayer Hybrid Deep-Learning Method for Waste Classification and Recycling." Computational Intelligence and Neuroscience 2018 (November 1, 2018): 1–9. http://dx.doi.org/10.1155/2018/5060857.

Full text
Abstract:
This study proposes a multilayer hybrid deep-learning system (MHS) to automatically sort waste disposed of by individuals in the urban public area. This system deploys a high-resolution camera to capture waste image and sensors to detect other useful feature information. The MHS uses a CNN-based algorithm to extract image features and a multilayer perceptrons (MLP) method to consolidate image features and other feature information to classify wastes as recyclable or the others. The MHS is trained and validated against the manually labelled items, achieving overall classification accuracy higher than 90% under two different testing scenarios, which significantly outperforms a reference CNN-based method relying on image-only inputs.
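The image-plus-sensor fusion can be sketched roughly as below in Keras (assumed layer sizes and sensor count, not the published MHS architecture): a small CNN produces image features that are concatenated with auxiliary sensor readings before an MLP head makes the recyclable-vs-other decision.

```python
# Illustrative image + sensor fusion network (assumed sizes; not the published MHS).
from tensorflow import keras

def build_mhs_like(img_shape=(128, 128, 3), n_sensor_feats=6):
    img_in = keras.Input(shape=img_shape, name="image")
    x = keras.layers.Conv2D(32, 3, activation="relu")(img_in)
    x = keras.layers.MaxPooling2D()(x)
    x = keras.layers.Conv2D(64, 3, activation="relu")(x)
    x = keras.layers.GlobalAveragePooling2D()(x)            # CNN image features

    sensor_in = keras.Input(shape=(n_sensor_feats,), name="sensors")
    fused = keras.layers.Concatenate()([x, sensor_in])      # consolidate both sources
    h = keras.layers.Dense(64, activation="relu")(fused)    # MLP head
    out = keras.layers.Dense(1, activation="sigmoid")(h)    # recyclable vs. other

    model = keras.Model([img_in, sensor_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_mhs_like()
# model.fit({"image": imgs, "sensors": sensor_feats}, labels, epochs=10)
```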
APA, Harvard, Vancouver, ISO, and other styles
9

Atteia, Ghada, Michael J. Collins, Abeer D. Algarni, and Nagwan Abdel Samee. "Deep-Learning-Based Feature Extraction Approach for Significant Wave Height Prediction in SAR Mode Altimeter Data." Remote Sensing 14, no. 21 (2022): 5569. http://dx.doi.org/10.3390/rs14215569.

Full text
Abstract:
Predicting sea wave parameters such as significant wave height (SWH) has recently been identified as a critical requirement for maritime security and economy. Earth observation satellite missions have resulted in a massive rise in marine data volume and dimensionality. Deep learning technologies have proven their capabilities to process large amounts of data, draw useful insights, and assist in environmental decision making. In this study, a new deep-learning-based hybrid feature selection approach is proposed for SWH prediction using satellite Synthetic Aperture Radar (SAR) mode altimeter data. The introduced approach integrates the power of autoencoder deep neural networks in mapping input features into representative latent-space features with the feature selection power of the principal component analysis (PCA) algorithm to create significant features from altimeter observations. Several hybrid feature sets were generated using the proposed approach and utilized for modeling SWH using Gaussian Process Regression (GPR) and Neural Network Regression (NNR). SAR mode altimeter data from the Sentinel-3A mission calibrated by in situ buoy data was used for training and evaluating the SWH models. The significance of the autoencoder-based feature sets in improving the prediction performance of SWH models is investigated against original, traditionally selected, and hybrid features. The autoencoder–PCA hybrid feature set generated by the proposed approach recorded the lowest average RMSE values of 0.11069 for GPR models, which outperforms the state-of-the-art results. The findings of this study reveal the superiority of the autoencoder deep learning network in generating latent features that aid in improving the prediction performance of SWH models over traditional feature extraction methods.
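A hedged sketch of the autoencoder–PCA hybrid feature idea: latent features from a small dense autoencoder are concatenated with principal components of the original altimeter features and passed to Gaussian Process Regression. Layer widths, latent size, and the kernel are assumptions rather than the paper's configuration; tensorflow and scikit-learn are assumed dependencies.

```python
# Illustrative autoencoder + PCA hybrid features for SWH-style regression (assumed sizes).
import numpy as np
from tensorflow import keras
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def autoencoder_latent(X, latent_dim=8, epochs=100):
    """Train a small dense autoencoder and return its latent-space features."""
    n_features = X.shape[1]
    inp = keras.Input(shape=(n_features,))
    z = keras.layers.Dense(32, activation="relu")(inp)
    z = keras.layers.Dense(latent_dim, activation="relu", name="latent")(z)
    out = keras.layers.Dense(32, activation="relu")(z)
    out = keras.layers.Dense(n_features)(out)
    ae = keras.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(X, X, epochs=epochs, batch_size=32, verbose=0)    # reconstruct the inputs
    return keras.Model(inp, z).predict(X, verbose=0)

def hybrid_swh_model(X, y):
    """Concatenate autoencoder latent features with PCA components, then fit GPR."""
    latent = autoencoder_latent(X)
    pca_feats = PCA(n_components=4).fit_transform(X)
    hybrid = np.hstack([latent, pca_feats])                   # autoencoder-PCA hybrid set
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    return gpr.fit(hybrid, y)
```

The choice of concatenating rather than replacing the two feature sets reflects the abstract's framing of the autoencoder and PCA as complementary sources of a single hybrid feature set.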
APA, Harvard, Vancouver, ISO, and other styles
10

Kanyal, Hoshiyar Singh, Prakash Joshi, Jitendra Kumar Seth, Arnika, and Tarun Kumar Sharma. "A Hybrid Deep Learning Framework for MRI-Based Brain Tumor Classification Processing." International Journal of Experimental Research and Review 46 (December 30, 2024): 165–76. https://doi.org/10.52756/ijerr.2024.v46.013.

Full text
Abstract:
Classifying tumors from MRI scans is a key medical imaging and diagnosis task. Conventional feature-based methods and traditional machine learning algorithms are used for tumor classification, which limits their performance and generalization. A hybrid framework is implemented for the classification of brain tumors using MRIs. The framework contains three basic components, i.e., Feature Extraction, Feature Fusion, and Classification. The feature extraction module uses a convolutional neural network (CNN) to automatically extract high-level features from MRI images. The high-level features are combined with clinical and demographic features through a feature fusion module for better discriminative power. The Support vector machine (SVM) was employed to classify the fused features as class label tumors by a classification module. The proposed model obtained 90.67% accuracy, 94.67% precision, 83.82% recall and 83.71% f1-score. Experimental results demonstrate the superiority of our framework over those existing solutions and obtain exceptional accuracy rates compared to all other frequently operated models. This hybrid deep learning framework has promising performance for efficient and reproducible tumor classification within brain MRI scans.
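Schematically, the three modules (CNN feature extraction, fusion with clinical/demographic variables, SVM classification) can be sketched as below with PyTorch and scikit-learn; the ResNet18 backbone, grayscale handling, clinical feature format, and SVM settings are assumptions, not the paper's exact framework.

```python
# Illustrative CNN + clinical feature fusion with an SVM head (assumed components).
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

backbone = models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()         # 512-d image features
backbone.eval()

prep = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # MRI slices assumed as PIL images
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def fuse(mri_slices, clinical):
    """Concatenate CNN image features with clinical/demographic variables (n, k)."""
    with torch.no_grad():
        batch = torch.stack([prep(img) for img in mri_slices])
        deep = backbone(batch).numpy()
    return np.hstack([deep, clinical])

def fit_svm(X, y):
    """RBF-kernel SVM over the fused feature vectors."""
    return make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X, y)
```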
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Hybrid deep-STTF feature learning"

1

Nassar, Alaa S. N. "A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques." Thesis, University of Bradford, 2018. http://hdl.handle.net/10454/16917.

Full text
Abstract:
Multimodal biometric systems have been widely applied in many real-world applications due to its ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis is focused on the combination of both the face and the left and right irises, in a unified hybrid multimodal biometric identification system using different fusion approaches at the score and rank level. Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed as the Curvelet-Fractal approach, which based on merging the advantages of the Curvelet transform with Fractal dimension. Secondly, a novel framework based on merging the advantages of the local handcrafted feature descriptors with the deep learning approaches is proposed, Multimodal Deep Face Recognition (MDFR) framework, to address the face recognition problem in unconstrained conditions. Thirdly, an efficient deep learning system is employed, termed as IrisConvNet, whose architecture is based on a combination of Convolutional Neural Network (CNN) and Softmax classifier to extract discriminative features from an iris image. Finally, The performance of the unimodal and multimodal systems has been evaluated by conducting a number of extensive experiments on large-scale unimodal databases: FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1 and IITD and MMU1, and SDUMLA-HMT multimodal dataset. The results obtained have demonstrated the superiority of the proposed systems compared to the previous works by achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize the person’s identity.
Higher Committee for Education Development in Iraq
APA, Harvard, Vancouver, ISO, and other styles
2

Pan, Hsiang-Hua. "A Deep Learning Framework with Region Features and Hybrid Regression for Age Estimation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/dazdp6.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Mechanical Engineering, academic year 106. We propose the Region-based Hybrid Framework (RHF) with moving segmentation and soft-boundary regression for age estimation. The RHF is an ensemble of VGG networks, and each VGG net considers a specific facial region as input. The VGG is selected from a comparison of pretrained facial models originally designed for face recognition, but trained again for age estimation by transfer learning. To improve the accuracy of RHF, we implement two schemes, the moving segmentation and soft boundary regression. The moving segmentation better determines the boundary ages good to segment the age. The soft boundary regression can rectify the age estimate that is falsely classified by the moving segmentation. The proposed approach is validated by experiments on MORPH, LAP and Adience, and compared to the state-of-the-art methods to demonstrate its efficacy.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Hybrid deep-STTF feature learning"

1

Jena, Pradeep Kumar, Bonomali Khuntia, Sarbajit Mohanty, and Charulata Palai. "Multimodal Face Recognition System Using Hybrid Deep Learning Feature." In Lecture Notes in Electrical Engineering. Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-97-4359-9_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kadhar, S. A. Abdul, and S. Brintha Rajakumari. "Fake News Classification Using Feature Based Hybrid Deep Learning." In Communications in Computer and Information Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-86293-9_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Khaire, Utkarsh Mahadeo, R. Dhanalakshmi, and K. Balakrishnan. "Hybrid Marine Predator Algorithm with Simulated Annealing for Feature Selection." In Machine Learning and Deep Learning in Medical Data Analytics and Healthcare Applications. CRC Press, 2022. http://dx.doi.org/10.1201/9781003226147-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Verma, Jyoti, Isha Kansal, Renu Popli, et al. "A Hybrid Images Deep Trained Feature Extraction and Ensemble Learning Models for Classification of Multi Disease in Fundus Images." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59091-7_14.

Full text
Abstract:
Retinal disorders, including diabetic retinopathy and macular degeneration due to aging, can lead to preventable blindness in diabetics. Vision loss caused by diseases that affect the retinal fundus cannot be reversed if not diagnosed and treated on time. This paper employs deep-learned feature extraction with ensemble learning models to improve the multi-disease classification of fundus images. This research presents a novel approach to the multi-classification of fundus images, utilizing deep-learned feature extraction techniques and ensemble learning to diagnose retinal disorders and diagnosing eye illnesses involving feature extraction, classification, and preprocessing of fundus images. The study involves analysis of deep learning and implementation of image processing. The ensemble learning classifiers have used retinal photos to increase the classification accuracy. The results demonstrate improved accuracy in diagnosing retinal disorders using DL feature extraction and ensemble learning models. The study achieved an overall accuracy of 87.2%, which is a significant improvement over the previous study. The deep learning models utilized in the study, including NASNetMobile, InceptionResNetV4, VGG16, and Xception, were effective in extracting relevant features from the Fundus images. The average F1-score for Extra Tree was 99%, while for Histogram Gradient Boosting and Random Forest, it was 98.8% and 98.4%, respectively. The results show that all three algorithms are suitable for the classification task. The combination of DenseNet feature extraction technique and RF, ET, and HG classifiers outperforms other techniques and classifiers. This indicates that using DenseNet for feature extraction can effectively enhance the performance of classifiers in the task of image classification.
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Siqi, Daguang Xu, S. Kevin Zhou, Sasa Grbic, Weidong Cai, and Dorin Comaniciu. "Anisotropic Hybrid Network for Cross-Dimension Transferable Feature Learning in 3D Medical Images." In Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13969-8_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Muduli, Debendra, Santosh Kumar Sharma, Debasish Pradhan, Madhusmita Das, Suryakanta Mahapatra, and Saroj Kumar Sahoo. "A Hybrid Deep Learning Approach for COVID-19 Detection: Feature Fusion and SVM Classification." In Lecture Notes in Networks and Systems. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-8093-8_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Krishnachalitha, K. C., and C. Priya. "Wireless Sensor Network-Based Hybrid Intrusion Detection System on Feature Extraction Deep Learning and Reinforcement Learning Techniques." In Intelligent Computing and Innovation on Data Science. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3284-9_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Akanksha, Akanksha. "Tamil Language Automatic Speech Recognition Based on Integrated Feature Extraction and Hybrid Deep Learning Model." In Lecture Notes in Networks and Systems. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-9719-8_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kaur, Jasleen, and Jatinderkumar Saini. "Deep Learning and Super-Hybrid Textual Feature Based Multi-category Thematic Classifier for Punjabi Poetry." In Applied Computational Technologies. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2719-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wei, Xiongfei, Jing Wang, Yi Ruan, and Yuanjie Fang. "Wind Power Forecasting Model Based on Feature Processing and Deep Hybrid Kernel Extreme Learning Machine." In Communications in Computer and Information Science. Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-96-0232-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Hybrid deep-STTF feature learning"

1

Sreepadh, A. S., P. Sheshu, R. G. Reshiha, T. Vinay Kumar, and Gurusamy Jeyakumar. "A Hybrid Feature Selection Model for Early Prediction of Metabolic Syndrome." In 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL). IEEE, 2025. https://doi.org/10.1109/icsadl65848.2025.10933207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yuvalatha, S., S. Nithyapriya, R. Tamizh Kuzhali, S. Savitha, S. K. Muthusundar, and Antonidoss A. "Capsule-Infused Vision Transformer: A Hybrid Deep Learning for Hierarchical Feature Learning." In 2024 International Conference on Smart Technologies for Sustainable Development Goals (ICSTSDG). IEEE, 2024. https://doi.org/10.1109/icstsdg61998.2024.11026478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yue, Yuan, Jeremiah D. Deng, Tapabrata Chakraborti, Dirk De Ridder, and Patrick Manning. "Unsupervised Hybrid Deep Feature Encoder for Robust Feature Learning from Resting-State EEG Data." In 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2024. https://doi.org/10.1109/embc53108.2024.10781741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zaman, Wasim, Muhammad Farooq Siddique, and Jong-Myon Kim. "Centrifugal Pump Fault Detection with Hybrid Feature Pool and Deep Learning." In 2023 20th International Bhurban Conference on Applied Sciences and Technology (IBCAST). IEEE, 2023. http://dx.doi.org/10.1109/ibcast59916.2023.10712967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shyamala, N., and S. Mahaboob Basha. "Hybrid Deep Learning Model for Robust Brain Tumor Detection: Integrating Filtering and Feature Extraction." In 2025 8th International Conference on Trends in Electronics and Informatics (ICOEI). IEEE, 2025. https://doi.org/10.1109/icoei65986.2025.11013737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dongre, Aditya Kumar, and G. Kalaiarasi. "A Survey on Fake News Detection Using Multivariate Feature Selection and Hybrid Deep Learning Approach." In 2025 1st International Conference on AIML-Applications for Engineering & Technology (ICAET). IEEE, 2025. https://doi.org/10.1109/icaet63349.2025.10932142.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gowthamy, J., C. Suleka, W. Kenitawin Helena Pereira, and P. Monish Kumar. "Towards Smart Hybrid Feature Analysis and Deep neuroevolutionary Brain Tumour Prediction based on Transfer Learning." In 2024 3rd International Conference for Advancement in Technology (ICONAT). IEEE, 2024. https://doi.org/10.1109/iconat61936.2024.10774740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ponemash, Oleksandr, A. A. M. Muzahid, Reda Lamtoueh, Hua Han, Yujin Zhang, and Ferdous Sohel. "Hybrid Feature Extraction for 12-Lead ECG Classification by Integrating Handcrafted and Deep Learning Techniques." In 2025 17th International Conference on Computer and Automation Engineering (ICCAE). IEEE, 2025. https://doi.org/10.1109/iccae64891.2025.10980605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mahendra, Wahyu Nata, and Erwin Budi Setiawan. "Detection of Depression on Social Media X With FastText Feature Expansion Using Hybrid Deep Learning CNN-GRU." In 2025 International Conference on Advancement in Data Science, E-learning and Information System (ICADEIS). IEEE, 2025. https://doi.org/10.1109/icadeis65852.2025.10933068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bunterngchit, Chayut, Thanaphon Chearanai, and Yuthachai Bunterngchit. "Towards Robust Cross-Subject EEG-fNIRS Classification: A Hybrid Deep Learning Model with Optimized Feature Selection." In 2024 22nd International Conference on Research and Education in Mechatronics (REM). IEEE, 2024. http://dx.doi.org/10.1109/rem63063.2024.10735694.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Hybrid deep-STTF feature learning"

1

Pasupuleti, Murali Krishna. Quantum-Enhanced Machine Learning: Harnessing Quantum Computing for Next-Generation AI Systems. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv125.

Full text
Abstract:
Quantum-enhanced machine learning (QML) represents a paradigm shift in artificial intelligence by integrating quantum computing principles to solve complex computational problems more efficiently than classical methods. By leveraging quantum superposition, entanglement, and parallelism, QML has the potential to accelerate deep learning training, optimize combinatorial problems, and enhance feature selection in high-dimensional spaces. This research explores foundational quantum computing concepts relevant to AI, including quantum circuits, variational quantum algorithms, and quantum kernel methods, while analyzing their impact on neural networks, generative models, and reinforcement learning. Hybrid quantum-classical AI architectures, which combine quantum subroutines with classical deep learning models, are examined for their ability to provide computational advantages in optimization and large-scale data processing. Despite the promise of quantum AI, challenges such as qubit noise, error correction, and hardware scalability remain barriers to full-scale implementation. This study provides an in-depth evaluation of quantum-enhanced AI, highlighting existing applications, ongoing research, and future directions in quantum deep learning, autonomous systems, and scientific computing. The findings contribute to the development of scalable quantum machine learning frameworks, offering novel solutions for next-generation AI systems across finance, healthcare, cybersecurity, and robotics.
Keywords: Quantum machine learning, quantum computing, artificial intelligence, quantum neural networks, quantum kernel methods, hybrid quantum-classical AI, variational quantum algorithms, quantum generative models, reinforcement learning, quantum optimization, quantum advantage, deep learning, quantum circuits, quantum-enhanced AI, quantum deep learning, error correction, quantum-inspired algorithms, quantum annealing, probabilistic computing.
APA, Harvard, Vancouver, ISO, and other styles
2

Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Elias Ioup, et al. KANICE : Kolmogorov-Arnold networks with interactive convolutional elements. Engineer Research and Development Center (U.S.), 2025. https://doi.org/10.21079/11681/49791.

Full text
Abstract:
We introduce KANICE, a novel neural architecture that combines Convolutional Neural Networks (CNNs) with Kolmogorov-Arnold Network (KAN) principles. KANICE integrates Interactive Convolutional Blocks (ICBs) and KAN linear layers into a CNN framework. This leverages KANs’ universal approximation capabilities and ICBs’ adaptive feature learning. KANICE captures complex, non-linear data relationships while enabling dynamic, context-dependent feature extraction based on the Kolmogorov-Arnold representation theorem. We evaluated KANICE on four datasets: MNIST, Fashion-MNIST, EMNIST, and SVHN, comparing it against standard CNNs, CNN-KAN hybrids, and ICB variants. KANICE consistently outperformed baseline models, achieving 99.35% accuracy on MNIST and 90.05% on the SVHN dataset. Furthermore, we introduce KANICE-mini, a compact variant designed for efficiency. A comprehensive ablation study demonstrates that KANICE-mini achieves comparable performance to KANICE with significantly fewer parameters. KANICE-mini reached 90.00% accuracy on SVHN with 2,337,828 parameters, compared to KAN-ICE’s 25,432,000. This study highlights the potential of KAN-based architectures in balancing performance and computational efficiency in image classification tasks. Our work contributes to research in adaptive neural networks, integrates mathematical theorems into deep learning architectures, and explores the trade-offs between model complexity and performance, advancing computer vision and pattern recognition. The source code for this paper is publicly accessible through our GitHub repository (https://github.com/mferdaus/kanice).
APA, Harvard, Vancouver, ISO, and other styles
