Journal articles on the topic 'Multi-window based ensemble learning'

Consult the top 50 journal articles for your research on the topic 'Multi-window based ensemble learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Li, Hu, Ye Wang, Hua Wang, and Bin Zhou. "Multi-window based ensemble learning for classification of imbalanced streaming data." World Wide Web 20, no. 6 (March 8, 2017): 1507–25. http://dx.doi.org/10.1007/s11280-017-0449-x.

2

Abdillah, Abid Famasya, Cornelius Bagus Purnama Putra, Apriantoni Apriantoni, Safitri Juanita, and Diana Purwitasari. "Ensemble-based Methods for Multi-label Classification on Biomedical Question-Answer Data." Journal of Information Systems Engineering and Business Intelligence 8, no. 1 (April 26, 2022): 42–50. http://dx.doi.org/10.20473/jisebi.8.1.42-50.

Abstract:
Background: Question-answer (QA) is a popular method to seek health-related information and biomedical data. Such questions can refer to more than one medical entity (multi-label) so determining the correct tags is not easy. The question classification (QC) mechanism in a QA system can narrow down the answers we are seeking. Objective: This study develops a multi-label classification using the heterogeneous ensembles method to improve accuracy in biomedical data with long text dimensions. Methods: We used the ensemble method with heterogeneous deep learning and machine learning for multi-label extended text classification. There are 15 various single models consisting of three deep learning (CNN, LSTM, and BERT) and four machine learning algorithms (SVM, kNN, Decision Tree, and Naïve Bayes) with various text representations (TF-IDF, Word2Vec, and FastText). We used the bagging approach with a hard voting mechanism for the decision-making. Results: The result shows that deep learning is more powerful than machine learning as a single multi-label biomedical data classification method. Moreover, we found that top-three was the best number of base learners by combining the ensembles method. Heterogeneous-based ensembles with three learners resulted in an F1-score of 82.3%, which is better than the best single model by CNN with an F1-score of 80%. Conclusion: A multi-label classification of biomedical QA using ensemble models is better than single models. The result shows that heterogeneous ensembles are more potent than homogeneous ensembles on biomedical QA data with long text dimensions. Keywords: Biomedical Question Classification, Ensemble Method, Heterogeneous Ensembles, Multi-Label Classification, Question Answering
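
As a rough, hedged illustration of the hard-voting idea described in this abstract (not the authors' pipeline, which mixes deep learning models such as CNN, LSTM and BERT with classical learners), the sketch below majority-votes three scikit-learn text classifiers over TF-IDF features; the toy questions and the two labels are invented for the example.

```python
# Minimal sketch of a heterogeneous hard-voting ensemble (illustrative only).
from collections import Counter
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB

texts = ["what causes iron deficiency anemia", "ibuprofen dosage for adults",
         "symptoms of type 2 diabetes", "aspirin drug interactions",
         "is hypertension hereditary", "metformin side effects"]
labels = ["disease", "drug", "disease", "drug", "disease", "drug"]

models = [make_pipeline(TfidfVectorizer(), LinearSVC()),
          make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3)),
          make_pipeline(TfidfVectorizer(), MultinomialNB())]
for model in models:
    model.fit(texts, labels)

def hard_vote(question):
    votes = [model.predict([question])[0] for model in models]  # one vote per learner
    return Counter(votes).most_common(1)[0][0]                  # majority decision

print(hard_vote("recommended dose of paracetamol"))
```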
3

Meng, Jinyu, Zengchuan Dong, Yiqing Shao, Shengnan Zhu, and Shujun Wu. "Monthly Runoff Forecasting Based on Interval Sliding Window and Ensemble Learning." Sustainability 15, no. 1 (December 21, 2022): 100. http://dx.doi.org/10.3390/su15010100.

Abstract:
In recent years, machine learning, a popular artificial intelligence technique, has been successfully applied to monthly runoff forecasting. Monthly runoff autoregressive forecasting using machine learning models generally uses a sliding window algorithm to construct the dataset, which requires the selection of the optimal time step to make the machine learning tool function as intended. Based on this, this study improved the sliding window algorithm and proposes an interval sliding window (ISW) algorithm based on correlation coefficients, while the least absolute shrinkage and selection operator (LASSO) method was used to combine three machine learning models, Random Forest (RF), LightGBM, and CatBoost, into an ensemble to overcome the preference problem of individual models. Example analyses were conducted using 46 years of monthly runoff data from Jiutiaoling and Zamusi stations in the Shiyang River Basin, China. The results show that the ISW algorithm can effectively handle monthly runoff data and that the ISW algorithm produced a better dataset than the sliding window algorithm in the machine learning models. The forecast performance of the ensemble model combined the advantages of the single models and achieved the best forecast accuracy.
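
The basic data preparation step named here, turning a monthly runoff series into supervised (lag window, next value) pairs, can be sketched as follows. This shows only a plain fixed-length sliding window with an off-the-shelf regressor on a synthetic series; it is not the paper's correlation-based interval sliding window, nor its LASSO-weighted RF/LightGBM/CatBoost ensemble.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def sliding_window(series, window):
    """Build lagged rows: the previous `window` values predict the next one."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

rng = np.random.default_rng(0)
runoff = np.sin(np.linspace(0, 24, 240)) + 0.1 * rng.random(240)  # toy monthly series

X, y = sliding_window(runoff, window=12)                  # 12-month time step
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:-24], y[:-24])
print(model.predict(X[-24:])[:3])                         # forecasts for held-out months
```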
4

Shen, Zhiqiang, Zhankui He, and Xiangyang Xue. "MEAL: Multi-Model Ensemble via Adversarial Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4886–93. http://dx.doi.org/10.1609/aaai.v33i01.33014886.

Abstract:
Often the best performing deep neural models are ensembles of multiple base-level networks. Unfortunately, the space required to store these many networks, and the time required to execute them at test-time, prohibits their use in applications where test sets are large (e.g., ImageNet). In this paper, we present a method for compressing large, complex trained ensembles into a single network, where knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred to a single DNN. In order to distill diverse knowledge from different trained (teacher) models, we propose to use adversarial-based learning strategy where we define a block-wise training loss to guide and optimize the predefined student network to recover the knowledge in teacher models, and to promote the discriminator network to distinguish teacher vs. student features simultaneously. The proposed ensemble method (MEAL) of transferring distilled knowledge with adversarial learning exhibits three important advantages: (1) the student network that learns the distilled knowledge with discriminators is optimized better than the original model; (2) fast inference is realized by a single forward pass, while the performance is even better than traditional ensembles from multi-original models; (3) the student network can learn the distilled knowledge from a teacher model that has arbitrary structures. Extensive experiments on CIFAR-10/100, SVHN and ImageNet datasets demonstrate the effectiveness of our MEAL method. On ImageNet, our ResNet-50 based MEAL achieves top-1/5 21.79%/5.99% val error, which outperforms the original model by 2.06%/1.14%.
5

Koohzadi, Maryam, Nasrollah Moghadam Charkari, and Foad Ghaderi. "Unsupervised representation learning based on the deep multi-view ensemble learning." Applied Intelligence 50, no. 2 (July 31, 2019): 562–81. http://dx.doi.org/10.1007/s10489-019-01526-0.

6

Shan, Shuo, Chenxi Li, Zhetong Ding, Yiye Wang, Kanjian Zhang, and Haikun Wei. "Ensemble learning based multi-modal intra-hour irradiance forecasting." Energy Conversion and Management 270 (October 2022): 116206. http://dx.doi.org/10.1016/j.enconman.2022.116206.

7

Aboneh, Tagel, Abebe Rorissa, and Ramasamy Srinivasagan. "Stacking-Based Ensemble Learning Method for Multi-Spectral Image Classification." Technologies 10, no. 1 (January 26, 2022): 17. http://dx.doi.org/10.3390/technologies10010017.

Abstract:
Higher dimensionality, the Hughes phenomenon, spatial resolution of image data, and presence of mixed pixels are the main challenges in a multi-spectral image classification process. Most of the classical machine learning algorithms struggle to score optimal classification performance over multi-spectral image data. In this study, we propose a stacking-based ensemble learning approach to optimize image classification performance. In addition, we integrate the proposed ensemble learning with the XGBoost method to further improve its classification accuracy. To conduct the experiment, the Landsat image data has been acquired from Bishoftu town located in the Oromia region of Ethiopia. The current study’s main objective was to assess the performance of land cover and land use analysis using multi-spectral image data. Results from our experiment indicate that the proposed ensemble learning method outperforms all of the strong base classifiers with 99.96% classification performance accuracy.
8

Kwon, Beom, and Sanghoon Lee. "Ensemble Learning for Skeleton-Based Body Mass Index Classification." Applied Sciences 10, no. 21 (November 4, 2020): 7812. http://dx.doi.org/10.3390/app10217812.

Abstract:
In this study, we performed skeleton-based body mass index (BMI) classification by developing a unique ensemble learning method for human healthcare. Traditionally, anthropometric features, including the average length of each body part and average height, have been utilized for this kind of classification. Average values are generally calculated for all frames because the length of body parts and the subject height vary over time, as a result of the inaccuracy in pose estimation. Thus, traditionally, anthropometric features are measured over a long period. In contrast, we controlled the window used to measure anthropometric features over short/mid/long-term periods. This approach enables our proposed ensemble model to obtain robust and accurate BMI classification results. To produce final results, the proposed ensemble model utilizes multiple k-nearest neighbor classifiers trained using anthropometric features measured over several different time periods. To verify the effectiveness of the proposed model, we evaluated it using a public dataset. The simulation results demonstrate that the proposed model achieves state-of-the-art performance when compared with benchmark methods.
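
A toy sketch of the multi-window voting idea summarised above, one k-nearest-neighbour classifier per measurement window combined by majority vote, is given below. The random "skeleton" features, the three window lengths and the three BMI classes are placeholders, not the paper's anthropometric features or its public dataset.

```python
import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
frames = rng.normal(size=(40, 300, 5))      # toy per-frame features: (subjects, frames, features)
bmi_class = rng.integers(0, 3, size=40)     # toy BMI class labels

windows = {"short": 30, "mid": 120, "long": 300}   # numbers of frames to average over
models = {}
for name, w in windows.items():
    X = frames[:, :w, :].mean(axis=1)       # average the features over each window
    models[name] = KNeighborsClassifier(n_neighbors=3).fit(X, bmi_class)

def classify(subject_frames):
    votes = [models[name].predict(subject_frames[:w, :].mean(axis=0, keepdims=True))[0]
             for name, w in windows.items()]
    return Counter(votes).most_common(1)[0][0]     # majority vote across the windows

print(classify(frames[0]))
```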
9

Krasnopolsky, Vladimir M., and Ying Lin. "A Neural Network Nonlinear Multimodel Ensemble to Improve Precipitation Forecasts over Continental US." Advances in Meteorology 2012 (2012): 1–11. http://dx.doi.org/10.1155/2012/649450.

Abstract:
A novel multimodel ensemble approach based on learning from data using the neural network (NN) technique is formulated and applied for improving 24-hour precipitation forecasts over the continental US. The developed nonlinear approach allowed us to account for nonlinear correlation between ensemble members and to produce “optimal” forecast represented by a nonlinear NN ensemble mean. The NN approach is compared with the conservative multi-model ensemble, with multiple linear regression ensemble approaches, and with results obtained by human forecasters. The NN multi-model ensemble improves upon conservative multi-model ensemble and multiple linear regression ensemble, it (1) significantly reduces high bias at low precipitation level, (2) significantly reduces low bias at high precipitation level, and (3) sharpens features making them closer to the observed ones. The NN multi-model ensemble performs at least as well as human forecasters supplied with the same information. The developed approach is a generic approach that can be applied to other multi-model ensemble fields as well as to single model ensembles.
10

Kang, Xiangping, Deyu Li, and Suge Wang. "A multi-instance ensemble learning model based on concept lattice." Knowledge-Based Systems 24, no. 8 (December 2011): 1203–13. http://dx.doi.org/10.1016/j.knosys.2011.05.010.

11

Wang, Qiangqiang, Dechun Zhao, Yi Wang, and Xiaorong Hou. "Ensemble learning algorithm based on multi-parameters for sleep staging." Medical & Biological Engineering & Computing 57, no. 8 (May 18, 2019): 1693–707. http://dx.doi.org/10.1007/s11517-019-01978-z.

12

Hussein, Salam Allawi, Alyaa Abduljawad Mahmood, and Emaan Oudah Oraby. "Network Intrusion Detection System Using Ensemble Learning Approaches." Webology 18, SI05 (October 30, 2021): 962–74. http://dx.doi.org/10.14704/web/v18si05/web18274.

Abstract:
To mitigate modern network intruders in a rapidly growing and fast pattern changing network traffic data, single classifier is not sufficient. In this study Chi-Square feature selection technique is used to select the most important features of network traffic data, then AdaBoost, Random Forest (RF), and XGBoost ensemble classifiers were used to classify data based on binary-classes and multi-classes. The aim of this study is to improve detection rate accuracy for every individual attack types and all types of attacks, which will help us to identify attacks and particular category of attacks. The proposed method is evaluated using k-fold cross validation, and the experimental results of all the three classifiers with and without feature selection are compared together. We used two different datasets in our experiments to evaluate the model performance. The used datasets are NSL-KDD and UNSW-NB15.
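
A hedged sketch of the pipeline outlined above (chi-square feature selection feeding an ensemble classifier, evaluated with k-fold cross-validation) is shown below. It uses a synthetic dataset and a random forest as a stand-in for the AdaBoost/RF/XGBoost comparison; the min-max scaler is only there because scikit-learn's chi2 score requires non-negative inputs, and none of this reproduces the NSL-KDD or UNSW-NB15 experiments.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=500, n_features=40, n_informative=8, random_state=0)

pipe = make_pipeline(
    MinMaxScaler(),                  # chi2 needs non-negative feature values
    SelectKBest(chi2, k=15),         # keep the 15 highest-scoring features
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(pipe, X, y, cv=5)    # k-fold cross-validation
print(scores.mean())
```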
13

Kondo, Nobuhiko, Toshiharu Hatanaka, and Katsuji Uosaki. "RBF Networks Ensemble Construction based on Evolutionary Multi-objective Optimization." Journal of Advanced Computational Intelligence and Intelligent Informatics 12, no. 3 (May 20, 2008): 297–303. http://dx.doi.org/10.20965/jaciii.2008.p0297.

Abstract:
The ensemble learning has attracted much attention over the last decade. While constructing RBF network ensemble the generally encountered problems are how to construct diverse RBF networks and how to combine outputs. The construction of RBF network can be considered as a multi-objective optimization problem regarding model complexity. A set of RBF networks which is multi-objectively optimized can be obtained by solving the above-mentioned problem. In this paper the construction of RBF networks by evolutionary multi-objective optimization method and its ensemble are considered, and it is applied to the pattern classification problem. Also some ensemble member selection methods and output combination methods are considered. Experimental study on the benchmark problem of pattern classification is carried out; then it is illustrated that the RBF network ensemble has a performance, which is comparable to that of other ensemble methods.
14

Rong, Zihao, Shaofan Wang, Dehui Kong, and Baocai Yin. "A Cascaded Ensemble of Sparse-and-Dense Dictionaries for Vehicle Detection." Applied Sciences 11, no. 4 (February 20, 2021): 1861. http://dx.doi.org/10.3390/app11041861.

Abstract:
Vehicle detection as a special case of object detection has practical meaning but faces challenges, such as the difficulty of detecting vehicles of various orientations, the serious influence from occlusion, the clutter of background, etc. In addition, existing effective approaches, like deep-learning-based ones, demand a large amount of training time and data, which causes trouble for their application. In this work, we propose a dictionary-learning-based vehicle detection approach which explicitly addresses these problems. Specifically, an ensemble of sparse-and-dense dictionaries (ESDD) are learned through supervised low-rank decomposition; each pair of sparse-and-dense dictionaries (SDD) in the ensemble is trained to represent either a subcategory of vehicle (corresponding to certain orientation range or occlusion level) or a subcategory of background (corresponding to a cluster of background patterns) and only gives good reconstructions to samples of the corresponding subcategory, making the ESDD capable of classifying vehicles from background even though they exhibit various appearances. We further organize ESDD into a two-level cascade (CESDD) to perform coarse-to-fine two-stage classification for better performance and computation reduction. The CESDD is then coupled with a downstream AdaBoost process to generate robust classifications. The proposed CESDD model is used as a window classifier in a sliding-window scan process over image pyramids to produce multi-scale detections, and an adapted mean-shift-like non-maximum suppression process is adopted to remove duplicate detections. Our CESDD vehicle detection approach is evaluated on KITTI dataset and compared with other strong counterparts; the experimental results exhibit the effectiveness of CESDD-based classification and detection, and the training of CESDD only demands small amount of time and data.
15

Jiang, Zhen, and Yong-Zhao Zhan. "A Novel Diversity-Based Semi-Supervised Learning Framework with Related Theoretical Analysis." International Journal on Artificial Intelligence Tools 24, no. 03 (June 2015): 1550011. http://dx.doi.org/10.1142/s0218213015500116.

Abstract:
We present a new co-training style framework and combine it with ensemble learning to further improve the generalization ability. By employing different strategies to combine co-training with ensemble learning, two learning algorithms, Sequential Ensemble Co-Learning (SECL) and Parallel Ensemble Co-Learning (PECL) are developed. Furthermore, we propose a weighted bagging method in PECL to generate an ensemble of diverse classifiers at the end of co-training. Finally, based on the voting margin, an upper bound on the generalization error of multi-classifier voting systems is given in the presence of both classification noise and distribution noise. Experimental results on six datasets show that our method performs better than other compared algorithms.
16

Iyer, V., S. Shetty, and S. S. Iyengar. "STATISTICAL METHODS IN AI: RARE EVENT LEARNING USING ASSOCIATIVE RULES AND HIGHER-ORDER STATISTICS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-4/W2 (July 10, 2015): 119–30. http://dx.doi.org/10.5194/isprsannals-ii-4-w2-119-2015.

Abstract:
Rare event learning has not been actively researched until recently due to the unavailability of algorithms which deal with big samples. The research addresses spatio-temporal streams from multi-resolution sensors to find actionable items from a perspective of real-time algorithms. This computing framework is independent of the number of input samples, the application domain, and labelled or label-less streams. A sampling overlap algorithm such as Brooks-Iyengar is used for dealing with noisy sensor streams. We extend the existing noise pre-processing algorithms using Data-Cleaning trees. Pre-processing using an ensemble of trees with bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove that a temporal window based sampling from sensor data streams converges after n samples using Hoeffding bounds, which can be used for fast prediction of new samples in real time. The Data-Cleaning tree model uses a nonparametric node splitting technique, which can be learned in an iterative way and scales linearly in memory consumption for any size of input stream. The improved task based ensemble extraction is compared with non-linear computation models using various SVM kernels for speed and accuracy. We show using empirical datasets that the explicit rule learning computation is linear in time and is only dependent on the number of leaves present in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields a minimum number (m) of leaves, keeping pre-processing computation to n × t log m compared to N² for the Gram matrix. We also show that the task based feature induction yields higher Quality of Data (QoD) in the feature space compared to kernel methods using the Gram matrix.
17

Ban, Yuseok, and Kyungjae Lee. "Multi-Scale Ensemble Learning for Thermal Image Enhancement." Applied Sciences 11, no. 6 (March 22, 2021): 2810. http://dx.doi.org/10.3390/app11062810.

Abstract:
In this study, we propose a multi-scale ensemble learning method for thermal image enhancement in different image scale conditions based on convolutional neural networks. Incorporating the multiple scales of thermal images has been a tricky task so that methods have been individually trained and evaluated for each scale. However, this leads to the limitation that a network properly operates on a specific scale. To address this issue, a novel parallel architecture leveraging the confidence maps of multiple scales have been introduced to train a network that operates well in varying scale conditions. The experimental results show that our proposed method outperforms the conventional thermal image enhancement methods. The evaluation is presented both quantitatively and qualitatively.
18

Thapa, Niraj, Zhipeng Liu, Addison Shaver, Albert Esterline, Balakrishna Gokaraju, and Kaushik Roy. "Secure Cyber Defense: An Analysis of Network Intrusion-Based Dataset CCD-IDSv1 with Machine Learning and Deep Learning Models." Electronics 10, no. 15 (July 21, 2021): 1747. http://dx.doi.org/10.3390/electronics10151747.

Abstract:
Anomaly detection and multi-attack classification are major concerns for cyber defense. Several publicly available datasets have been used extensively for the evaluation of Intrusion Detection Systems (IDSs). However, most of the publicly available datasets may not contain attack scenarios based on evolving threats. The development of a robust network intrusion dataset is vital for network threat analysis and mitigation. Proactive IDSs are required to tackle ever-growing threats in cyberspace. Machine learning (ML) and deep learning (DL) models have been deployed recently to detect the various types of cyber-attacks. However, current IDSs struggle to attain both a high detection rate and a low false alarm rate. To address these issues, we first develop a Center for Cyber Defense (CCD)-IDSv1 labeled flow-based dataset in an OpenStack environment. Five different attacks with normal usage imitating real-life usage are implemented. The number of network features is increased to overcome the shortcomings of the previous network flow-based datasets such as CIDDS and CIC-IDS2017. Secondly, this paper presents a comparative analysis on the effectiveness of different ML and DL models on our CCD-IDSv1 dataset. In this study, we consider both cyber anomaly detection and multi-attack classification. To improve the performance, we developed two DL-based ensemble models: Ensemble-CNN-10 and Ensemble-CNN-LSTM. Ensemble-CNN-10 combines 10 CNN models developed from 10-fold cross-validation, whereas Ensemble-CNN-LSTM combines base CNN and LSTM models. This paper also presents feature importance for both anomaly detection and multi-attack classification. Overall, the proposed ensemble models performed well in both the 10-fold cross-validation and independent testing on our dataset. Together, these results suggest the robustness and effectiveness of the proposed IDSs based on ML and DL models on the CCD-IDSv1 intrusion detection dataset.
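
To make the fold-averaging part of "Ensemble-CNN-10" concrete in miniature (one model per cross-validation fold, predictions combined by averaging class probabilities), here is a small stand-in using scikit-learn MLPs on synthetic tabular data; the paper's actual members are CNN and LSTM models trained on CCD-IDSv1 flows, which this does not attempt to reproduce.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in tabular "flow" features, not the CCD-IDSv1 dataset.
X, y = make_classification(n_samples=1500, n_features=25, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

members = []
for train_idx, _ in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X_tr, y_tr):
    members.append(MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
                   .fit(X_tr[train_idx], y_tr[train_idx]))

# Ensemble by averaging the ten members' class probabilities.
proba = np.mean([m.predict_proba(X_te) for m in members], axis=0)
print((proba.argmax(axis=1) == y_te).mean())
```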
19

Ding, Weimin, and Shengli Wu. "A cross-entropy based stacking method in ensemble learning." Journal of Intelligent & Fuzzy Systems 39, no. 3 (October 7, 2020): 4677–88. http://dx.doi.org/10.3233/jifs-200600.

Abstract:
Stacking is one of the major types of ensemble learning techniques in which a set of base classifiers contributes their outputs to the meta-level classifier, and the meta-level classifier combines them so as to produce more accurate classifications. In this paper, we propose a new stacking algorithm that defines the cross-entropy as the loss function for the classification problem. The training process is conducted by using a neural network with the stochastic gradient descent technique. One major characteristic of our method is its treatment of each meta instance as a whole with one optimization model, which is different from some other stacking methods such as stacking with multi-response linear regression and stacking with multi-response model trees. In these methods each meta instance is divided into a set of sub-instances. Multiple models apply to those sub-instances and each for a class label. There is no connection between different models. It is very likely that our treatment is a better choice for finding suitable weights. Experiments with 22 data sets from the UCI machine learning repository show that the proposed stacking approach performs well. It outperforms all three base classifiers, several state-of-the-art stacking algorithms, and some other representative ensemble learning methods on average.
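
A generic stacking skeleton of the kind discussed here, in which base classifiers produce out-of-fold class probabilities and a meta-level model is then trained on those meta instances with a cross-entropy (log) loss by stochastic gradient descent, might look as follows; the base learners, the synthetic data and the use of scikit-learn's SGDClassifier are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bases = [DecisionTreeClassifier(random_state=0), GaussianNB(), KNeighborsClassifier()]

# Meta-level instances: concatenated out-of-fold class probabilities of every base learner.
Z_tr = np.hstack([cross_val_predict(b, X_tr, y_tr, cv=5, method="predict_proba")
                  for b in bases])
# Meta-level classifier trained with cross-entropy ("log_loss"; "log" in older scikit-learn).
meta = SGDClassifier(loss="log_loss", max_iter=1000, random_state=0).fit(Z_tr, y_tr)

Z_te = np.hstack([b.fit(X_tr, y_tr).predict_proba(X_te) for b in bases])
print(meta.score(Z_te, y_te))
```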
20

Wang, Xiaoying, Bin Yu, Anjun Ma, Cheng Chen, Bingqiang Liu, and Qin Ma. "Protein–protein interaction sites prediction by ensemble random forests with synthetic minority oversampling technique." Bioinformatics 35, no. 14 (December 5, 2018): 2395–402. http://dx.doi.org/10.1093/bioinformatics/bty995.

Abstract:
Motivation: The prediction of protein–protein interaction (PPI) sites is a key to mutation design, catalytic reaction and the reconstruction of PPI networks. It is a challenging task considering the significant abundant sequences and the imbalance issue in samples. Results: A new ensemble learning-based method, Ensemble Learning of synthetic minority oversampling technique (SMOTE) for Unbalancing samples and RF algorithm (EL-SMURF), was proposed for PPI sites prediction in this study. The sequence profile feature and the residue evolution rates were combined for feature extraction of neighboring residues using a sliding window, and the SMOTE was applied to oversample interface residues in the feature space for the imbalance problem. The Multi-dimensional Scaling feature selection method was implemented to reduce feature redundancy and subset selection. Finally, the Random Forest classifiers were applied to build the ensemble learning model, and the optimal feature vectors were inserted into EL-SMURF to predict PPI sites. The performance validation of EL-SMURF on two independent validation datasets showed 77.1% and 77.7% accuracy, which were 6.2–15.7% and 6.1–18.9% higher than the other existing tools, respectively. Availability and implementation: The source codes and data used in this study are publicly available at http://github.com/QUST-AIBBDRC/EL-SMURF/. Supplementary information: Supplementary data are available at Bioinformatics online.
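
The imbalance-handling step named in this abstract, SMOTE oversampling before a random-forest ensemble, can be sketched as below. The snippet assumes the imbalanced-learn package and uses synthetic features, so it stands in for neither the EL-SMURF sequence-profile features nor its multi-dimensional scaling feature selection.

```python
from collections import Counter
from imblearn.over_sampling import SMOTE            # assumes imbalanced-learn is installed
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in imbalanced data (roughly 9:1), not real PPI-site features.
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample the minority class
print(Counter(y_tr), Counter(y_bal))

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
print(clf.score(X_te, y_te))
```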
21

Li, Xiaomeng, Jianhong Yang, Fu Chang, Xiaomin Zheng, and Xiaoxia He. "LIBS quantitative analysis for vanadium slags based on selective ensemble learning." Journal of Analytical Atomic Spectrometry 34, no. 6 (2019): 1135–44. http://dx.doi.org/10.1039/c9ja00035f.

22

Chen, Yuanyuan, and Zhibin Wang. "Wavelength Selection for NIR Spectroscopy Based on the Binary Dragonfly Algorithm." Molecules 24, no. 3 (January 24, 2019): 421. http://dx.doi.org/10.3390/molecules24030421.

Abstract:
Wavelength selection is an important preprocessing issue in near-infrared (NIR) spectroscopy analysis and modeling. Swarm optimization algorithms (such as genetic algorithm, bat algorithm, etc.) have been successfully applied to select the most effective wavelengths in previous studies. However, these algorithms suffer from the problem of unrobustness, which means that the selected wavelengths of each optimization are different. To solve this problem, this paper proposes a novel wavelength selection method based on the binary dragonfly algorithm (BDA), which includes three typical frameworks: single-BDA, multi-BDA, ensemble learning-based BDA settings. The experimental results for the public gasoline NIR spectroscopy dataset showed that: (1) By using the multi-BDA and ensemble learning-based BDA methods, the stability of wavelength selection can improve; (2) With respect to the generalized performance of the quantitative analysis model, the model established with the wavelengths selected by using the multi-BDA and the ensemble learning-based BDA methods outperformed the single-BDA method. The results also indicated that the proposed method is not limited to the dragonfly algorithm but can also be combined with other swarm optimization algorithms. In addition, the ensemble learning idea can be applied to other feature selection areas to obtain more robust results.
23

Yin, Xiaoyan, Yupeng Fan, Yi Qin, Haojie Jiang, Hao Jiang, and Xiang Ye. "Fault Detection of Wind Turbine Pitch Motors Based on Ensemble Learning Approach." Journal of Physics: Conference Series 2401, no. 1 (December 1, 2022): 012086. http://dx.doi.org/10.1088/1742-6596/2401/1/012086.

Abstract:
Machine learning-based condition monitoring of wind turbines’ critical components is an active area of research, especially for pitch systems, which suffer from a high failure rate. In this work, we successfully predicted and detected the high-temperature fault of the electric pitch motor by analyzing SCADA data through the ensemble learning-based approach. For that, normal behavior models to predict pitch motor temperature were constructed respectively for three pitch motors by gradient boosting tree regression. Residual evolution before the reported high-temperature fault was studied by the sliding window approach. A Shewhart control chart was applied to detect the anomalies of temperature. The proposed approach successfully gave an early warning for the potential high-temperature fault of electric pitch motors around ten days prior to the SCADA system.
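
In outline, the detection scheme described here, a normal-behaviour regression model for motor temperature, residuals tracked over a sliding window, and a Shewhart control chart on the window mean, could look like the toy sketch below; the SCADA-like signals are simulated, and the gradient-boosting model and three-sigma limits are generic stand-ins rather than the study's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))                       # stand-in SCADA inputs
temp = 40 + 2 * X[:, 0] + 3 * X[:, 1] + X[:, 2] + rng.normal(0, 0.5, 2000)
temp[1900:] += 5                                     # simulated overheating at the end

model = GradientBoostingRegressor(random_state=0).fit(X[:1500], temp[:1500])
residual = temp[1500:] - model.predict(X[1500:])     # normal-behaviour residuals

window = 50
rolling_mean = np.convolve(residual, np.ones(window) / window, mode="valid")
mu, sigma = residual[:300].mean(), residual[:300].std()   # healthy reference period
limit = 3 * sigma / np.sqrt(window)                  # Shewhart limits for a window mean
alarms = np.where(np.abs(rolling_mean - mu) > limit)[0]
print("first alarm at test index:", alarms[0] if alarms.size else None)
```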
24

Shen, Fangyao, Yong Peng, Wanzeng Kong, and Guojun Dai. "Multi-Scale Frequency Bands Ensemble Learning for EEG-Based Emotion Recognition." Sensors 21, no. 4 (February 10, 2021): 1262. http://dx.doi.org/10.3390/s21041262.

Abstract:
Emotion recognition has a wide range of potential applications in the real world. Among the emotion recognition data sources, electroencephalography (EEG) signals can record the neural activities across the human brain, providing us a reliable way to recognize the emotional states. Most of existing EEG-based emotion recognition studies directly concatenated features extracted from all EEG frequency bands for emotion classification. This way assumes that all frequency bands share the same importance by default; however, it cannot always obtain the optimal performance. In this paper, we present a novel multi-scale frequency bands ensemble learning (MSFBEL) method to perform emotion recognition from EEG signals. Concretely, we first re-organize all frequency bands into several local scales and one global scale. Then we train a base classifier on each scale. Finally we fuse the results of all scales by designing an adaptive weight learning method which automatically assigns larger weights to more important scales to further improve the performance. The proposed method is validated on two public data sets. For the “SEED IV” data set, MSFBEL achieves average accuracies of 82.75%, 87.87%, and 78.27% on the three sessions under the within-session experimental paradigm. For the “DEAP” data set, it obtains average accuracy of 74.22% for four-category classification under 5-fold cross validation. The experimental results demonstrate that the scale of frequency bands influences the emotion recognition rate, while the global scale that directly concatenating all frequency bands cannot always guarantee to obtain the best emotion recognition performance. Different scales provide complementary information to each other, and the proposed adaptive weight learning method can effectively fuse them to further enhance the performance.
25

Peng, Yinghui, Dongbo Zhang, and Ben Shen. "Microaneurysm detection based on multi-scale match filtering and ensemble learning." Journal of Computer Applications 33, no. 2 (September 24, 2013): 543–46. http://dx.doi.org/10.3724/sp.j.1087.2013.00543.

26

Chen, Pengyun, Xiang Wang, Mingyang Wang, Xiaqing Yang, Shisheng Guo, Chaoshu Jiang, Guolong Cui, and Lingjiang Kong. "Multi-View Real-Time Human Motion Recognition Based on Ensemble Learning." IEEE Sensors Journal 21, no. 18 (September 15, 2021): 20335–47. http://dx.doi.org/10.1109/jsen.2021.3094548.

27

Simm, Jaak, Ildefons Magrans de Abril, and Masashi Sugiyama. "Tree-Based Ensemble Multi-Task Learning Method for Classification and Regression." IEICE Transactions on Information and Systems E97.D, no. 6 (2014): 1677–81. http://dx.doi.org/10.1587/transinf.e97.d.1677.

28

Wang, Feng, Yixuan Li, Fanshu Liao, and Hongyang Yan. "An ensemble learning based prediction strategy for dynamic multi-objective optimization." Applied Soft Computing 96 (November 2020): 106592. http://dx.doi.org/10.1016/j.asoc.2020.106592.

29

Dai, Yusheng, Hui Li, Yekui Qian, Ruipeng Yang, and Min Zheng. "SMASH: A Malware Detection Method Based on Multi-Feature Ensemble Learning." IEEE Access 7 (2019): 112588–97. http://dx.doi.org/10.1109/access.2019.2934012.

30

Xiao, Yawen, Jun Wu, Zongli Lin, and Xiaodong Zhao. "A deep learning-based multi-model ensemble method for cancer prediction." Computer Methods and Programs in Biomedicine 153 (January 2018): 1–9. http://dx.doi.org/10.1016/j.cmpb.2017.09.005.

31

Mahalingam, Sheila, Mohd Faizal Abdollah, and Shahrin bin Sahibuddin. "Designing Ensemble Based Security Framework for M-Learning System." International Journal of Distance Education Technologies 12, no. 2 (April 2014): 66–82. http://dx.doi.org/10.4018/ijdet.2014040104.

Abstract:
Mobile learning has the potential to improve efficiency in the education sector and expand educational opportunities to underserved remote areas in higher learning institutions. However, there are multiple challenges, at different levels, to be faced when introducing and implementing m-learning. Despite the evolution of technology in education, security management remains an under-addressed issue, even though threats against mobile applications and devices are statistically proven to be increasing each day. In order to provide secure guidelines for an m-learning platform, an ensemble-based security framework for mobile learning is proposed and improved. One of the major benefits of the framework is that it integrates security with dependability to provide trustworthiness from both the learners' and the providers' perspectives.
32

Wang, Zhongming, Jiahui Dong, Lianlian Wu, Chong Dai, Jing Wang, Yuqi Wen, Yixin Zhang, Xiaoxi Yang, Song He, and Xiaochen Bo. "DEML: Drug Synergy and Interaction Prediction Using Ensemble-Based Multi-Task Learning." Molecules 28, no. 2 (January 14, 2023): 844. http://dx.doi.org/10.3390/molecules28020844.

Abstract:
Synergistic drug combinations have demonstrated effective therapeutic effects in cancer treatment. Deep learning methods accelerate identification of novel drug combinations by reducing the search space. However, potential adverse drug–drug interactions (DDIs), which may increase the risks for combination therapy, cannot be detected by existing computational synergy prediction methods. We propose DEML, an ensemble-based multi-task neural network, for the simultaneous optimization of five synergy regression prediction tasks, synergy classification, and DDI classification tasks. DEML uses chemical and transcriptomics information as inputs. DEML adapts the novel hybrid ensemble layer structure to construct higher order representation using different perspectives. The task-specific fusion layer of DEML joins representations for each task using a gating mechanism. For the Loewe synergy prediction task, DEML overperforms the state-of-the-art synergy prediction method with an improvement of 7.8% and 13.2% for the root mean squared error and the R2 correlation coefficient. Owing to soft parameter sharing and ensemble learning, DEML alleviates the multi-task learning ‘seesaw effect’ problem and shows no performance loss on other tasks. DEML has a superior ability to predict drug pairs with high confidence and less adverse DDIs. DEML provides a promising way to guideline novel combination therapy strategies for cancer treatment.
33

Ahn, Hanse, Seungwook Son, Heegon Kim, Sungju Lee, Yongwha Chung, and Daihee Park. "EnsemblePigDet: Ensemble Deep Learning for Accurate Pig Detection." Applied Sciences 11, no. 12 (June 16, 2021): 5577. http://dx.doi.org/10.3390/app11125577.

Abstract:
Automated pig monitoring is important for smart pig farms; thus, several deep-learning-based pig monitoring techniques have been proposed recently. In applying automated pig monitoring techniques to real pig farms, however, practical issues such as detecting pigs from overexposed regions, caused by strong sunlight through a window, should be considered. Another practical issue in applying deep-learning-based techniques to a specific pig monitoring application is the annotation cost for pig data. In this study, we propose a method for managing these two practical issues. Using annotated data obtained from training images without overexposed regions, we first generated augmented data to reduce the effect of overexposure. Then, we trained YOLOv4 with both the annotated and augmented data and combined the test results from two YOLOv4 models in a bounding box level to further improve the detection accuracy. We propose accuracy metrics for pig detection in a closed pig pen to evaluate the accuracy of the detection without box-level annotation. Our experimental results with 216,000 “unseen” test data from overexposed regions in the same pig pen show that the proposed ensemble method can significantly improve the detection accuracy of the baseline YOLOv4, from 79.93% to 94.33%, with additional execution time.
34

Choudhury, Amitava, Tanmay Konnur, P. P. Chattopadhyay, and Snehanshu Pal. "Structure prediction of multi-principal element alloys using ensemble learning." Engineering Computations 37, no. 3 (November 21, 2019): 1003–22. http://dx.doi.org/10.1108/ec-04-2019-0151.

Abstract:
Purpose The purpose of this paper, is to predict the various phases and crystal structure from multi-component alloys. Nowadays, the concept and strategies of the development of multi-principal element alloys (MPEAs) significantly increase the count of the potential candidate of alloy systems, which demand proper screening of large number of alloy systems based on the nature of their phase and structure. Experimentally obtained data linking elemental properties and their resulting phases for MPEAs is profused; hence, there is a strong scope for categorization/classification of MPEAs based on structural features of the resultant phase along with distinctive connections between elemental properties and phases. Design/methodology/approach In this paper, several machine-learning algorithms have been used to recognize the underlying data pattern using data sets to design MPEAs and classify them based on structural features of their resultant phase such as single-phase solid solution, amorphous and intermetallic compounds. Further classification of MPEAs having single-phase solid solution is performed based on crystal structure using an ensemble-based machine-learning algorithm known as random-forest algorithm. Findings The model developed by implementing random-forest algorithm has resulted in an accuracy of 91 per cent for phase prediction and 93 per cent for crystal structure prediction for single-phase solid solution class of MPEAs. Five input parameters are used in the prediction model namely, valence electron concentration, difference in the pauling negativeness, atomic size difference, mixing enthalpy and mixing entropy. It has been found that the valence electron concentration is the most important feature with respect to prediction of phases. To avoid overfitting problem, fivefold cross-validation has been performed. To understand the comparative performance, different algorithms such as K-nearest Neighbor, support vector machine, logistic regression, naïve-based approach, decision tree and neural network have been used in the data set. Originality/value In this paper, the authors described the phase selection and crystal structure prediction mechanism in MPEA data set and have achieved better accuracy using machine learning.
35

Zhu, Ruijin, Bo Tang, and Wenhai Wei. "Ensemble Learning-Based Reactive Power Optimization for Distribution Networks." Energies 15, no. 6 (March 8, 2022): 1966. http://dx.doi.org/10.3390/en15061966.

Abstract:
Reactive power optimization of distribution networks is of great significance to improve power quality and reduce power loss. However, traditional methods for reactive power optimization of distribution networks either consume a lot of calculation time or have limited accuracy. In this paper, a novel data-driven-based approach is proposed to simultaneously improve the accuracy and reduce calculation time for reactive power optimization using ensemble learning. Specifically, k-fold cross-validation is used to train multiple sub-models, which are merged to obtain high-quality optimization results through the proposed ensemble framework. The simulation results show that the proposed approach outperforms popular baselines, such as light gradient boosting machine, convolutional neural network, case-based reasoning, and multi-layer perceptron. Moreover, the calculation time is much lower than the traditional heuristic methods, such as the genetic algorithm.
36

Faber, Kamil, Marcin Pietron, and Dominik Zurek. "Ensemble Neuroevolution-Based Approach for Multivariate Time Series Anomaly Detection." Entropy 23, no. 11 (November 6, 2021): 1466. http://dx.doi.org/10.3390/e23111466.

Abstract:
Multivariate time series anomaly detection is a widespread problem in the field of failure prevention. Fast prevention means lower repair costs and losses. The amount of sensors in novel industry systems makes the anomaly detection process quite difficult for humans. Algorithms that automate the process of detecting anomalies are crucial in modern failure prevention systems. Therefore, many machine learning models have been designed to address this problem. Mostly, they are autoencoder-based architectures with some generative adversarial elements. This work shows a framework that incorporates neuroevolution methods to boost the anomaly detection scores of new and already known models. The presented approach adapts evolution strategies for evolving an ensemble model, in which every single model works on a subgroup of data sensors. The next goal of neuroevolution is to optimize the architecture and hyperparameters such as the window size, the number of layers, and the layer depths. The proposed framework shows that it is possible to boost most anomaly detection deep learning models in a reasonable time and a fully automated mode. We ran tests on the SWAT and WADI datasets. To the best of our knowledge, this is the first approach in which an ensemble deep learning anomaly detection model is built in a fully automatic way using a neuroevolution strategy.
37

Enembreck, Fabrício, Cesar Augusto Tacla, and Jean-Paul Barthès. "Learning Negotiation Policies Using Ensemble-Based Drift Detection Techniques." International Journal on Artificial Intelligence Tools 18, no. 02 (April 2009): 173–96. http://dx.doi.org/10.1142/s021821300900010x.

Abstract:
In this work we compare drift detection techniques and we show how they can improve the performance of trade agents in multi-issue bilateral dynamic negotiations. In a dynamic negotiation the utility values and functions of trade agents can change on the fly. Intelligent trade agents must identify and take such drift in the competitors into account changing also the offer policies to improve the global utility throughout the negotiation. However, traditional learning mechanisms disregard possible changes in a competitor's offer/counter-offer policy. In that case, the agent performance may decrease drastically. In our approach, a trade agent has a staff of weighted learner agents used to predict interesting offers. The staff uses the Dynamic Weighted Majority (DWM) algorithm to adapt itself creating, deleting and adapting staff members. The results obtained with the IB3 (Instance-based) learners and IB3-DWM learners show that ensemble methods like DWM are suitable for correctly identifying changes in agent negotiations.
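
For readers unfamiliar with Dynamic Weighted Majority, the heavily simplified sketch below (update period of one, linear SGD experts, binary labels) shows the weight-decay, pruning and expert-addition mechanics; it is illustrative only and is not the IB3-based trade-agent setup evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class SimpleDWM:
    """Stripped-down Dynamic Weighted Majority (binary labels, update period = 1)."""
    def __init__(self, beta=0.5, theta=0.01):
        self.beta, self.theta = beta, theta
        self.experts, self.weights = [], []

    def _add_expert(self, x, y):
        self.experts.append(SGDClassifier().partial_fit([x], [y], classes=[0, 1]))
        self.weights.append(1.0)

    def predict(self, x):
        if not self.experts:
            return 0
        votes = np.zeros(2)
        for expert, weight in zip(self.experts, self.weights):
            votes[int(expert.predict([x])[0])] += weight     # weighted vote
        return int(votes.argmax())

    def update(self, x, y):
        if not self.experts:
            self._add_expert(x, y)
            return
        ensemble_wrong = self.predict(x) != y
        for i, expert in enumerate(self.experts):            # penalise experts that erred
            if int(expert.predict([x])[0]) != y:
                self.weights[i] *= self.beta
        kept = [i for i, w in enumerate(self.weights) if w >= self.theta]
        if kept:                                             # drop very low-weight experts
            self.experts = [self.experts[i] for i in kept]
            self.weights = [self.weights[i] for i in kept]
        top = max(self.weights)
        self.weights = [w / top for w in self.weights]       # renormalise weights
        if ensemble_wrong:
            self._add_expert(x, y)                           # new expert for possible drift
        for expert in self.experts:
            expert.partial_fit([x], [y])                     # incremental training step

rng = np.random.default_rng(0)
dwm, hits = SimpleDWM(), 0
for t in range(1000):
    x = rng.normal(size=3)
    y = int(x[0] > 0) if t < 500 else int(x[0] < 0)          # the concept flips half-way
    hits += int(dwm.predict(x) == y)
    dwm.update(x, y)
print("online accuracy:", hits / 1000)
```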
38

Hu, Donghui, Zhongjin Ma, Xiaotian Zhang, Peipei Li, Dengpan Ye, and Baohong Ling. "The Concept Drift Problem in Android Malware Detection and Its Solution." Security and Communication Networks 2017 (2017): 1–13. http://dx.doi.org/10.1155/2017/4956386.

Abstract:
Currently, the Android platform is the most popular mobile platform in the world and holds a dominant share in the mobile device market. With the popularization of the Android platform, large numbers of Android malware programs have begun to emerge on the Internet, and the sophistication of these programs is developing rapidly. While many studies have already investigated Android malware detection through machine learning and have achieved good results, most of these are based on static data sources and fail to consider the concept drift problem resulting from the rapid growth in the number of Android malware programs and normal Android applications, as well as rapid technological advancement in the Android environment. To address this problem, this work proposes a solution based on an ensemble classifier. This ensemble classifier is based on a streaming data-based Naive Bayes classifier. Android malware has identifiable feature utilization tendencies. On this basis, feature selection algorithm is introduced into the ensemble classifier, and a sliding window is maintained inside the ensemble classifier. Based on the performance of the subclassifiers inside the sliding window, the ensemble classifier makes dynamic adjustments to address the concept drift problem in Android malware detection. The experimental results from the proposed method demonstrate that it can effectively address the concept drift problem in Android malware detection in a streaming data environment.
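
A compact, purely illustrative version of the sliding-window ensemble mechanism this abstract sketches (keep only the classifiers trained on the most recent data chunks, re-weight them by how well they score on the newest chunk, and predict by weighted vote) is shown below on synthetic binary features; the real system works on Android permission/API features with a streaming Naive Bayes base learner and feature selection inside the ensemble, none of which is reproduced here.

```python
import numpy as np
from collections import deque
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
WINDOW = 5                                 # how many recent chunk-classifiers to keep
ensemble = deque(maxlen=WINDOW)            # (classifier, weight) pairs

def weighted_predict(X):
    votes = np.zeros((len(X), 2))
    for clf, weight in ensemble:
        votes[np.arange(len(X)), clf.predict(X)] += weight
    return votes.argmax(axis=1)

for chunk in range(20):                    # toy stream of labelled feature chunks
    X = rng.integers(0, 2, size=(200, 30))
    drifted = chunk >= 10                  # simulated concept drift half-way through
    y = (X[:, 0] | X[:, 1]) if not drifted else (X[:, 2] & X[:, 3])
    if ensemble:
        accuracy = (weighted_predict(X) == y).mean()          # evaluate before updating
        print(f"chunk {chunk:2d} accuracy {accuracy:.2f}")
        # Re-weight existing members by their accuracy on the newest chunk.
        ensemble = deque([(clf, (clf.predict(X) == y).mean()) for clf, _ in ensemble],
                         maxlen=WINDOW)
    ensemble.append((BernoulliNB().fit(X, y), 1.0))           # newest chunk-classifier
```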
39

He, Fang, Wenyu Zhang, and Zhijia Yan. "A novel multi-stage ensemble model for credit scoring based on synthetic sampling and feature transformation." Journal of Intelligent & Fuzzy Systems 42, no. 3 (February 2, 2022): 2127–42. http://dx.doi.org/10.3233/jifs-211467.

Abstract:
Credit scoring has become increasingly important for financial institutions. With the advancement of artificial intelligence, machine learning methods, especially ensemble learning methods, have become increasingly popular for credit scoring. However, the problems of imbalanced data distribution and underutilized feature information have not been well addressed sufficiently. To make the credit scoring model more adaptable to imbalanced datasets, the original model-based synthetic sampling method is extended herein to balance the datasets by generating appropriate minority samples to alleviate class overlap. To enable the credit scoring model to extract inherent correlations from features, a new bagging-based feature transformation method is proposed, which transforms features using a tree-based algorithm and selects features using the chi-square statistic. Furthermore, a two-layer ensemble method that combines the advantages of dynamic ensemble selection and stacking is proposed to improve the classification performance of the proposed multi-stage ensemble model. Finally, four standardized datasets are used to evaluate the performance of the proposed ensemble model using six evaluation metrics. The experimental results confirm that the proposed ensemble model is effective in improving classification performance and is superior to other benchmark models.
40

Lin, Yaojin, Qinghua Hu, Jinghua Liu, Xingquan Zhu, and Xindong Wu. "MULFE: Multi-Label Learning via Label-Specific Feature Space Ensemble." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–24. http://dx.doi.org/10.1145/3451392.

Abstract:
In multi-label learning, label correlations commonly exist in the data. Such correlation not only provides useful information, but also imposes significant challenges for multi-label learning. Recently, label-specific feature embedding has been proposed to explore label-specific features from the training data, and uses features highly customized to the multi-label set for learning. While such feature embedding methods have demonstrated good performance, the creation of the feature embedding space is only based on a single label, without considering label correlations in the data. In this article, we propose to combine multiple label-specific feature spaces, using label correlation, for multi-label learning. The proposed algorithm, multi-label-specific feature space ensemble (MULFE), takes into consideration label-specific features, label correlation, and the weighted ensemble principle to form a learning framework. By conducting clustering analysis on each label’s negative and positive instances, MULFE first creates features customized to each label. After that, MULFE utilizes the label correlation to optimize the margin distribution of the base classifiers which are induced by the related label-specific feature spaces. By combining multiple label-specific features, label correlation based weighting, and ensemble learning, MULFE achieves the maximum margin multi-label classification goal through the underlying optimization framework. Empirical studies on 10 public data sets manifest the effectiveness of MULFE.
41

Chen, Wen, Xinyu Li, Liang Gao, and Weiming Shen. "Improving Computer-Aided Cervical Cells Classification Using Transfer Learning Based Snapshot Ensemble." Applied Sciences 10, no. 20 (October 19, 2020): 7292. http://dx.doi.org/10.3390/app10207292.

Abstract:
Cervical cells classification is a crucial component of computer-aided cervical cancer detection. Fine-grained classification is of great clinical importance when guiding clinical decisions on the diagnoses and treatment, which remains very challenging. Recently, convolutional neural networks (CNN) provide a novel way to classify cervical cells by using automatically learned features. Although the ensemble of CNN models can increase model diversity and potentially boost the classification accuracy, it is a multi-step process, as several CNN models need to be trained respectively and then be selected for ensemble. On the other hand, due to the small training samples, the advantages of powerful CNN models may not be effectively leveraged. In order to address such a challenging issue, this paper proposes a transfer learning based snapshot ensemble (TLSE) method by integrating snapshot ensemble learning with transfer learning in a unified and coordinated way. Snapshot ensemble provides ensemble benefits within a single model training procedure, while transfer learning focuses on the small sample problem in cervical cells classification. Furthermore, a new training strategy is proposed for guaranteeing the combination. The TLSE method is evaluated on a pap-smear dataset called the Herlev dataset and is proved to have some superiorities over the existing methods. It demonstrates that TLSE can improve the accuracy in an ensemble manner with only one single training process for the small sample in fine-grained cervical cells classification.
42

Du, Hui, and Yanning Zhang. "Ensemble Learning-Based Multi-Cues Fusion Object Tracking in Complex Surveillance Environment." Computational Intelligence and Neuroscience 2022 (August 10, 2022): 1–13. http://dx.doi.org/10.1155/2022/9165744.

Abstract:
The vast majority of currently available kernelized correlation filter (KCF)-based trackers simply make use of a single object feature to define the object of interest. It is impossible to avoid tracking instability while working with a wide variety of complex videos. In this piece of research, an ensemble learning-based multi-cues fusion object tracking method is offered as a potential solution to the issue at hand. Using ensemble learning to train multiple kernelized correlation filters with different features in order to obtain the optimal tracking parameters is the primary concept behind the improved KCF-based tracking algorithm. After that, the peak side lobe ratio and the response consistency of two adjacent frames are used to obtain the fusion weight. In addition, an adaptive weighted fusion technique is applied in order to combine the response findings in order to finish the location estimation; finally, the tracking confidence is applied in order to update the tracking model in order to prevent model deterioration. In order to increase the adaptability of the revised algorithm to size-change, a Bayesian estimate model based on scale pyramid has been presented. This model is able to determine the optimal scale of the object, which is the goal of this endeavor. The tracking results of a number of different benchmark movies demonstrate that the algorithm that we have suggested is able to effectively eliminate the effects of interference elements, and that its overall performance is superior to that of the comparison algorithms.
43

Du, Jingyu, Beiji Zou, Pingbo Ouyang, and Rongchang Zhao. "Retinal microaneurysm detection based on transformation splicing and multi-context ensemble learning." Biomedical Signal Processing and Control 74 (April 2022): 103536. http://dx.doi.org/10.1016/j.bspc.2022.103536.

44

Maurya, Ritesh, Vinay Kumar Pathak, and Malay Kishore Dutta. "Deep learning based microscopic cell images classification framework using multi-level ensemble." Computer Methods and Programs in Biomedicine 211 (November 2021): 106445. http://dx.doi.org/10.1016/j.cmpb.2021.106445.

45

Zall, R., and M. R. Keyvanpour. "Semi-Supervised Multi-View Ensemble Learning Based on Extracting Cross-View Correlation." Advances in Electrical and Computer Engineering 16, no. 2 (2016): 111–24. http://dx.doi.org/10.4316/aece.2016.02015.

46

Galicia, A., R. Talavera-Llames, A. Troncoso, I. Koprinska, and F. Martínez-Álvarez. "Multi-step forecasting for big data time series based on ensemble learning." Knowledge-Based Systems 163 (January 2019): 830–41. http://dx.doi.org/10.1016/j.knosys.2018.10.009.

47

Song, Xiangfa, L. C. Jiao, Shuyuan Yang, Xiangrong Zhang, and Fanhua Shang. "Sparse coding and classifier ensemble based multi-instance learning for image categorization." Signal Processing 93, no. 1 (January 2013): 1–11. http://dx.doi.org/10.1016/j.sigpro.2012.07.029.

48

Chan, Felix T. S., Z. X. Wang, S. Patnaik, M. K. Tiwari, X. P. Wang, and J. H. Ruan. "Ensemble-learning based neural networks for novelty detection in multi-class systems." Applied Soft Computing 93 (August 2020): 106396. http://dx.doi.org/10.1016/j.asoc.2020.106396.

49

Wang, Wei, Li Zhang, Mengjun Zhang, and Zhixiong Wang. "Few shot learning for multi-class classification based on nested ensemble DSVM." Ad Hoc Networks 98 (March 2020): 102055. http://dx.doi.org/10.1016/j.adhoc.2019.102055.

50

Bai, Bing, Guiling Li, Senzhang Wang, Zongda Wu, and Wenhe Yan. "Time series classification based on multi-feature dictionary representation and ensemble learning." Expert Systems with Applications 169 (May 2021): 114162. http://dx.doi.org/10.1016/j.eswa.2020.114162.
