Journal articles on the topic 'Incremental learning'


Consult the top 50 journal articles for your research on the topic 'Incremental learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Tsvetkov, V. Ya. "INCREMENTAL LEARNING." Образовательные ресурсы и технологии, no. 4 (2021): 44–52. http://dx.doi.org/10.21777/2500-2112-2021-4-44-52.

2

Sim, Kwee-Bo, Kwang-Seung Heo, Chang-Hyun Park, and Dong-Wook Lee. "The Speaker Identification Using Incremental Learning." Journal of Korean Institute of Intelligent Systems 13, no. 5 (October 1, 2003): 576–81. http://dx.doi.org/10.5391/jkiis.2003.13.5.576.

3

Boukli Hacene, Ghouthi, Vincent Gripon, Nicolas Farrugia, Matthieu Arzel, and Michel Jezequel. "Transfer Incremental Learning Using Data Augmentation." Applied Sciences 8, no. 12 (December 6, 2018): 2512. http://dx.doi.org/10.3390/app8122512.

Abstract:
Deep learning-based methods have reached state-of-the-art performance, relying on a large quantity of available data and computational power. Such methods still remain highly inappropriate when facing a major open machine learning problem, which consists of incrementally learning new classes and examples over time. Combining the outstanding performance of Deep Neural Networks (DNNs) with the flexibility of incremental learning techniques is a promising avenue of research. In this contribution, we introduce Transfer Incremental Learning using Data Augmentation (TILDA). TILDA is based on pre-trained DNNs as feature extractors, robust selection of feature vectors in subspaces using a nearest-class-mean based technique, majority votes, and data augmentation at both the training and the prediction stages. Experiments on challenging vision datasets demonstrate the ability of the proposed method to perform low-complexity incremental learning, while achieving significantly better accuracy than existing incremental counterparts.
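As a rough illustration of the nearest-class-mean component and majority voting described in this abstract, here is a minimal sketch (class and method names are hypothetical, not the TILDA implementation): one running mean per class is kept over pre-trained features, means are updated incrementally, and a query can be classified by a majority vote over several augmented views.

```python
import numpy as np

class IncrementalNCM:
    """Minimal nearest-class-mean classifier with incremental mean updates."""

    def __init__(self):
        self.means = {}   # class label -> running mean of feature vectors
        self.counts = {}  # class label -> number of examples seen

    def add_example(self, feature, label):
        # Update the running mean of the class without storing past features.
        if label not in self.means:
            self.means[label] = np.array(feature, dtype=float)
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.means[label] += (feature - self.means[label]) / self.counts[label]

    def predict(self, feature):
        # Assign the query to the class with the closest mean.
        labels = list(self.means)
        dists = [np.linalg.norm(feature - self.means[c]) for c in labels]
        return labels[int(np.argmin(dists))]

    def predict_with_votes(self, augmented_features):
        # Majority vote over features of several augmented views of one input.
        votes = [self.predict(f) for f in augmented_features]
        return max(set(votes), key=votes.count)
```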
4

Basu Roy Chowdhury, Somnath, and Snigdha Chaturvedi. "Sustaining Fairness via Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6797–805. http://dx.doi.org/10.1609/aaai.v37i6.25833.

Abstract:
Machine learning systems are often deployed for making critical decisions like credit lending, hiring, etc. While making decisions, such systems often encode the user's demographic information (like gender, age) in their intermediate representations. This can lead to decisions that are biased towards specific demographics. Prior work has focused on debiasing intermediate representations to ensure fair decisions. However, these approaches fail to remain fair with changes in the task or demographic distribution. To ensure fairness in the wild, it is important for a system to adapt to such changes as it accesses new data in an incremental fashion. In this work, we propose to address this issue by introducing the problem of learning fair representations in an incremental learning setting. To this end, we present Fairness-aware Incremental Representation Learning (FaIRL), a representation learning system that can sustain fairness while incrementally learning new tasks. FaIRL is able to achieve fairness and learn new tasks by controlling the rate-distortion function of the learned representations. Our empirical evaluations show that FaIRL is able to make fair decisions while achieving high performance on the target task, outperforming several baselines.
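FaIRL's control of the rate-distortion function is only described at a high level here; as a hedged sketch, the following computes the log-det coding-rate measure commonly used in rate-distortion-based representation learning (that FaIRL uses exactly this form is an assumption, not stated in the abstract).

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Approximate bits needed to encode the rows of Z up to distortion eps.

    Z: (n, d) matrix of representations. This is the log-det coding-rate
    measure used in rate-distortion-style representation objectives; whether
    FaIRL uses exactly this form is an assumption here.
    """
    n, d = Z.shape
    gram = np.eye(d) + (d / (n * eps ** 2)) * Z.T @ Z
    return 0.5 * np.linalg.slogdet(gram)[1]
```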
5

Rui, Xue, Ziqiang Li, Yang Cao, Ziyang Li, and Weiguo Song. "DILRS: Domain-Incremental Learning for Semantic Segmentation in Multi-Source Remote Sensing Data." Remote Sensing 15, no. 10 (May 12, 2023): 2541. http://dx.doi.org/10.3390/rs15102541.

Abstract:
With the exponential growth in the speed and volume of remote sensing data, deep learning models are expected to adapt and continually learn over time. Unfortunately, the domain shift between multi-source remote sensing data from various sensors and regions poses a significant challenge. Segmentation models face difficulty in adapting to incremental domains due to catastrophic forgetting, which can be addressed via incremental learning methods. However, current incremental learning methods mainly focus on class-incremental learning, wherein classes belong to the same remote sensing domain, and neglect investigations into incremental domains in remote sensing. To solve this problem, we propose a domain-incremental learning method for semantic segmentation in multi-source remote sensing data. Specifically, our model aims to incrementally learn a new domain while preserving its performance on previous domains without accessing previous domain data. To achieve this, our model has a unique parameter learning structure that reparametrizes domain-agnostic and domain-specific parameters. We use different optimization strategies to adapt to domain shift in incremental domain learning. Additionally, we adopt multi-level knowledge distillation loss to mitigate the impact of label space shift among domains. The experiments demonstrate that our method achieves excellent performance in domain-incremental settings, outperforming existing methods with only a few parameters.
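The multi-level knowledge distillation loss mentioned above can be pictured with a short PyTorch-style sketch (a generic feature-matching form; not the DILRS implementation): intermediate features of the model being adapted are pulled toward those of a frozen copy trained on previous domains.

```python
import torch
import torch.nn.functional as F

def multi_level_distillation_loss(student_feats, teacher_feats):
    """Sum of feature-matching losses over several network levels.

    student_feats, teacher_feats: lists of tensors taken from matching
    intermediate layers of the current model and a frozen previous-domain model.
    """
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        loss = loss + F.mse_loss(s, t.detach())  # teacher is kept fixed
    return loss
```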
6

Shen, Furao, Hui Yu, Youki Kamiya, and Osamu Hasegawa. "An Online Incremental Semi-Supervised Learning Method." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 6 (September 20, 2010): 593–605. http://dx.doi.org/10.20965/jaciii.2010.p0593.

Abstract:
Using labeled data and large amounts of unlabeled data, our proposed online incremental semi-supervised learning method automatically learns the topology of the input data distribution without prior knowledge of the number of nodes or the network structure. Using labeled data, it labels generated nodes and divides a learned topology into substructures corresponding to classes. Node weights used as prototype vectors enable classification. New labeled or unlabeled data are added incrementally to the system during learning. Experimental results for artificial and real-world data show that the method efficiently learns online incremental tasks even in noisy and non-stationary environments.
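A toy sketch, under my own simplifying assumptions, of the kind of online topology learning this abstract describes: a new prototype node is inserted when an input lies far from all existing nodes, otherwise the nearest node is nudged toward it and may inherit a label. The threshold rule below is a placeholder, not the authors' algorithm.

```python
import numpy as np

class OnlineTopologyLearner:
    """Grow a set of prototype nodes from streaming inputs, one sample at a time."""

    def __init__(self, new_node_threshold=1.0, learning_rate=0.1):
        self.nodes = []    # prototype vectors
        self.labels = []   # optional labels (None for unlabeled data)
        self.threshold = new_node_threshold
        self.lr = learning_rate

    def update(self, x, label=None):
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x.copy())
            self.labels.append(label)
            return
        dists = [np.linalg.norm(x - n) for n in self.nodes]
        i = int(np.argmin(dists))
        if dists[i] > self.threshold:
            # Input is novel: insert it as a new node.
            self.nodes.append(x.copy())
            self.labels.append(label)
        else:
            # Input is familiar: move the winning node toward it.
            self.nodes[i] += self.lr * (x - self.nodes[i])
            if label is not None:
                self.labels[i] = label
```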
7

Madhusudhanan, Sathya, Suresh Jaganathan, and Jayashree L S. "Incremental Learning for Classification of Unstructured Data Using Extreme Learning Machine." Algorithms 11, no. 10 (October 17, 2018): 158. http://dx.doi.org/10.3390/a11100158.

Abstract:
Unstructured data are irregular information with no predefined data model. Streaming data, which constantly arrive over time, are unstructured, and classifying these data is a tedious task as they lack class labels and accumulate over time. As the data keep growing, it becomes difficult to train and create a model from scratch each time. Incremental learning, a self-adaptive approach, uses the previously learned model, then learns and accommodates new information from newly arrived data to provide an updated model, which avoids retraining. The incrementally learned knowledge helps to classify the unstructured data. In this paper, we propose a framework, CUIL (Classification of Unstructured data using Incremental Learning), which clusters the metadata, assigns a label to each cluster, and then incrementally creates a model using the Extreme Learning Machine (ELM), a feed-forward neural network, for each batch of data that arrives. The proposed framework trains the batches separately, significantly reducing memory use and training time, and is tested with metadata created for standard image datasets such as MNIST, STL-10, CIFAR-10, Caltech101, and Caltech256. Based on the tabulated results, our proposed work shows greater accuracy and efficiency.
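The Extreme Learning Machine at the core of CUIL trains a single hidden layer with fixed random input weights and solves the output weights in closed form. The sketch below shows only that basic ELM step (CUIL's batch-wise incremental merging is not reproduced).

```python
import numpy as np

def train_elm(X, Y, n_hidden=128, seed=0):
    """Basic ELM: random hidden layer, output weights solved by least squares.

    X: (n_samples, n_features); Y: (n_samples, n_classes) one-hot targets.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random input weights
    b = rng.standard_normal(n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                      # output weights, closed form
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```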
8

CHALUP, STEPHAN K. "INCREMENTAL LEARNING IN BIOLOGICAL AND MACHINE LEARNING SYSTEMS." International Journal of Neural Systems 12, no. 06 (December 2002): 447–65. http://dx.doi.org/10.1142/s0129065702001308.

Abstract:
Incremental learning concepts are reviewed in machine learning and neurobiology. They are identified in evolution, neurodevelopment and learning. A timeline of qualitative axon, neuron and synapse development summarizes the review on neurodevelopment. A discussion of experimental results on data incremental learning with recurrent artificial neural networks reveals that incremental learning often seems to be more efficient or powerful than standard learning but can produce unexpected side effects. A characterization of incremental learning is proposed which takes the elaborated biological and machine learning concepts into account.
9

Fu, LiMin, Hui-Huang Hsu, and J. C. Principe. "Incremental backpropagation learning networks." IEEE Transactions on Neural Networks 7, no. 3 (May 1996): 757–61. http://dx.doi.org/10.1109/72.501732.

10

Han, Zhi, De-Yu Meng, Zong-Ben Xu, and Nan-Nan Gu. "Incremental Alignment Manifold Learning." Journal of Computer Science and Technology 26, no. 1 (January 2011): 153–65. http://dx.doi.org/10.1007/s11390-011-9422-9.

11

Li, Jingmei, Di Xue, Weifei Wu, and Jiaxiang Wang. "Incremental Learning for Malware Classification in Small Datasets." Security and Communication Networks 2020 (February 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/6309243.

Abstract:
Information security is an important research area. As a very special yet important case, malware classification plays an important role in information security. In the real world, malware datasets are open-ended and dynamic, and new malware samples belonging to old classes and new classes arrive continuously. This requires the malware classification method to support incremental learning, which can efficiently learn the new knowledge. However, existing works mainly focus on feature engineering with machine learning as a tool. To solve the problem, we present an incremental malware classification framework, named “IMC,” which consists of opcode sequence extraction, selection, and an incremental learning method. We develop an incremental learning method based on a multiclass support vector machine (SVM) as the core component of IMC, named “IMCSVM,” which can incrementally improve its classification ability by learning new malware samples. In IMC, IMCSVM adds new classification planes (if new samples belong to a new class) and updates all old classification planes for new malware samples. As a result, IMC can improve the classification quality of known malware classes by minimizing the prediction error and transfer the old model with known knowledge to classify unknown malware classes. We apply the incremental learning method to malware classification, and the experimental results demonstrate the advantages and effectiveness of IMC.
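As a loose analogue of the incremental multiclass SVM described above (not the IMCSVM algorithm itself), a one-vs-rest linear classifier trained with hinge-loss SGD can both update existing class hyperplanes on new samples and add a hyperplane when a previously unseen class appears:

```python
import numpy as np

class IncrementalLinearSVM:
    """One-vs-rest linear SVM trained by hinge-loss SGD; new classes add new hyperplanes."""

    def __init__(self, n_features, lr=0.01, reg=1e-4):
        self.w = {}                     # class label -> weight vector
        self.n_features = n_features
        self.lr, self.reg = lr, reg

    def partial_fit(self, X, y):
        for label in set(y):
            if label not in self.w:     # new class: add a new classification plane
                self.w[label] = np.zeros(self.n_features)
        for xi, yi in zip(X, y):
            for label, w in self.w.items():
                target = 1.0 if label == yi else -1.0
                margin = target * (w @ xi)
                grad = self.reg * w - (target * xi if margin < 1 else 0.0)
                self.w[label] = w - self.lr * grad

    def predict(self, X):
        labels = list(self.w)
        scores = np.stack([X @ self.w[c] for c in labels], axis=1)
        return [labels[i] for i in np.argmax(scores, axis=1)]
```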
12

Kawewong, Aram, Rapeeporn Pimup, and Osamu Hasegawa. "Incremental Learning Framework for Indoor Scene Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 496–502. http://dx.doi.org/10.1609/aaai.v27i1.8584.

Abstract:
This paper presents a novel framework for online incremental place recognition in an indoor environment. The framework addresses the scenario in which scene images are gradually obtained during long-term operation in the real-world indoor environment. Multiple users may interact with the classification system and confirm either current or past prediction results; the system then immediately updates itself to improve the classification system. This framework is based on the proposed n-value self-organizing and incremental neural network (n-SOINN), which has been derived by modifying the original SOINN to be appropriate for use in scene recognition. The evaluation was performed on the standard MIT 67-category indoor scene dataset and shows that the proposed framework achieves the same accuracy as that of the state-of-the-art offline method, while the computation time of the proposed framework is significantly faster and fully incremental update is allowed. Additionally, a small extra set of training samples is incrementally given to the system to simulate the incremental learning situation. The result shows that the proposed framework can leverage such additional samples and achieve the state-of-the-art result.
13

Roy, Kaushik, Christian Simon, Peyman Moghadam, and Mehrtash Harandi. "CL3: Generalization of Contrastive Loss for Lifelong Learning." Journal of Imaging 9, no. 12 (November 23, 2023): 259. http://dx.doi.org/10.3390/jimaging9120259.

Abstract:
Lifelong learning portrays learning gradually in nonstationary environments and emulates the process of human learning, which is efficient, robust, and able to learn new concepts incrementally from sequential experience. To equip neural networks with such a capability, one needs to overcome the problem of catastrophic forgetting, the phenomenon of forgetting past knowledge while learning new concepts. In this work, we propose a novel knowledge distillation algorithm that makes use of contrastive learning to help a neural network to preserve its past knowledge while learning from a series of tasks. Our proposed generalized form of contrastive distillation strategy tackles catastrophic forgetting of old knowledge, and minimizes semantic drift by maintaining a similar embedding space, as well as ensures compactness in feature distribution to accommodate novel tasks in a current model. Our comprehensive study shows that our method achieves improved performances in the challenging class-incremental, task-incremental, and domain-incremental learning for supervised scenarios.
14

Ke, Hai-Feng, Cheng-Bo Lu, Xiao-Bo Li, Gao-Yan Zhang, Ying Mei, and Xue-Wen Shen. "An Incremental Optimal Weight Learning Machine of Single-Layer Neural Networks." Scientific Programming 2018 (2018): 1–7. http://dx.doi.org/10.1155/2018/3732120.

Abstract:
An optimal weight learning machine with growth of hidden nodes and incremental learning (OWLM-GHNIL) is given by adding random hidden nodes to single hidden layer feedforward networks (SLFNs) one by one or group by group. During the growth of the networks, input weights and output weights are updated incrementally, which can implement conventional optimal weight learning machine (OWLM) efficiently. The simulation results and statistical tests also demonstrate that the OWLM-GHNIL has better generalization performance than other incremental type algorithms.
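A bare-bones sketch of growing a single-hidden-layer network by adding random hidden nodes: here the output weights are simply re-solved by least squares after each growth step, whereas OWLM-GHNIL updates them incrementally, so this is an illustration of the idea rather than the algorithm.

```python
import numpy as np

class GrowingSLFN:
    """Single-hidden-layer feedforward network that grows by adding random hidden nodes."""

    def __init__(self, n_features, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = np.empty((n_features, 0))   # input weights, one column per hidden node
        self.b = np.empty(0)                 # hidden biases
        self.beta = None                     # output weights

    def add_nodes(self, X, Y, n_new):
        # Append n_new random hidden nodes, then re-fit output weights on (X, Y).
        self.W = np.hstack([self.W, self.rng.standard_normal((X.shape[1], n_new))])
        self.b = np.concatenate([self.b, self.rng.standard_normal(n_new)])
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ Y

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```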
15

Bukovsky, Ivo. "Learning Entropy: Multiscale Measure for Incremental Learning." Entropy 15, no. 12 (September 27, 2013): 4159–87. http://dx.doi.org/10.3390/e15104159.

16

Siddiqui, Zahid Ali, and Unsang Park. "Progressive Convolutional Neural Network for Incremental Learning." Electronics 10, no. 16 (August 5, 2021): 1879. http://dx.doi.org/10.3390/electronics10161879.

Abstract:
In this paper, we present a novel incremental learning technique to solve the catastrophic forgetting problem observed in CNN architectures. We use a progressive deep neural network to incrementally learn new classes while keeping the performance of the network unchanged on old classes. The incremental training requires us to train the network only for new classes and fine-tune the final fully connected layer, without needing to train the entire network again, which significantly reduces the training time. We evaluate the proposed architecture extensively on the image classification task using the Fashion MNIST, CIFAR-100 and ImageNet-1000 datasets. Experimental results show that the proposed network architecture not only alleviates catastrophic forgetting but also leverages prior knowledge via lateral connections to previously learned classes and their features. In addition, the proposed scheme is easily scalable and does not require structural changes to the network trained on the old task, which are highly desirable properties in embedded systems.
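The "train only for new classes and fine-tune the final fully connected layer" idea can be sketched in PyTorch as follows (generic backbone and layer names, not the authors' progressive architecture): the shared feature extractor is frozen, old output weights are copied, and only the extended head is trained on the new classes.

```python
import torch
import torch.nn as nn

def extend_classifier(backbone: nn.Module, old_head: nn.Linear, n_new_classes: int):
    """Freeze the shared backbone, keep old-class weights, and add new output units."""
    for p in backbone.parameters():
        p.requires_grad = False                       # old feature extractor stays fixed

    new_head = nn.Linear(old_head.in_features, old_head.out_features + n_new_classes)
    with torch.no_grad():
        new_head.weight[: old_head.out_features] = old_head.weight
        new_head.bias[: old_head.out_features] = old_head.bias
    return new_head                                   # only this layer is trained on new data

# Sketch of use: optimizer = torch.optim.SGD(new_head.parameters(), lr=0.01)
```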
17

Wu, Yanan, Tengfei Liang, Songhe Feng, Yi Jin, Gengyu Lyu, Haojun Fei, and Yang Wang. "MetaZSCIL: A Meta-Learning Approach for Generalized Zero-Shot Class Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10408–16. http://dx.doi.org/10.1609/aaai.v37i9.26238.

Abstract:
Generalized zero-shot learning (GZSL) aims to recognize samples whose categories may not have been seen at training time. Standard GZSL cannot handle the dynamic addition of new seen and unseen classes. In order to address this limitation, some recent attempts have been made to develop continual GZSL methods. However, these methods require end-users to continuously collect and annotate numerous seen class samples, which is unrealistic and hampers applicability in the real world. Accordingly, in this paper, we propose a more practical and challenging setting named Generalized Zero-Shot Class Incremental Learning (CI-GZSL). Our setting aims to incrementally learn unseen classes without any training samples, while recognizing all classes previously encountered. We further propose a bi-level meta-learning based method called MetaZSCIL to directly optimize the network to learn how to incrementally learn. Specifically, we sample sequential tasks from seen classes during offline training to simulate the incremental learning process. For each task, the model is learned using a meta-objective such that it is capable of fast adaptation without forgetting. Note that our optimization can be flexibly equipped with most existing generative methods to tackle CI-GZSL. This work introduces a feature generative framework that leverages visual feature distribution alignment to produce replayed samples of previously seen classes to reduce catastrophic forgetting. Extensive experiments conducted on five widely used benchmarks demonstrate the superiority of our proposed method.
18

Huang, Libo, Yan Zeng, Chuanguang Yang, Zhulin An, Boyu Diao, and Yongjun Xu. "eTag: Class-Incremental Learning via Embedding Distillation and Task-Oriented Generation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12591–99. http://dx.doi.org/10.1609/aaai.v38i11.29153.

Abstract:
Class incremental learning (CIL) aims to solve the notorious forgetting problem, which refers to the fact that once the network is updated on a new task, its performance on previously-learned tasks degenerates catastrophically. Most successful CIL methods store exemplars (samples of learned tasks) to train a feature extractor incrementally, or store prototypes (features of learned tasks) to estimate the incremental feature distribution. However, the stored exemplars would violate the data privacy concerns, while the fixed prototypes might not reasonably be consistent with the incremental feature distribution, hindering the exploration of real-world CIL applications. In this paper, we propose a data-free CIL method with embedding distillation and Task-oriented generation (eTag), which requires neither exemplar nor prototype. Embedding distillation prevents the feature extractor from forgetting by distilling the outputs from the networks' intermediate blocks. Task-oriented generation enables a lightweight generator to produce dynamic features, fitting the needs of the top incremental classifier. Experimental results confirm that the proposed eTag considerably outperforms state-of-the-art methods on several benchmark datasets.
19

Shams, Amin, and Touraj Banirostam. "Incremental Learning for Spam Detection." IJARCCE 6, no. 1 (January 30, 2017): 1–6. http://dx.doi.org/10.17148/ijarcce.2017.6101.

20

Zajdel, Roman. "Epoch-incremental reinforcement learning algorithms." International Journal of Applied Mathematics and Computer Science 23, no. 3 (September 1, 2013): 623–35. http://dx.doi.org/10.2478/amcs-2013-0047.

Abstract:
In this article, a new class of epoch-incremental reinforcement learning algorithms is proposed. In the incremental mode, the fundamental TD(0) or TD(λ) algorithm is performed and an environment model is created. In the epoch mode, on the basis of the environment model, the distances of past-active states to the terminal state are computed. These distances and the reinforcement signal of the terminal state are used to improve the agent's policy.
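A compact sketch of the two modes, under simplifying assumptions of my own (tabular states, a predecessor model recorded from experience): TD(0) runs in the incremental mode, and the epoch mode obtains distances to the terminal state from the learned model by breadth-first search.

```python
from collections import deque

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """Incremental mode: one TD(0) update for an observed transition (s, r, s_next)."""
    v_s, v_next = V.get(s, 0.0), V.get(s_next, 0.0)
    V[s] = v_s + alpha * (r + gamma * v_next - v_s)

def distances_to_terminal(predecessors, terminal):
    """Epoch mode: breadth-first search over the learned environment model.

    predecessors: dict mapping a state to the set of states observed to lead into it.
    Returns the number of steps from each past-active state to the terminal state.
    """
    dist, queue = {terminal: 0}, deque([terminal])
    while queue:
        s = queue.popleft()
        for prev in predecessors.get(s, ()):
            if prev not in dist:
                dist[prev] = dist[s] + 1
                queue.append(prev)
    return dist
```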
21

Rodriguez-Sanchez, Fernando, Pedro Larranaga, and Concha Bielza. "Incremental Learning of Latent Forests." IEEE Access 8 (2020): 224420–32. http://dx.doi.org/10.1109/access.2020.3027064.

22

Kang, Dongmin, Yeonsik Jo, Yeongwoo Nam, and Jonghyun Choi. "Confidence Calibration for Incremental Learning." IEEE Access 8 (2020): 126648–60. http://dx.doi.org/10.1109/access.2020.3007234.

23

Rosenfeld, Amir, and John K. Tsotsos. "Incremental Learning Through Deep Adaptation." IEEE Transactions on Pattern Analysis and Machine Intelligence 42, no. 3 (March 1, 2020): 651–63. http://dx.doi.org/10.1109/tpami.2018.2884462.

24

Ratsaby, J. "Incremental learning with sample queries." IEEE Transactions on Pattern Analysis and Machine Intelligence 20, no. 8 (1998): 883–88. http://dx.doi.org/10.1109/34.709619.

25

Hu, Linmei, Chao Shao, Juanzi Li, and Heng Ji. "Incremental learning from news events." Knowledge-Based Systems 89 (November 2015): 618–26. http://dx.doi.org/10.1016/j.knosys.2015.09.007.

26

Barto, Andrew G. "Learning and incremental dynamic programming." Behavioral and Brain Sciences 14, no. 1 (March 1991): 94–95. http://dx.doi.org/10.1017/s0140525x00065456.

27

Sugiyama, M., and H. Ogawa. "Properties of incremental projection learning." Neural Networks 14, no. 1 (January 2001): 67–78. http://dx.doi.org/10.1016/s0893-6080(00)00079-4.

28

Fukushima, Kunihiko. "Neocognitron capable of incremental learning." Neural Networks 17, no. 1 (January 2004): 37–46. http://dx.doi.org/10.1016/s0893-6080(03)00078-9.

29

Impoco, G., and L. Tuminello. "Incremental learning to segment micrographs." Computer Vision and Image Understanding 140 (November 2015): 144–52. http://dx.doi.org/10.1016/j.cviu.2015.03.007.

30

Fyfe, C. "Structured population-based incremental learning." Soft Computing - A Fusion of Foundations, Methodologies and Applications 2, no. 4 (February 26, 1999): 191–98. http://dx.doi.org/10.1007/s005000050052.

31

Huang, Guang-Bin, and Lei Chen. "Convex incremental extreme learning machine." Neurocomputing 70, no. 16-18 (October 2007): 3056–62. http://dx.doi.org/10.1016/j.neucom.2007.02.009.

32

Jain, Sanjay, Steffen Lange, Samuel E. Moelius, and Sandra Zilles. "Incremental learning with temporary memory." Theoretical Computer Science 411, no. 29-30 (June 2010): 2757–72. http://dx.doi.org/10.1016/j.tcs.2010.04.010.

33

He, Haibo, Sheng Chen, Kang Li, and Xin Xu. "Incremental Learning From Stream Data." IEEE Transactions on Neural Networks 22, no. 12 (December 2011): 1901–14. http://dx.doi.org/10.1109/tnn.2011.2171713.

34

Minku, Fernanda Li, Hirotaka Inoue, and Xin Yao. "Negative correlation in incremental learning." Natural Computing 8, no. 2 (November 9, 2007): 289–320. http://dx.doi.org/10.1007/s11047-007-9063-7.

35

Lange, Steffen, and Thomas Zeugmann. "Incremental Learning from Positive Data." Journal of Computer and System Sciences 53, no. 1 (August 1996): 88–103. http://dx.doi.org/10.1006/jcss.1996.0051.

36

Peng, Jing, and Ronald J. Williams. "Incremental multi-step Q-learning." Machine Learning 22, no. 1-3 (1996): 283–90. http://dx.doi.org/10.1007/bf00114731.

37

Schlimmer, Jeffrey C., and Richard H. Granger. "Incremental learning from noisy data." Machine Learning 1, no. 3 (September 1986): 317–54. http://dx.doi.org/10.1007/bf00116895.

38

Režnáková, Marta, Lukas Tencer, and Mohamed Cheriet. "Incremental Similarity for real-time on-line incremental learning systems." Pattern Recognition Letters 74 (April 2016): 61–67. http://dx.doi.org/10.1016/j.patrec.2016.01.010.

39

Liu, Weiyi, Kun Yue, Mingliang Yue, Zidu Yin, and Binbin Zhang. "A Bayesian Network-Based Approach for Incremental Learning of Uncertain Knowledge." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 26, no. 01 (January 31, 2018): 87–108. http://dx.doi.org/10.1142/s021848851850006x.

Abstract:
Bayesian network (BN) is the well-accepted framework for representing and inferring uncertain knowledge. To learn the BN-based uncertain knowledge incrementally in response to the new data is useful for analysis, prediction, decision making, etc. In this paper, we propose an approach for incremental learning of BNs by focusing on the incremental revision of BN’s graphical structures. First, we give the concept of influence degree to describe the influence of new data on the existing BN by measuring the variation of BN’s probability parameters w.r.t. the likelihood of the new data. Then, for the nodes ordered decreasingly by their influence degrees, we give the scoring-based algorithm for revising BN’s subgraphs iteratively by hill-climbing search for reversing, adding or deleting edges. In the incremental revision, we emphasize the preservation of probabilistic conditional independencies implied in the BN based on the concept and properties of Markov equivalence. Experimental results show the correctness, precision and efficiency of our approach.
40

Alzahrani, Mohammad Eid. "Employing Incremental Learning for the Detection of Multiclass New Malware Variants." Indian Journal of Science and Technology 17, no. 10 (March 1, 2024): 941–48. http://dx.doi.org/10.17485/ijst/v17i10.2862.

Abstract:
Background/Objectives: The study has two main objectives. The first is to reliably identify and categorize malware variants to maintain the security of computer systems. Malware poses a continuous threat to digital information and system integrity, hence the need for effective detection tools. The second objective is to propose a new incremental learning method, designed to adapt over time by continually incorporating new data, which is crucial for identifying and managing multiclass malware variants. Methods: The study uses an incremental learning technique as the basis of the approach: a type of machine learning whereby a system retains previous knowledge and builds upon the information from newly acquired data. This method is particularly suitable for tackling the mutating character of malware threats. The researchers used several real-world malware datasets to evaluate the approach, providing a realistic test environment. Findings: Utilizing 6 different datasets, which included 158,101 benign and malicious instances, the method demonstrated a high attack detection accuracy of 99.34%. Moreover, the study succeeded in identifying a new category of malware variants and distinguishing between 15 different attack categories. These results underscore the effectiveness of the proposed incremental learning method in a real-world scenario. Novelty: This research is distinguished by its use of a tailored incremental learning technique for a dynamic malware threat environment. Traditional machine learning methods do not adapt well to new threats, whereas the technique put forward in this paper enables continuous learning that can be adjusted to match different types of malicious software as they evolve. This ability to evolve and adapt is an important addition to current cybersecurity practices for malware identification and classification. Keywords: Cybersecurity, Malware Detection, Incremental Learning
41

Tang, Ke, Minlong Lin, Fernanda L. Minku, and Xin Yao. "Selective negative correlation learning approach to incremental learning." Neurocomputing 72, no. 13-15 (August 2009): 2796–805. http://dx.doi.org/10.1016/j.neucom.2008.09.022.

42

Cao, Jian, Shi Yu Sun, and Xiu Sheng Duan. "Optimal Boundary SVM Incremental Learning Algorithm." Applied Mechanics and Materials 347-350 (August 2013): 2957–62. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2957.

Abstract:
Support vectors (SVs) cannot be selected completely in SVM incremental learning, so the incremental learning process cannot be sustained. In order to solve this problem, the article proposes an optimal boundary SVM incremental learning algorithm. Based on an in-depth analysis of the trend of the classification surface and making use of the KKT conditions, the boundary vectors, which include the support vectors, are selected to participate in SVM incremental learning. The experiments show that the algorithm completely covers the support vectors and obtains results identical to the classic support vector machine, while saving a great deal of time. It can therefore provide the conditions for future large-sample classification and sustainable incremental learning.
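One way to picture the boundary-vector selection argued for above (a hedged scikit-learn sketch, not the paper's algorithm; binary classification assumed): keep the old samples that lie on or inside the current margin, since by the KKT conditions these are the candidates to become support vectors, and retrain on them together with the new batch.

```python
import numpy as np
from sklearn.svm import SVC

def incremental_svm_step(model, X_old, y_old, X_new, y_new, margin=1.0):
    """Retrain an SVM on boundary samples of the old data plus the new batch.

    model: a fitted binary sklearn.svm.SVC; the margin threshold is a heuristic here.
    """
    scores = model.decision_function(X_old)        # signed distance to the hyperplane
    keep = np.abs(scores) <= margin                # samples on or inside the margin
    X_keep = np.vstack([X_old[keep], X_new])
    y_keep = np.concatenate([y_old[keep], y_new])
    new_model = SVC(kernel="linear", C=model.C)
    new_model.fit(X_keep, y_keep)
    return new_model, X_keep, y_keep
```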
43

Dong, Na, Yongqiang Zhang, Mingli Ding, and Gim Hee Lee. "Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 543–51. http://dx.doi.org/10.1609/aaai.v37i1.25129.

Abstract:
Incremental few-shot object detection aims at detecting novel classes without forgetting knowledge of the base classes, with only a few labeled training data from the novel classes. Most related prior works are on incremental object detection and rely on the availability of abundant training samples per novel class, which substantially limits scalability to real-world settings where novel data can be scarce. In this paper, we propose Incremental-DETR, which performs incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector. To alleviate severe over-fitting with few novel class data, we first fine-tune the class-specific components of DETR with self-supervision from additional object proposals generated using Selective Search as pseudo labels. We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without forgetting the base classes. Extensive experiments conducted on standard incremental object detection and incremental few-shot object detection settings show that our approach significantly outperforms state-of-the-art methods by a large margin. Our source code is available at https://github.com/dongnana777/Incremental-DETR.
44

Seenivasan, Lalithkumar, Mobarakol Islam, Chi-Fai Ng, Chwee Ming Lim, and Hongliang Ren. "Biomimetic Incremental Domain Generalization with a Graph Network for Surgical Scene Understanding." Biomimetics 7, no. 2 (May 28, 2022): 68. http://dx.doi.org/10.3390/biomimetics7020068.

Abstract:
Surgical scene understanding is a key barrier for situation-aware robotic surgeries and the associated surgical training. With the presence of domain shifts and the inclusion of new instruments and tissues, learning domain generalization (DG) plays a pivotal role in expanding instrument–tissue interaction detection to new domains in robotic surgery. Mimicking the ability of humans to incrementally learn new skills without forgetting old skills in a similar domain, we employ incremental DG on scene graphs to predict instrument–tissue interaction during robot-assisted surgery. To achieve incremental DG, we incorporate incremental learning (IL) to accommodate new instruments and knowledge-distillation-based student–teacher learning to tackle domain shifts in the new domain. Additionally, we designed an enhanced curriculum by smoothing (E-CBS) based on Laplacian of Gaussian (LoG) and Gaussian kernels, and integrated it with the feature extraction network (FEN) and graph network to improve instrument–tissue interaction performance. Furthermore, the logits of the FEN and the graph network are normalized by temperature normalization (T-Norm), and its effect on model calibration was studied. Quantitative and qualitative analysis showed that our incrementally domain-generalized interaction detection model was able to adapt to the target domain (transoral robotic surgery) while retaining its performance in the source domain (nephrectomy surgery). Additionally, the graph model enhanced by E-CBS and T-Norm outperformed other state-of-the-art models, and the incremental DG technique performed better than naive domain adaptation and DG techniques.
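The temperature normalization (T-Norm) of the logits mentioned above can be read as scaling logits by a temperature before the softmax; a generic sketch follows (the exact placement and form used by the authors is not reproduced here).

```python
import torch
import torch.nn.functional as F

def temperature_normalized_probs(logits: torch.Tensor, temperature: float = 2.0):
    """Soften (or sharpen) predicted probabilities by scaling logits with a temperature."""
    return F.softmax(logits / temperature, dim=-1)
```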
45

Pham, D. T., and S. S. Dimov. "An algorithm for incremental inductive learning." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 211, no. 3 (March 1, 1997): 239–49. http://dx.doi.org/10.1243/0954405971516239.

Abstract:
This paper describes RULES-4, a new algorithm for incremental inductive learning from the ‘RULES’ family of automatic rule extraction systems. This algorithm is the first incremental learning system in the family. It has a number of advantages over well-known non-incremental schemes. It allows the stored knowledge to be updated and refined rapidly when new examples are available. The induction of rules for a process planning expert system is used to illustrate the operation of RULES-4 and a bench-mark pattern classification problem employed to test the algorithm. The results obtained have shown that the accuracy of the extracted rule sets is commensurate with the accuracy of the rule set obtained using a non-incremental algorithm.
46

Abramova, E. S., A. A. Orlov, and K. V. Makarov. "Possibilities of Using Neural Network Incremental Learning." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 21, no. 4 (November 2021): 19–27. http://dx.doi.org/10.14529/ctcr210402.

Abstract:
The present time is characterized by unprecedented growth in the volume of information flows. Information processing underlies the solution of many practical problems. The range of intelligent information system applications is extremely extensive: from managing continuous technological processes in real time to solving commercial and administrative problems. Intelligent information systems should have, as a key property, the ability to quickly process dynamically incoming data in real time. Intelligent information systems should also extract knowledge from previously solved problems. Incremental neural network training has become one of the topical issues in machine learning in recent years. Compared to traditional machine learning, incremental learning allows assimilating new knowledge that arrives gradually while preserving old knowledge gained from previous tasks. Such training should be useful in intelligent systems where data flow dynamically. Aim. Consider the concepts, problems, and methods of incremental neural network training, and assess the possibility of using it in the development of intelligent systems. Materials and methods. The idea of incremental learning, drawn from an analysis of how a person learns throughout life, is considered. The terms used in the literature to describe incremental learning are presented. The obstacles that arise in achieving the goal of incremental learning are described. A description of three scenarios of incremental learning, among which class-incremental learning is distinguished, is given. An analysis of the methods of incremental learning, grouped into families of techniques by how they solve the catastrophic forgetting problem, is given. The possibilities offered by incremental learning versus traditional machine learning are presented. Results. The article attempts to assess the current state and the possibility of using incremental neural network learning, and to identify differences from traditional machine learning. Conclusion. Incremental learning is useful for future intelligent systems, as it allows existing knowledge to be maintained while it is updated, avoids learning from scratch, and dynamically adjusts the model's ability to learn according to newly available data.
47

Luo, Yong, Liancheng Yin, Wenchao Bai, and Keming Mao. "An Appraisal of Incremental Learning Methods." Entropy 22, no. 11 (October 22, 2020): 1190. http://dx.doi.org/10.3390/e22111190.

Abstract:
As a special case of machine learning, incremental learning can acquire useful knowledge from incoming data continuously while it does not need to access the original data. It is expected to have the ability of memorization and it is regarded as one of the ultimate goals of artificial intelligence technology. However, incremental learning remains a long term challenge. Modern deep neural network models achieve outstanding performance on stationary data distributions with batch training. This restriction leads to catastrophic forgetting for incremental learning scenarios since the distribution of incoming data is unknown and has a highly different probability from the old data. Therefore, a model must be both plastic to acquire new knowledge and stable to consolidate existing knowledge. This review aims to draw a systematic review of the state of the art of incremental learning methods. Published reports are selected from Web of Science, IEEEXplore, and DBLP databases up to May 2020. Each paper is reviewed according to the types: architectural strategy, regularization strategy and rehearsal and pseudo-rehearsal strategy. We compare and discuss different methods. Moreover, the development trend and research focus are given. It is concluded that incremental learning is still a hot research area and will be for a long period. More attention should be paid to the exploration of both biological systems and computational models.
48

Ade, R. R., and P. R. Deshmukh. "Methods for Incremental Learning: A Survey." International Journal of Data Mining & Knowledge Management Process 3, no. 4 (July 31, 2013): 119–25. http://dx.doi.org/10.5121/ijdkp.2013.3408.

49

Zhou, Yang, Yunbai Qin, F. Jiang, Kunkun Zheng, and Mingcan Cen. "Incremental Learning Based on Angle Constraints." Journal of Physics: Conference Series 1880, no. 1 (April 1, 2021): 012030. http://dx.doi.org/10.1088/1742-6596/1880/1/012030.

50

Esposito, Floriana, Stefano Ferilli, Nicola Fanizzi, Teresa M. A. Basile, and Nicola Di Mauro. "Incremental multistrategy learning for document processing." Applied Artificial Intelligence 17, no. 8-9 (September 2003): 859–83. http://dx.doi.org/10.1080/713827255.
