To view other types of publications on this topic, follow the link: Incremental neural network.

Journal articles on the topic "Incremental neural network"

Consult the top 50 journal articles for your research on the topic "Incremental neural network."

Next to every entry in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its online abstract whenever these are available in the publication's metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Yang, Shuyuan, Min Wang, and Licheng Jiao. "Incremental constructive ridgelet neural network." Neurocomputing 72, no. 1-3 (2008): 367–77. http://dx.doi.org/10.1016/j.neucom.2008.01.001.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Siddiqui, Zahid Ali, and Unsang Park. "Progressive Convolutional Neural Network for Incremental Learning." Electronics 10, no. 16 (2021): 1879. http://dx.doi.org/10.3390/electronics10161879.

Full text of the source
Abstract:
In this paper, we present a novel incremental learning technique to solve the catastrophic forgetting problem observed in CNN architectures. We used a progressive deep neural network to incrementally learn new classes while keeping the performance of the network unchanged on old classes. The incremental training requires us to train the network only for the new classes and fine-tune the final fully connected layer, without needing to train the entire network again, which significantly reduces the training time. We evaluate the proposed architecture extensively on image classification tasks […]
APA, Harvard, Vancouver, ISO, and other styles
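
The mechanism this abstract outlines — keep the already-trained backbone fixed and train only the parts that serve the new classes — is easy to see in code. The following is a minimal PyTorch sketch of that freeze-and-extend idea, not the authors' exact progressive architecture; the backbone choice, class counts, and batch are hypothetical stand-ins.

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone assumed to have been trained on the old classes; freeze it.
backbone = models.resnet18(weights=None)  # stand-in: pretend these weights are trained
for p in backbone.parameters():
    p.requires_grad = False

old_classes, new_classes = 10, 5
in_features = backbone.fc.in_features

# Replace the final fully connected layer so it covers old + new classes.
# Only this layer is trainable, so an incremental step is cheap.
backbone.fc = nn.Linear(in_features, old_classes + new_classes)

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def incremental_step(x, y):
    """One fine-tuning step on new-class data only."""
    optimizer.zero_grad()
    loss = criterion(backbone(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch of new-class images (labels fall in the new-class range).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(old_classes, old_classes + new_classes, (8,))
print(incremental_step(x, y))
```
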
3

Ho, Jiacang, and Dae-Ki Kang. "Brick Assembly Networks: An Effective Network for Incremental Learning Problems." Electronics 9, no. 11 (2020): 1929. http://dx.doi.org/10.3390/electronics9111929.

Full text of the source
Abstract:
Deep neural networks have achieved high performance in image classification, image generation, voice recognition, natural language processing, etc.; however, they still confront several open challenges that need to be solved, such as the incremental learning problem, overfitting in neural networks, hyperparameter optimization, lack of flexibility and multitasking, etc. In this paper, we focus on the incremental learning problem, which concerns machine learning methodologies that continuously train an existing model with additional knowledge. To the best of our knowledge, a simple and […]
APA, Harvard, Vancouver, ISO, and other styles
4

Abramova, E. S., A. A. Orlov, and K. V. Makarov. "Possibilities of Using Neural Network Incremental Learning." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 21, no. 4 (2021): 19–27. http://dx.doi.org/10.14529/ctcr210402.

Full text of the source
Abstract:
The present time is characterized by unprecedented growth in the volume of information flows. Information processing underlies the solution of many practical problems. The range of intelligent information system applications is extremely extensive: from managing continuous technological processes in real time to solving commercial and administrative problems. The main property intelligent information systems should have is the ability to quickly process dynamic incoming data in real time. Intelligent information systems should also extract knowledge from previously solved problems […]
APA, Harvard, Vancouver, ISO, and other styles
5

Mellado, Diego, Carolina Saavedra, Steren Chabert, Romina Torres, and Rodrigo Salas. "Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning." Algorithms 12, no. 10 (2019): 206. http://dx.doi.org/10.3390/a12100206.

Full text of the source
Abstract:
Deep learning models are part of the family of artificial neural networks and, as such, they suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system which can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novel detection model that automatically detects samples of […]
APA, Harvard, Vancouver, ISO, and other styles
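
Pseudorehearsal, the mechanism SIGANN builds on, can be sketched compactly: rather than storing old data, a generator replays pseudo-samples, the frozen previous model labels them, and they are mixed into each new-class batch. Below is a schematic sketch under that reading — the `generator` and `old_model` modules, shapes, and class ranges are hypothetical, and this shows only the data-mixing step, not the full SIGANN system.

```python
import torch
import torch.nn as nn

def make_rehearsal_batch(generator, old_model, new_x, new_y,
                         n_pseudo, latent_dim=64):
    """Mix real new-class samples with generated pseudo-samples
    labelled by the frozen old model."""
    with torch.no_grad():
        z = torch.randn(n_pseudo, latent_dim)
        pseudo_x = generator(z)                       # replayed inputs
        pseudo_y = old_model(pseudo_x).argmax(dim=1)  # old model's labels
    x = torch.cat([new_x, pseudo_x], dim=0)
    y = torch.cat([new_y, pseudo_y], dim=0)
    perm = torch.randperm(x.size(0))                  # shuffle the mix
    return x[perm], y[perm]

# Toy usage with stand-in modules (hypothetical shapes).
generator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 20))
old_model = nn.Linear(20, 10)
x, y = make_rehearsal_batch(generator, old_model,
                            torch.randn(8, 20),
                            torch.randint(10, 12, (8,)), n_pseudo=8)
print(x.shape, y.shape)  # torch.Size([16, 20]) torch.Size([16])
```
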
6

Tomimori, Haruka, Kui-Ting Chen, and Takaaki Baba. "A Convolutional Neural Network with Incremental Learning." Journal of Signal Processing 21, no. 4 (2017): 155–58. http://dx.doi.org/10.2299/jsp.21.155.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Shiotani, Shigetoshi, Toshio Fukuda, and Takanori Shibata. "A neural network architecture for incremental learning." Neurocomputing 9, no. 2 (1995): 111–30. http://dx.doi.org/10.1016/0925-2312(94)00061-v.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Kim, Jonghong, WonHee Lee, Sungdae Baek, Jeong-Ho Hong, and Minho Lee. "Incremental Learning for Online Data Using QR Factorization on Convolutional Neural Networks." Sensors 23, no. 19 (2023): 8117. http://dx.doi.org/10.3390/s23198117.

Full text of the source
Abstract:
Catastrophic forgetting, a rapid loss of learned representations while learning new data/samples, is one of the main problems of deep neural networks. In this paper, we propose a novel incremental learning framework that can address the forgetting problem by learning new incoming data in an online manner. We develop a new incremental learning framework that can learn extra data or new classes with less catastrophic forgetting. We adapt the hippocampal memory process to deep neural networks by defining the effective maximum of neural activation and its boundary to […]
APA, Harvard, Vancouver, ISO, and other styles
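
The paper couples its hippocampal-memory mechanism with QR factorization; the details are in the full text. As a hedged illustration of why QR suits online updates at all, the sketch below maintains a least-squares output layer whose triangular factor acts as a compact summary of every past batch, so old data never needs to be stored or revisited. The class and its names are hypothetical, not the authors' method.

```python
import numpy as np

class QRIncrementalReadout:
    """Least-squares readout updated batch-by-batch via QR factorization."""

    def __init__(self, n_features, n_outputs):
        self.R = np.zeros((n_features, n_features))  # triangular data summary
        self.z = np.zeros((n_features, n_outputs))

    def partial_fit(self, X, Y):
        # Stacking (R, z) with the new batch and re-triangularizing is
        # algebraically a least-squares fit on *all* data seen so far.
        Q, self.R = np.linalg.qr(np.vstack([self.R, X]))
        self.z = Q.T @ np.vstack([self.z, Y])
        return self

    def predict(self, X):
        W = np.linalg.lstsq(self.R, self.z, rcond=None)[0]
        return X @ W

# Stream ten batches; the readout recovers the true weights exactly.
rng = np.random.default_rng(0)
w_true = rng.normal(size=(5, 1))
model = QRIncrementalReadout(n_features=5, n_outputs=1)
for _ in range(10):
    X = rng.normal(size=(32, 5))
    model.partial_fit(X, X @ w_true)
print(np.allclose(model.predict(np.eye(5)), w_true))  # True
```
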
9

Roy, Kaushik, Christian Simon, Peyman Moghadam, and Mehrtash Harandi. "CL3: Generalization of Contrastive Loss for Lifelong Learning." Journal of Imaging 9, no. 12 (2023): 259. http://dx.doi.org/10.3390/jimaging9120259.

Full text of the source
Abstract:
Lifelong learning portrays learning gradually in nonstationary environments and emulates the process of human learning, which is efficient, robust, and able to learn new concepts incrementally from sequential experience. To equip neural networks with such a capability, one needs to overcome the problem of catastrophic forgetting, the phenomenon of forgetting past knowledge while learning new concepts. In this work, we propose a novel knowledge distillation algorithm that makes use of contrastive learning to help a neural network preserve its past knowledge while learning from a series of […]
APA, Harvard, Vancouver, ISO, and other styles
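
CL3's contribution is a contrastive generalization of knowledge distillation; the exact loss is in the paper. As a simplified stand-in, the sketch below shows the two ingredients such methods combine: a cross-entropy term for the current task and a term that holds the network's outputs (or its embedding-space similarity structure) close to those of the frozen previous model. The function names and weighting are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def lifelong_loss(new_logits, old_logits, labels, T=2.0, alpha=0.5):
    """Task loss plus temperature-scaled distillation against the old model."""
    ce = F.cross_entropy(new_logits, labels)
    kd = F.kl_div(F.log_softmax(new_logits / T, dim=1),
                  F.softmax(old_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return (1 - alpha) * ce + alpha * kd

def relation_distill(new_feats, old_feats):
    """Contrastive-style variant: preserve the pairwise cosine-similarity
    structure of the old embedding space instead of matching raw logits."""
    n = F.normalize(new_feats, dim=1)
    o = F.normalize(old_feats, dim=1)
    return F.mse_loss(n @ n.T, o @ o.T)
```
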
10

Zhang, Junhui, Hongying Zan, Shuning Wu, Kunli Zhang, and Jianwei Huo. "Adaptive Graph Neural Network with Incremental Learning Mechanism for Knowledge Graph Reasoning." Electronics 13, no. 14 (2024): 2778. http://dx.doi.org/10.3390/electronics13142778.

Full text of the source
Abstract:
Knowledge graphs are extensively utilized in diverse fields such as search engines, recommendation systems, and dialogue systems, and knowledge graph reasoning plays an important role in these domains. Graph neural networks can effectively capture and process the graph structure inherent in knowledge graphs, leveraging the relationships between nodes and edges to enable efficient reasoning. Current research on graph neural networks relies on predefined propagation paths; models based on predefined propagation paths overlook the correlation between […]
APA, Harvard, Vancouver, ISO, and other styles
11

Chen, Xinzhe, Hong Liang, and Weiyu Xu. "Research on a class-incremental learning method based on sonar images." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 41, no. 2 (2023): 303–9. http://dx.doi.org/10.1051/jnwpu/20234120303.

Full text of the source
Abstract:
Due to the low resolution and small number of samples of sonar images, existing class-incremental learning networks suffer serious catastrophic forgetting of historical task targets, resulting in a low average recognition rate across all task targets. Based on the framework of generative replay, an improved class-incremental learning network is proposed in this paper, and a new deep convolutional generative adversarial network is designed and built to replace the variational autoencoder as the reconstruction model of the generative-replay incremental network to improve the […]
APA, Harvard, Vancouver, ISO, and other styles
12

Heo, Kwang-Seung, and Kwee-Bo Sim. "Speaker Identification Based on Incremental Learning Neural Network." International Journal of Fuzzy Logic and Intelligent Systems 5, no. 1 (2005): 76–82. http://dx.doi.org/10.5391/ijfis.2005.5.1.076.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
13

Hung, Cheng-An, and Sheng-Fuu Lin. "An Incremental Learning Neural Network for Pattern Classification." International Journal of Pattern Recognition and Artificial Intelligence 13, no. 06 (1999): 913–28. http://dx.doi.org/10.1142/s0218001499000501.

Full text of the source
Abstract:
A neural network architecture that incorporates a supervised mechanism into a fuzzy adaptive Hamming net (FAHN) is presented. The FAHN constructs hyper-rectangles that represent template weights in an unsupervised learning paradigm. Learning in the FAHN consists of creating and adjusting hyper-rectangles in feature space. By aggregating multiple hyper-rectangles into a single class, we can build a classifier, henceforth termed a supervised fuzzy adaptive Hamming net (SFAHN), that discriminates between nonconvex and even discontinuous classes. The SFAHN can operate at a fast learning […]
APA, Harvard, Vancouver, ISO, and other styles
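
The create-or-expand hyper-rectangle learning this abstract describes is simple enough to sketch. The toy class below is not the SFAHN algorithm (which adds fuzzy membership and a Hamming-net match phase); it only illustrates the core idea of covering each class with axis-aligned boxes that grow up to a size limit. All names and the size limit are invented for illustration.

```python
import numpy as np

class HyperRectangleClassifier:
    """Each class is a union of axis-aligned boxes that expand,
    up to a side-length limit, to cover new training points."""

    def __init__(self, max_side=0.3):
        self.boxes = []            # list of (lo, hi, label)
        self.max_side = max_side

    def fit_one(self, x, label):
        for k, (lo, hi, lab) in enumerate(self.boxes):
            if lab != label:
                continue
            new_lo, new_hi = np.minimum(lo, x), np.maximum(hi, x)
            if np.all(new_hi - new_lo <= self.max_side):
                self.boxes[k] = (new_lo, new_hi, lab)   # expand this box
                return
        self.boxes.append((x.copy(), x.copy(), label))  # create a new box

    def predict_one(self, x):
        # Distance is zero inside a box, else distance to its surface.
        dists = [(np.linalg.norm(np.maximum(lo - x, 0) + np.maximum(x - hi, 0)), lab)
                 for lo, hi, lab in self.boxes]
        return min(dists)[1]

clf = HyperRectangleClassifier()
for x, y in [([0.10, 0.10], 0), ([0.20, 0.15], 0), ([0.90, 0.80], 1)]:
    clf.fit_one(np.array(x), y)
print(clf.predict_one(np.array([0.15, 0.12])))  # 0
print(clf.predict_one(np.array([0.85, 0.90])))  # 1
```
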
14

Zhang, Yansheng, Dong Ye, Yuanhong Liu, and Jianjun Xu. "Incremental LLE Based on Back Propagation Neural Network." IOP Conference Series: Earth and Environmental Science 170 (July 2018): 042051. http://dx.doi.org/10.1088/1755-1315/170/4/042051.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Ciarelli, Patrick Marques, Elias Oliveira, and Evandro O. T. Salles. "An incremental neural network with a reduced architecture." Neural Networks 35 (November 2012): 70–81. http://dx.doi.org/10.1016/j.neunet.2012.08.003.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
16

Liu, Hao, and Xiao-juan Ban. "Clustering by growing incremental self-organizing neural network." Expert Systems with Applications 42, no. 11 (2015): 4965–81. http://dx.doi.org/10.1016/j.eswa.2015.02.006.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
17

Zhang, Hongwei, Xiong Xiao, and Osamu Hasegawa. "A Load-Balancing Self-Organizing Incremental Neural Network." IEEE Transactions on Neural Networks and Learning Systems 25, no. 6 (2014): 1096–105. http://dx.doi.org/10.1109/tnnls.2013.2287884.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Olmez, Tamer, Ertugrul Yazgan, and Okan K. Ersoy. "A multilayer incremental neural network architecture for classification." Neural Processing Letters 2, no. 2 (1995): 5–9. http://dx.doi.org/10.1007/bf02312348.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
19

Sarwar, Syed Shakib, Aayush Ankit, and Kaushik Roy. "Incremental Learning in Deep Convolutional Neural Networks Using Partial Network Sharing." IEEE Access 8 (2020): 4615–28. http://dx.doi.org/10.1109/access.2019.2963056.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
20

Attoh-Okine, Nii O. "Modeling incremental pavement roughness using functional network." Canadian Journal of Civil Engineering 32, no. 5 (2005): 899–905. http://dx.doi.org/10.1139/l05-050.

Full text of the source
Abstract:
Incremental roughness prediction is a critical component of decision making in any pavement management system; therefore, proper estimation is of paramount importance. This paper presents the application of functional equations and networks to incremental roughness prediction for flexible pavement. In functional networks, neuron functions are multivariate and multiargumentative. Functional equations form the basis of functional networks, so established theorems in functional equations are readily applicable in the analysis. The model is developed from a validated set of incremental and […]
APA, Harvard, Vancouver, ISO, and other styles
21

Ding, Xue, and Hong Hong Yang. "A Study on the Image Classification Techniques Based on Wavelet Artificial Neural Network Algorithm." Applied Mechanics and Materials 602-605 (August 2014): 3512–14. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3512.

Full text of the source
Abstract:
With ever-changing education information technology, how to classify the thousands of images produced during the art examination marking process is a major problem for universities and colleges. This paper explores the application of artificial intelligence techniques to classify a large number of images accurately within a limited time, with the help of a computer. The results of applying the method in actual work show that it is feasible. […]
APA, Harvard, Vancouver, ISO, and other styles
22

Chefrour, Aida, Labiba Souici-Meslati, Iness Difi, and Nesrine Bakkouche. "A Novel Incremental Learning Algorithm Based on Incremental Vector Support Machina and Incremental Neural Network Learn++." Revue d'Intelligence Artificielle 33, no. 3 (2019): 181–88. http://dx.doi.org/10.18280/ria.330303.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Gu, Yanan, Cheng Deng, and Kun Wei. "Class-Incremental Instance Segmentation via Multi-Teacher Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (2021): 1478–86. http://dx.doi.org/10.1609/aaai.v35i2.16238.

Full text of the source
Abstract:
Although deep neural networks have achieved amazing results on instance segmentation, they are still ill-equipped when required to learn new tasks incrementally. Concretely, they suffer from “catastrophic forgetting”, an abrupt degradation of performance on old classes when the initial training data are missing. Moreover, they are subject to a negative transfer problem on new classes, which renders the model unable to update its knowledge while preserving the previous knowledge. To address these problems, we propose an incremental instance segmentation method that consists of three […]
APA, Harvard, Vancouver, ISO, and other styles
24

Alpaydin, Ethem. "GAL: Networks That Grow When They Learn and Shrink When They Forget." International Journal of Pattern Recognition and Artificial Intelligence 08, no. 01 (1994): 391–414. http://dx.doi.org/10.1142/s021800149400019x.

Full text of the source
Abstract:
Learning limited to the modification of some parameters has a limited scope; the capability to modify the system structure is also needed to widen the range of the learnable. In the case of artificial neural networks, learning by iterative adjustment of synaptic weights can only succeed if the network designer predefines an appropriate network structure, i.e., the number of hidden layers and units, and the size and shape of their receptive and projective fields. This paper advocates the view that the network structure should not, as is usually done, be determined by trial and error but should be […]
APA, Harvard, Vancouver, ISO, and other styles
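
Grow-when-wrong learning of the kind GAL advocates can be caricatured with a nearest-prototype learner: a unit is added only when the current network misclassifies, and a pruning pass later removes units the rest of the network already covers. The sketch below is a loose illustration under that reading, not Alpaydin's exact procedure; all names are invented.

```python
import numpy as np

class GrowAndLearn:
    """Nearest-prototype learner that grows on errors and can shrink."""

    def __init__(self):
        self.protos, self.labels = [], []

    def _nearest(self, x, skip=None):
        best, best_d = None, np.inf
        for k, p in enumerate(self.protos):
            d = np.linalg.norm(p - x)
            if k != skip and d < best_d:
                best, best_d = k, d
        return best

    def fit_one(self, x, y):
        k = self._nearest(x)
        if k is None or self.labels[k] != y:      # misclassified: grow a unit
            self.protos.append(np.asarray(x, float))
            self.labels.append(y)

    def shrink(self):
        # Prune prototypes that the remaining network classifies correctly.
        k = 0
        while k < len(self.protos):
            j = self._nearest(self.protos[k], skip=k)
            if j is not None and self.labels[j] == self.labels[k]:
                del self.protos[k], self.labels[k]
            else:
                k += 1

    def predict_one(self, x):
        return self.labels[self._nearest(x)]
```
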
25

Tsiligaridis, John. "Decision Trees Algorithms and Classification with Incremental Neural Network." International Journal of Digital Information and Wireless Communications 5, no. 3 (2015): 203–9. http://dx.doi.org/10.17781/p001710.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
26

Gupta, Sharad, and Sudip Sanyal. "INNAMP: An incremental neural network architecture with monitor perceptron." AI Communications 31, no. 4 (2018): 339–53. http://dx.doi.org/10.3233/aic-180767.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
27

Nadir Kurnaz, Mehmet, Zümray Dokur, and Tamer Ölmez. "Segmentation of remote-sensing images by incremental neural network." Pattern Recognition Letters 26, no. 8 (2005): 1096–104. http://dx.doi.org/10.1016/j.patrec.2004.10.004.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
28

Pratama, Mahardhika, Jie Lu, Sreenatha Anavatti, Edwin Lughofer, and Chee-Peng Lim. "An incremental meta-cognitive-based scaffolding fuzzy neural network." Neurocomputing 171 (January 2016): 89–105. http://dx.doi.org/10.1016/j.neucom.2015.06.022.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
29

Fuangkhon, Piyabute. "An incremental learning preprocessor for feed-forward neural network." Artificial Intelligence Review 41, no. 2 (2012): 183–210. http://dx.doi.org/10.1007/s10462-011-9304-0.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
30

Su, Mu-Chun, Jonathan Lee, and Kuo-Lung Hsieh. "A new ARTMAP-based neural network for incremental learning." Neurocomputing 69, no. 16-18 (2006): 2284–300. http://dx.doi.org/10.1016/j.neucom.2005.06.020.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Ren, Jing, Lu Liu, Haiduan Huang, et al. "SOINN Intrusion Detection Model Based on Three-Way Attribute Reduction." Electronics 12, no. 24 (2023): 5023. http://dx.doi.org/10.3390/electronics12245023.

Full text of the source
Abstract:
With a large number of intrusion detection datasets and high feature dimensionality, the emergent nature of new attack types makes it impossible to collect network traffic data all at once. A modified three-way attribute reduction method is combined with the Self-Organizing Incremental Neural Network (SOINN) algorithm to propose a self-organizing incremental neural network intrusion detection model based on three-way attribute reduction. Attribute importance is used to perform the reduction, and the reduced data are fed into a self-organizing incremental […]
APA, Harvard, Vancouver, ISO, and other styles
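
The pipeline this abstract describes — score each attribute's importance, keep only the strong ones, then train an incremental learner on the reduced stream — can be sketched generically. Below, a simple correlation score stands in for the paper's three-way attribute reduction, and scikit-learn's `SGDClassifier.partial_fit` stands in for SOINN; both substitutions are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def reduce_attributes(X, y, keep_ratio=0.5):
    """Keep the attributes most correlated with the label
    (a stand-in importance score, not three-way reduction)."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    order = np.argsort(scores)[::-1]
    return np.sort(order[: int(X.shape[1] * keep_ratio)])

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, 200)
X[:, 3] += 2 * y                        # plant one informative attribute

cols = reduce_attributes(X, y)
clf = SGDClassifier(loss="log_loss")
# Feed the reduced data to the incremental learner batch by batch.
for Xb, yb in zip(np.array_split(X[:, cols], 5), np.array_split(y, 5)):
    clf.partial_fit(Xb, yb, classes=[0, 1])
print(clf.score(X[:, cols], y))
```
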
32

Mei, Liu, Quan Taifan, and Yao Tianbin. "Tracking maneuvering target based on neural fuzzy network with incremental neural learning." Journal of Systems Engineering and Electronics 17, no. 2 (2006): 343–49. http://dx.doi.org/10.1016/s1004-4132(06)60060-1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
33

Deguchi, Toshinori, Toshiki Takahashi, and Naohiro Ishii. "On Temporal Summation in Chaotic Neural Network with Incremental Learning." International Journal of Software Innovation 2, no. 4 (2014): 72–84. http://dx.doi.org/10.4018/ijsi.2014100106.

Full text of the source
Abstract:
Incremental learning is a method of composing an associative memory using a chaotic neural network; it provides larger capacity than correlative learning at the cost of a large amount of computation. A chaotic neuron performs spatiotemporal summation, and the temporal summation makes learning stable against input noise. When there is no noise in the input, the neuron may not need temporal summation. In this paper, to reduce the computation, a simplified network without temporal summation is introduced and investigated through computer simulations, comparing it with the earlier network […]
APA, Harvard, Vancouver, ISO, and other styles
34

Ma’sum, Muhammad Anwar. "Intelligent Clustering and Dynamic Incremental Learning to Generate Multi-Codebook Fuzzy Neural Network for Multi-Modal Data Classification." Symmetry 12, no. 4 (2020): 679. http://dx.doi.org/10.3390/sym12040679.

Full text of the source
Abstract:
Classification of multi-modal data is one of the challenges in the machine learning field. Multi-modal data need special treatment, as their features are distributed over several areas. This study proposes multi-codebook fuzzy neural networks that use intelligent clustering and dynamic incremental learning for multi-modal data classification. We utilized intelligent K-means clustering based on anomalous patterns and intelligent K-means clustering based on histogram information. Clustering is used to generate codebook candidates before the training process, while […]
APA, Harvard, Vancouver, ISO, and other styles
35

Sharma, Himanshu, Prabhat Kumar, and Kavita Sharma. "Recurrent Neural Network based Incremental model for Intrusion Detection System in IoT." Scalable Computing: Practice and Experience 25, no. 5 (2024): 3778–95. http://dx.doi.org/10.12694/scpe.v25i5.3004.

Full text of the source
Abstract:
The security of Internet of Things (IoT) networks has become an integral problem in view of the exponential growth of IoT devices. Intrusion detection and prevention is an approach used to identify, analyze, and block cyber threats to protect the IoT from unauthorized access or attacks. This paper introduces an adaptive and incremental intrusion detection and prevention system based on RNNs for the ever-changing field of IoT security. IoT networks require advanced intrusion detection systems that can identify emerging threats because of their diverse and dynamic data sources. The complexity of IoT […]
APA, Harvard, Vancouver, ISO, and other styles
36

Yu, Feng, Jinglong Fang, Bin Chen, and Yanli Shao. "An Incremental Learning Based Convolutional Neural Network Model for Large-Scale and Short-Term Traffic Flow." International Journal of Machine Learning and Computing 11, no. 2 (2021): 143–51. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1027.

Full text of the source
Abstract:
Traffic flow prediction is very important for smooth road conditions in cities and convenient travel for residents. With the explosive growth of traffic flow data, traditional machine learning algorithms cannot fit large-scale training data effectively, deep learning algorithms suffer from huge training and update costs, and prediction accuracy may need further improvement when an emergency affecting traffic occurs. In this study, an incremental learning based convolutional neural network model, TF-net, is proposed to achieve efficient and accurate […]
APA, Harvard, Vancouver, ISO, and other styles
37

Kubo, Masao, Akihiro Yamaguchi, Sadayoshi Mikami, and Mitsuo Wada. "Logistic Chaos Protects Evolution against Environment Noise." Journal of Robotics and Mechatronics 10, no. 4 (1998): 350–57. http://dx.doi.org/10.20965/jrm.1998.p0350.

Full text of the source
Abstract:
We propose a neuron with a noise generator, designed specifically for the evolutionary robotics approach to incremental knowledge acquisition. Genetically evolving neural networks are modified continuously by genetic operations. When a neural network acts as a robotic controller, difficulty in incrementing knowledge arises when the network behaves unlike in the past because of disturbance from neurons added by genetic operators. To evolve a network robust against such internal noise, we propose adding noise generators to neurons. We show the effectiveness of applying a logistic chaos noise generator to […]
APA, Harvard, Vancouver, ISO, and other styles
38

Habibi, Muhammad Nizar, Dimas Nur Prakoso, Novie Ayub Windarko, and Anang Tjahjono. "Perbaikan MPPT Incremental Conductance menggunakan ANN pada Berbayang Sebagian dengan Hubungan Paralel." ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika 8, no. 3 (2020): 546. http://dx.doi.org/10.26760/elkomika.v8i3.546.

Full text of the source
Abstract:
The Incremental Conductance (IC) algorithm can be implemented in a Maximum Power Point Tracking (MPPT) system to obtain maximum power from a solar panel. However, the MPPT IC algorithm cannot work under partial shading conditions, because these produce more than one power maximum. An Artificial Neural Network (ANN) can identify the characteristic curve under partial shading and locate the true maximum power point. The inputs of the ANN are the short-circuit current and the open-circuit voltage of the solar panel, and the output of the ANN is […]
APA, Harvard, Vancouver, ISO, and other styles
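
For context, the baseline this paper improves is the classic incremental-conductance rule, which climbs the P-V curve toward the point where dI/dV = -I/V and fails under partial shading because several local maxima pass that local test. A minimal sketch of the baseline rule follows (variable names and step size are illustrative); the paper's ANN supplies the global information this rule lacks.

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, step=0.5):
    """One incremental-conductance update of the voltage reference.
    At the maximum power point, dI/dV == -I/V (i.e., dP/dV == 0)."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            v_ref += step          # irradiance rose: MPP moved right
        elif di < 0:
            v_ref -= step
    elif di / dv > -i / v:
        v_ref += step              # operating left of the MPP
    elif di / dv < -i / v:
        v_ref -= step              # operating right of the MPP
    return v_ref                   # both tests equal: hold at the MPP
```
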
39

Srilakshmi, V., K. Anuradha, and C. Shoba Bindu. "Incremental text categorization based on hybrid optimization-based deep belief neural network." Journal of High Speed Networks 27, no. 2 (2021): 183–202. http://dx.doi.org/10.3233/jhs-210659.

Full text of the source
Abstract:
One of the effective text categorization methods for learning from large-scale and accumulating data is incremental learning. The major challenge in incremental learning is improving accuracy, as a text document consists of numerous terms. In this research, an incremental text categorization method is developed using the proposed Spider Grasshopper Crow Optimization Algorithm based Deep Belief Neural network (SGrC-based DBN) to provide optimal text categorization results. The proposed method has four processes: pre-processing, feature extraction, […]
APA, Harvard, Vancouver, ISO, and other styles
40

Muhammed, Hamed Hamid. "Unsupervised Fuzzy Clustering Using Weighted Incremental Neural Networks." International Journal of Neural Systems 14, no. 06 (2004): 355–71. http://dx.doi.org/10.1142/s0129065704002121.

Full text of the source
Abstract:
A new, more efficient variant of a recently developed algorithm for unsupervised fuzzy clustering is introduced. A Weighted Incremental Neural Network (WINN) is introduced and used for this purpose. The new approach is called FC-WINN (Fuzzy Clustering using WINN). The WINN algorithm produces a net of nodes connected by edges, which reflects and preserves the topology of the input data set. Additional weights, proportional to the local densities in input space, are associated with the resulting nodes and edges to store useful information about the topological relations in the given […]
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Guanqin, Zhenya Zhang, H. M. N. Dilum Bandara, Shiping Chen, Jianjun Zhao, and Yulei Sui. "Efficient Incremental Verification of Neural Networks Guided by Counterexample Potentiality." Proceedings of the ACM on Programming Languages 9, OOPSLA1 (2025): 85–112. https://doi.org/10.1145/3720417.

Full text of the source
Abstract:
Incremental verification is an emerging neural network verification approach that aims to accelerate the verification of a neural network N* by reusing the existing verification result (called a template) of a similar neural network N. To date, the state-of-the-art incremental verification approach leverages the problem-splitting history produced by branch and bound (BaB) in the verification of N to select only a part of the sub-problems for verification of N*, and is thus more efficient than verifying N* from scratch. While this approach identifies whether each sub-problem should be re-assessed […]
APA, Harvard, Vancouver, ISO, and other styles
42

Oki, Isao, Takeshi Haida, Yoshio Izui, and Seiji Kobayashi. "Incremental Cluster Learning Neural Network Application to GIS Internal Diagnostics." IEEJ Transactions on Power and Energy 116, no. 6 (1996): 731–40. http://dx.doi.org/10.1541/ieejpes1990.116.6_731.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
43

Wiwatcharakoses, Chayut, and Daniel Berrar. "A self-organizing incremental neural network for continual supervised learning." Expert Systems with Applications 185 (December 2021): 115662. http://dx.doi.org/10.1016/j.eswa.2021.115662.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
44

Meir, R., and V. E. Maiorov. "On the optimality of neural-network approximation using incremental algorithms." IEEE Transactions on Neural Networks 11, no. 2 (2000): 323–37. http://dx.doi.org/10.1109/72.839004.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
45

Hebboul, Amel, Fella Hachouf, and Amel Boulemnadjel. "A new incremental neural network for simultaneous clustering and classification." Neurocomputing 169 (December 2015): 89–99. http://dx.doi.org/10.1016/j.neucom.2015.02.084.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Jenq-Haur, Hsin-Yang Wang, Yen-Lin Chen, and Chuan-Ming Liu. "A constructive algorithm for unsupervised learning with incremental neural network." Journal of Applied Research and Technology 13, no. 2 (2015): 188–96. http://dx.doi.org/10.1016/j.jart.2015.06.017.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
47

Kurnaz, Mehmet Nadir, Zümray Dokur, and Tamer Ölmez. "An incremental neural network for tissue segmentation in ultrasound images." Computer Methods and Programs in Biomedicine 85, no. 3 (2007): 187–95. http://dx.doi.org/10.1016/j.cmpb.2006.10.010.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
48

Dokur, Zümray. "Respiratory sound classification by using an incremental supervised neural network." Pattern Analysis and Applications 12, no. 4 (2008): 309–19. http://dx.doi.org/10.1007/s10044-008-0125-y.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
49

Nathiratul Athriyah, Ahmad Mimi, Abdul Kadir Muhammad Amir, Hasan F. M. Zaki, Zainal Abidin Zulkifli, and Abdul Rahman Hasbullah. "Incremental Learning of Deep Neural Network for Robust Vehicle Classification." Jurnal Kejuruteraan 34, no. 5 (2022): 843–50. http://dx.doi.org/10.17576/jkukm-2022-34(5)-11.

Full text of the source
Abstract:
Existing single-lane free flow (SLFF) tolling systems either rely heavily on a contact-based treadle sensor to detect the number of vehicle wheels or on a manual operator to classify vehicles. While the former is susceptible to high maintenance costs due to wear and tear, the latter is prone to human error. This paper proposes a vision-based solution to SLFF vehicle classification, adapting a state-of-the-art object detection model as the backbone of the proposed framework and an incremental training scheme to train our VehicleDetNet in a continual manner to cater for the challenging problem of continuous […]
APA, Harvard, Vancouver, ISO, and other styles
50

Tian, Songsong, Weijun Li, Xin Ning, Hang Ran, Hong Qin, and Prayag Tiwari. "Continuous transfer of neural network representational similarity for incremental learning." Neurocomputing 545 (August 2023): 126300. http://dx.doi.org/10.1016/j.neucom.2023.126300.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles