To see the other types of publications on this topic, follow the link: Fusion neural network.

Journal articles on the topic 'Fusion neural network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Fusion neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Gong, Wei. "A Neural Networks Pruning and Data Fusion Based Intrusion Detection Model." Applied Mechanics and Materials 651-653 (September 2014): 1772–75. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.1772.

Full text
Abstract:
The abilities of summarization, learning, self-fitting and inner-parallel computing make artificial neural networks suitable for intrusion detection. On the other hand, data fusion based IDS has been used to reduce the false-alarm rate and the failure-to-report rate and to improve performance. However, multi-sensor input data makes the IDS lose efficiency. Research on neural network based data fusion IDS tries to combine the strong processing ability of neural networks with the advantages of a data fusion IDS. A neural network is designed to realize the data fusion and intrusion analysis, and a neural network pruning algorithm is used to filter the information from the multiple sensors, so as to increase performance and save network bandwidth.
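The pruning step described above can be pictured as ordinary magnitude-based weight pruning. The sketch below is not taken from the paper; it is a minimal illustration (the function name, sparsity level, and weight values are all hypothetical) of zeroing the smallest-magnitude fraction of a weight matrix:

```python
import numpy as np

def prune_weights(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of a weight matrix."""
    flat = np.abs(w).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return w.copy()
    # threshold at (or below) which weights are dropped
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([[0.10, -0.90],
              [0.50, -0.05]])
print(prune_weights(w, sparsity=0.5))  # keeps only -0.90 and 0.50
```

A network pruned this way carries fewer active connections, which is the sense in which pruning "filters" the multi-sensor input.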
APA, Harvard, Vancouver, ISO, and other styles
2

Ma, Lili, Jiangping Liu, and Jidong Luo. "Method of Wireless Sensor Network Data Fusion." International Journal of Online Engineering (iJOE) 13, no. 09 (2017): 114. http://dx.doi.org/10.3991/ijoe.v13i09.7589.

Full text
Abstract:
In order to better deal with large data information in computer networks, a large data fusion method based on wireless sensor networks is designed. Based on an analysis of the structure and learning algorithm of RBF neural networks, a heterogeneous RBF neural network information fusion algorithm for wireless sensor networks is presented. The effectiveness of the information fusion processing method is tested with the RBF information fusion algorithm. The proposed algorithm is applied to heterogeneous information fusion at cluster heads or sink nodes in wireless sensor networks. The simulation results show the effectiveness of the proposed algorithm. Based on the above findings, it is concluded that the RBF neural network has good real-time performance and small network delay. In addition, this method can reduce the amount of information transmitted as well as network conflicts and congestion.
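The RBF fusion idea above can be sketched as a single Gaussian-RBF hidden layer whose weighted outputs fuse a vector of sensor readings into one value. This is not the paper's algorithm, only an illustrative forward pass; the centers, widths, and output weights below are made up:

```python
import numpy as np

def rbf_fuse(x, centers, widths, weights):
    """Fuse a sensor-reading vector x through a Gaussian RBF hidden layer."""
    # one Gaussian activation per hidden unit (one row of `centers` each)
    dist2 = np.sum((centers - x) ** 2, axis=1)
    h = np.exp(-dist2 / (2.0 * widths ** 2))
    return h @ weights  # linear output layer

centers = np.array([[0.0, 0.0], [1.0, 1.0]])  # hypothetical hidden-unit centers
widths = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])
print(rbf_fuse(np.array([0.0, 0.0]), centers, widths, weights))
```

In a deployed WSN, the centers and weights would be learned during training and the fused scalar would be what the cluster head forwards, which is how fusion reduces transmitted data volume.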
3

Liang, Xiao Xiao, Li Cao, Chong Gang Wei, and Ying Gao Yue. "Research on Wireless Sensor Networks Data Fusion Algorithm Based on COA-BP." Applied Mechanics and Materials 539 (July 2014): 247–50. http://dx.doi.org/10.4028/www.scientific.net/amm.539.247.

Full text
Abstract:
To improve wireless sensor network data fusion efficiency and reduce network traffic and the energy consumption of sensor networks, a chaotic BP hybrid algorithm (COA-BP) combining a chaos optimization algorithm with the BP algorithm is designed, and a WSN data fusion model is established. This model overcomes the shortcomings of the traditional BP neural network model. The optimized BP neural network efficiently extracts and fuses features from a small amount of original data, then sends the extracted features to the aggregation nodes, thus enhancing the efficiency of data fusion and prolonging the network lifetime. Simulation results show that, compared with the LEACH algorithm, the BP neural network and the PSO-BP algorithm, this algorithm can effectively reduce network traffic, cutting the total energy consumption of nodes by 19% and prolonging the network lifetime.
4

Arulkumar, Varatharajan, Ramasamy Poonkodi, Marappan Suguna, Ananthavadivel Devipriya, and Selvi Govardanan Chemmalar. "Energy efficient data fusion approach using squirrel search optimization and recurrent neural network." Indonesian Journal of Electrical Engineering and Computer Science 31, no. 1 (2023): 480–90. https://doi.org/10.11591/ijeecs.v31.i1.pp480-490.

Full text
Abstract:
Sensor networks have helped wireless communication systems. Over the last decade, researchers have focused on energy efficiency in wireless sensor networks, yet energy-efficient routing remains unsolved. Because energy-constrained sensors have limited computing capabilities, extending their lifespan is difficult. This work offers a simple, energy-efficient data fusion technique employing zonal node information. Using the witness-based data fusion technique, the network lifetime, energy consumption, communication overhead, end-to-end delay, and data delivery ratio were evaluated. Energy-efficient data fusion optimizes energy utilization using squirrel search optimization and a recurrent neural network. The method allows the system to recognize a sensor with excessive energy dissipation and relocate data fusion to a more energy-efficient node. The proposed model was compared against artificial neural network-particle swarm optimization (ANN-PSO), cuckoo optimization algorithm-back propagation neural network (COA-BPNN), Elman neural network-whale optimization algorithm (ENN-WOA), and extreme learning machine-particle swarm optimization (ELM-PSO). The model achieved 94.50% network lifetime, 26.63% communication overhead, 93.85% data delivery ratio, 10.50 ms end-to-end delay, and 282 J energy usage.
5

Varatharajan, Arulkumar, Poonkodi Ramasamy, Suguna Marappan, Devipriya Ananthavadivel, and Chemmalar Selvi Govardanan. "Energy efficient data fusion approach using squirrel search optimization and recurrent neural network." Indonesian Journal of Electrical Engineering and Computer Science 31, no. 1 (2023): 480. http://dx.doi.org/10.11591/ijeecs.v31.i1.pp480-490.

Full text
Abstract:
Sensor networks have helped wireless communication systems. Over the last decade, researchers have focused on energy efficiency in wireless sensor networks, yet energy-efficient routing remains unsolved. Because energy-constrained sensors have limited computing capabilities, extending their lifespan is difficult. This work offers a simple, energy-efficient data fusion technique employing zonal node information. Using the witness-based data fusion technique, the network lifetime, energy consumption, communication overhead, end-to-end delay, and data delivery ratio were evaluated. Energy-efficient data fusion optimizes energy utilization using squirrel search optimization and a recurrent neural network. The method allows the system to recognize a sensor with excessive energy dissipation and relocate data fusion to a more energy-efficient node. The proposed model was compared against artificial neural network-particle swarm optimization (ANN-PSO), cuckoo optimization algorithm-back propagation neural network (COA-BPNN), Elman neural network-whale optimization algorithm (ENN-WOA), and extreme learning machine-particle swarm optimization (ELM-PSO). The model achieved 94.50% network lifetime, 26.63% communication overhead, 93.85% data delivery ratio, 10.50 ms end-to-end delay, and 282 J energy usage.
6

Mo, Qiu Yun, Jie Cao, and Feng Gao. "Diagnosis Technique Based on BP and D-S Theory." Advanced Materials Research 179-180 (January 2011): 544–48. http://dx.doi.org/10.4028/www.scientific.net/amr.179-180.544.

Full text
Abstract:
This paper constructs a common data fusion framework for fault diagnosis by combining local neural networks with Dempster-Shafer (D-S) evidential theory. An RBF neural network is proposed as the local neural network for fault pattern recognition, and its input vectors are the frequency-band energies extracted by wavelet packet decomposition. Then, feature-level fusion is performed separately on the signal of each sensor. Experiments verify that this method is effective. The decision-level fusion combines the features produced by the neural networks with D-S theory, and experiments show that the fault diagnosis results obtained this way are more accurate.
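At the decision level, D-S fusion combines the mass (belief) assignments produced by the local networks with Dempster's rule of combination. A minimal sketch of that rule follows; the fault labels and mass values are hypothetical, not the paper's data:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions (frozenset -> mass)."""
    fused, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass that lands on the empty set
    # renormalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

# evidence from two (hypothetical) local networks about the fault class
m1 = {frozenset({"bearing"}): 0.6, frozenset({"bearing", "gear"}): 0.4}
m2 = {frozenset({"bearing"}): 0.5, frozenset({"gear"}): 0.3,
      frozenset({"bearing", "gear"}): 0.2}
print(dempster_combine(m1, m2))
```

Combining the two bodies of evidence concentrates mass on the singleton hypothesis they agree on, which is why D-S fusion sharpens the diagnosis over either local network alone.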
7

Liang, Haijun, Changyan Liu, Kuanming Chen, Jianguo Kong, Qicong Han, and Tiantian Zhao. "Controller Fatigue State Detection Based on ES-DFNN." Aerospace 8, no. 12 (2021): 383. http://dx.doi.org/10.3390/aerospace8120383.

Full text
Abstract:
The fatiguing work of air traffic controllers inevitably threatens air traffic safety. Determining whether eyes are in an open or closed state is currently the main method for detecting fatigue in air traffic controllers. Here, an eye state recognition model based on deep-fusion neural networks is proposed for determination of the fatigue state of controllers. This method uses transfer learning strategies to pre-train deep neural networks and deep convolutional neural networks and performs network fusion at the decision-making layer. The fused network demonstrated an improved ability to classify the target domain dataset. First, a deep-cascaded neural network algorithm was used to realize face detection and eye positioning. Second, according to the eye selection mechanism, the pictures of the eyes to be tested were cropped and passed into the deep-fusion neural network to determine the eye state. Finally, the PERCLOS indicator was combined to detect the fatigue state of the controller. On the ZJU, CEW and ATCE datasets, the accuracy, F1 score and AUC values of different networks were compared, and, on the ZJU and CEW datasets, the recognition accuracy and AUC values among different methods were evaluated based on a comparative experiment. The experimental results show that the deep-fusion neural network model demonstrated better performance than the other assessed network models. When applied to the controller eye dataset, the recognition accuracy was 98.44%, and the recognition accuracy for the test video was 97.30%.
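The PERCLOS indicator mentioned above is, in essence, the proportion of frames in a sliding window in which the eyes are judged closed, with a fatigue alarm when it exceeds a threshold. A schematic version follows; the window length and threshold here are illustrative, not the paper's settings:

```python
def perclos(eye_states, window=30):
    """Fraction of the last `window` frames with eyes closed (1 = closed)."""
    recent = eye_states[-window:]
    return sum(recent) / len(recent)

def is_fatigued(eye_states, window=30, threshold=0.4):
    """Flag fatigue when PERCLOS exceeds an (illustrative) threshold."""
    return perclos(eye_states, window) > threshold

frames = [0] * 20 + [1] * 10  # 10 closed-eye frames in the last 30
print(perclos(frames))        # 0.333...
print(is_fatigued(frames))    # False at this illustrative threshold
```

The per-frame open/closed labels would come from the deep-fusion eye-state classifier; PERCLOS then turns that frame-level output into a fatigue decision over time.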
8

Xu, Shuwen, Xiaoqing Niu, Hongtao Ru, and Xiaolong Chen. "Classification of Small Targets on Sea Surface Based on Improved Residual Fusion Network and Complex Time–Frequency Spectra." Remote Sensing 16, no. 18 (2024): 3387. http://dx.doi.org/10.3390/rs16183387.

Full text
Abstract:
To address the problem that conventional neural networks trained on radar echo data cannot handle the phase of the echoes, resulting in insufficient information utilization and limited performance in detection and classification, we extend neural networks from the real-valued neural networks to the complex-valued neural networks, presenting a novel algorithm for classifying small sea surface targets. The proposed algorithm leverages an improved residual fusion network and complex time–frequency spectra. Specifically, we augment the Deep Residual Network-50 (ResNet50) with a spatial pyramid pooling (SPP) module to fuse feature maps from different receptive fields. Additionally, we enhance the feature extraction and fusion capabilities by replacing the conventional residual block layer with a multi-branch residual fusion (MBRF) module. Furthermore, we construct a complex time–frequency spectrum dataset based on radar echo data from four different types of sea surface targets. We employ a complex-valued improved residual fusion network for learning and training, ultimately yielding the result of small target classification. By incorporating both the real and imaginary parts of the echoes, the proposed complex-valued improved residual fusion network has the potential to extract more comprehensive features and enhance classification performance. Experimental results demonstrate that the proposed method achieves superior classification performance across various evaluation metrics.
9

Li, Yuqiang, Wei Chen, Jing Liao, and Chun Liu. "DAMGNN: Deep adaptive multi-channel graph neural networks." Intelligent Data Analysis 26, no. 4 (2022): 873–91. http://dx.doi.org/10.3233/ida-215958.

Full text
Abstract:
Recently, several studies have reported that Graph Convolutional Networks (GCN) exhibit defects in integrating node features and topological structures in graphs. Although the proposal of AMGCN compensates for the drawbacks of GCN to some extent, it still cannot solve GCN’s insufficient fusion abilities fundamentally. Thus it is essential to find a network component with stronger fusion abilities to substitute GCN. Meanwhile, a Deep Adaptive Graph Neural Network (DAGNN) proposed by Liu et al. can adaptively aggregate information from different hops of neighborhoods, which remarkably benefits its fusion abilities. To replace GCN with DAGNN network in AMGCN model and further strengthen the fusion abilities of DAGNN network itself, we make further improvements based on DAGNN model to obtain DAGNN variant. Moreover, experimentally the fusion abilities of the DAGNN variant are verified to be far stronger than GCN. And then build on that, we propose a Deep Adaptive Multi-channel Graph Neural Network (DAMGNN). The results of lots of comparative experiments on multiple benchmark datasets show that the DAMGNN model can extract relevant information from node features and topological structures to the maximum extent for fusion, thus significantly improving the accuracy of node classification.
10

Ma, Xiaole, Zhihai Wang, Shaohai Hu, and Shichao Kan. "Multi-Focus Image Fusion Based on Multi-Scale Generative Adversarial Network." Entropy 24, no. 5 (2022): 582. http://dx.doi.org/10.3390/e24050582.

Full text
Abstract:
The methods based on the convolutional neural network have demonstrated its powerful information integration ability in image fusion. However, most of the existing methods based on neural networks are only applied to a part of the fusion process. In this paper, an end-to-end multi-focus image fusion method based on a multi-scale generative adversarial network (MsGAN) is proposed that makes full use of image features by a combination of multi-scale decomposition with a convolutional neural network. Extensive qualitative and quantitative experiments on the synthetic and Lytro datasets demonstrated the effectiveness and superiority of the proposed MsGAN compared to the state-of-the-art multi-focus image fusion methods.
11

Ni, Feng, Junnian Wang, Jialin Tang, Wenjun Yu, and Ruihan Xu. "Side channel analysis based on feature fusion network." PLOS ONE 17, no. 10 (2022): e0274616. http://dx.doi.org/10.1371/journal.pone.0274616.

Full text
Abstract:
Various physical information can leak while an encryption algorithm is running on a device, and side-channel analysis exploits these leakages to recover keys. Because deep learning is sensitive to data features, applying deep learning algorithms effectively improves the efficiency and accuracy of side-channel analysis. However, a considerable part of existing research is based on traditional neural networks, where the effectiveness of key recovery is improved by increasing the size of the network; the computational complexity then increases accordingly, and problems such as overfitting, low training efficiency, and weak feature-extraction ability occur. In this paper, we construct an improved lightweight convolutional neural network based on a feature fusion network. The new network and traditional neural networks are each applied to side-channel analysis in comparative experiments. The results show that the new network converges faster, is more robust and more accurate, and does not overfit. A heatmap visualization method was introduced for analysis: the new network's heat values are higher and more concentrated in the key interval. Side-channel analysis based on the feature fusion network therefore performs better than analysis based on traditional neural networks.
12

Wu, Jian, Liang Xu, Qi Chen, and Zhihui Ye. "Multi-sensor data fusion path combining fuzzy theory and neural networks." Intelligent Decision Technologies 18, no. 4 (2024): 3365–78. https://doi.org/10.3233/idt-240316.

Full text
Abstract:
In the development of automation and intelligent systems, multi-sensor data fusion technology is crucial. However, because of the uncertainty and incompleteness of sensor data, how to effectively fuse these data has always been a challenge. To solve this problem, the study combines fuzzy theory and neural networks to investigate multi-sensor data transmission and data fusion. A sensor-network clustering algorithm based on whale-algorithm-optimized fuzzy logic and a neural network data fusion algorithm optimized with the sparrow algorithm were designed. Performance tests showed that the first-node death time of the data fusion algorithm is delayed to round 1122, which is 391 and 186 rounds later than the two comparison algorithms, respectively. In the same round, the remaining energy was always greater than that of the comparison algorithms, and the difference gradually increased. The results indicate that the proposed multi-sensor data fusion path combining fuzzy theory and neural networks improves network efficiency and node energy utilization and extends network lifespan.
13

Cui, Dong Yan, and Zai Xing Xie. "Based on Information Fusion Integrated Wavelet Neural Network Fault Diagnosis." Advanced Materials Research 219-220 (March 2011): 1077–80. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.1077.

Full text
Abstract:
In this paper, an integrated wavelet neural network fault diagnosis system is established based on information fusion technology. The effective combination of fault characteristic information shows that integrated wavelet neural networks make better use of multiple kinds of characteristic information than a single wavelet neural network, solving problems that are difficult for a single network to resolve.
14

Nesterov-Mueller, Alexander. "The Combinatorial Fusion Cascade as a Neural Network." AI 6, no. 2 (2025): 23. https://doi.org/10.3390/ai6020023.

Full text
Abstract:
The combinatorial fusion cascade provides a surprisingly simple and complete explanation for the origin of the genetic code based on competing protocodes. Although its molecular basis is only beginning to be uncovered, it represents a natural pattern of information generation from initial signals and has potential applications in designing more-efficient neural networks. By utilizing the properties of the combinatorial fusion cascade, we demonstrate its embedding into deep neural networks with sequential fully connected layers using the dynamic matrix method and compare the resulting modifications. We observe that the Fiedler Laplacian eigenvector of a combinatorial cascade neural network does not reflect the cascade architecture. Instead, eigenvectors associated with the cascade structure exhibit higher Laplacian eigenvalues and are distributed widely across the network. We analyze a text classification model consisting of two sequential transformer layers with an embedded cascade architecture. The cascade shows a significant influence on the classifier’s performance, particularly when trained on a reduced dataset (approximately 3% of the original). The properties of the combinatorial fusion cascade are further examined for their application in training neural networks without relying on traditional error backpropagation.
15

Wang, Haiwang, Bin Wang, Lulu Wu, and Qiang Tang. "Multihydrophone Fusion Network for Modulation Recognition." Sensors 22, no. 9 (2022): 3214. http://dx.doi.org/10.3390/s22093214.

Full text
Abstract:
Deep learning (DL)-based modulation recognition methods of underwater acoustic communication signals are mostly applied to a single hydrophone reception scenario. In this paper, we propose a novel end-to-end multihydrophone fusion network (MHFNet) for multisensory reception scenarios. MHFNet consists of a feature extraction module and a fusion module. The feature extraction module extracts the features of the signals received by the multiple hydrophones. Then, through the neural network, the fusion module fuses and classifies the features of the multiple signals. MHFNet takes full advantage of neural networks and multihydrophone reception to effectively fuse signal features for realizing improved modulation recognition performance. Experimental results on simulation and practical data show that MHFNet is superior to other fusion methods. The classification accuracy is improved by about 16%.
16

Fu, Qiang, and Hongbin Dong. "Spiking Neural Network Based on Multi-Scale Saliency Fusion for Breast Cancer Detection." Entropy 24, no. 11 (2022): 1543. http://dx.doi.org/10.3390/e24111543.

Full text
Abstract:
Deep neural networks have been successfully applied in the fields of image recognition and object detection, with recognition results close to or even superior to those of human beings. A deep neural network takes the activation function as its basic unit and is inferior, in terms of biological interpretability, to the spiking neural network, which takes the spiking neuron model as its basic unit. The spiking neural network is considered the third-generation artificial neural network; it is event-driven, has low power consumption, and models the process by which nerve cells go from receiving a stimulus to firing spikes. However, it is difficult to train a spiking neural network directly because spiking neurons are non-differentiable; in particular, the back-propagation algorithm cannot be applied directly. Therefore, the application scenarios of spiking neural networks are not as extensive as those of deep neural networks, and spiking neural networks are mostly used in simple image classification tasks. This paper proposes a spiking neural network method for object detection in medical images, obtained by converting a deep neural network into a spiking neural network. The detection framework relies on the YOLO structure and uses a feature pyramid to obtain multi-scale features of the image. By fusing the high resolution of low-level features with the strong semantic information of high-level features, the detection precision of the network is improved. The proposed method is applied to detect the location and classification of breast lesions in ultrasound and X-ray datasets, and the results are 90.67% and 92.81%, respectively.
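The spiking neuron unit that replaces the activation function can be illustrated with a leaky integrate-and-fire model: the membrane potential leaks, accumulates input, and emits a spike (then resets) when it crosses a threshold. This is a toy sketch, not the paper's conversion procedure, and all constants are illustrative:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: return the spike train for an input sequence."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current      # leak, then integrate this step's input
        if v >= threshold:
            spikes.append(1)
            v = 0.0                 # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5]))  # [0, 0, 1]
```

The all-or-nothing spike output is exactly what makes the unit non-differentiable, which is why direct back-propagation fails and conversion from a trained deep network is used instead.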
17

Yin, Haoran, and Junqin Diao. "Signal automatic modulation based on AMC neural network fusion." PLOS ONE 19, no. 6 (2024): e0304531. http://dx.doi.org/10.1371/journal.pone.0304531.

Full text
Abstract:
With the rapid development of modern communication technology, finding new ways to effectively modulate signals and to classify and recognize the results of automatic modulation has become a core problem in the field of communication. To further improve communication quality and system processing efficiency, this study combines two different neural network algorithms to optimize the traditional automatic modulation classification method. The paper first discusses the basic technologies involved in the communication process, including automatic signal modulation and signal classification. Then, combining a parallel convolutional network and a simple recurrent unit network, automatic modulation classification models with three different connection paths are constructed. Performance tests show that the classification model reaches a stable training and validation state when the two networks are connected: after 20 and 29 iterations, the loss values are 0.13 and 0.18, respectively. In addition, when the signal-to-noise ratio (SNR) is 25 dB, the classification accuracy of the combined parallel convolutional and simple recurrent unit model is as high as 0.99. Finally, the models retain stable correct-classification probabilities when Doppler shift is introduced as interference in a practical application environment. In summary, the designed neural network fusion classification model significantly remedies the shortcomings of traditional automatic modulation classification methods and further improves the classification accuracy of modulated signals.
18

Lin, Cheng-Jian, Min-Su Huang, and Chin-Ling Lee. "Malware Classification Using Convolutional Fuzzy Neural Networks Based on Feature Fusion and the Taguchi Method." Applied Sciences 12, no. 24 (2022): 12937. http://dx.doi.org/10.3390/app122412937.

Full text
Abstract:
The applications of computer networks are increasingly extensive, and networks can be remotely controlled and monitored. Cyber hackers can exploit vulnerabilities and steal crucial data or conduct remote surveillance through malicious programs. The frequency of malware attacks is increasing, and malicious programs are constantly being updated. Therefore, more effective malware detection techniques are being developed. In this paper, a convolutional fuzzy neural network (CFNN) based on feature fusion and the Taguchi method is proposed for malware image classification; this network is referred to as FT-CFNN. Four fusion methods are proposed for the FT-CFNN, namely global max pooling fusion, global average pooling fusion, channel global max pooling fusion, and channel global average pooling fusion. Data are fed into this network architecture and then passed through two convolutional layers and two max pooling layers. The feature fusion layer is used to reduce the feature size and integrate the network information. Finally, a fuzzy neural network is used for classification. In addition, the Taguchi method is used to determine optimal parameter combinations to improve classification accuracy. This study used the Malimg dataset to evaluate the accuracy of the proposed classification method. The accuracy values exhibited by the proposed FT-CFNN, proposed CFNN, and original LeNet model in malware family classification were 98.61%, 98.13%, and 96.68%, respectively.
19

Przybyła-Kasperek, Małgorzata, and Kwabena Frimpong Marfo. "Neural Network Used for the Fusion of Predictions Obtained by the K-Nearest Neighbors Algorithm Based on Independent Data Sources." Entropy 23, no. 12 (2021): 1568. http://dx.doi.org/10.3390/e23121568.

Full text
Abstract:
The article concerns the problem of classification based on independent data sets—local decision tables. The aim of the paper is to propose a classification model for dispersed data using a modified k-nearest neighbors algorithm and a neural network. A neural network, more specifically a multilayer perceptron, is used to combine the prediction results obtained from the local tables. Prediction results are stored at the measurement level and generated using a modified k-nearest neighbors algorithm. The task of the neural network is to combine these results and provide a common prediction. The article studies various neural network structures (different numbers of neurons in the hidden layer) and compares the results with those generated by other fusion methods, such as majority voting, the Borda count method, the sum rule, a method based on decision templates and a method based on the theory of evidence. Based on the obtained results, it was found that the neural network always generates unambiguous decisions, which is a great advantage, as most of the other fusion methods generate ties. Moreover, if only unambiguous results are considered, the neural network gives much better results than the other fusion methods. If ambiguity is allowed, some fusion methods are slightly better, but only because they may generate several decisions for a test object.
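The majority-voting baseline against which the MLP fusion is compared is simple to state: each local table casts its k-NN prediction as a vote, and a tie between classes is exactly the ambiguous outcome the neural network avoids. A minimal sketch (labels are hypothetical):

```python
from collections import Counter

def majority_vote(local_predictions):
    """Fuse local-table predictions by voting; None signals an ambiguous tie."""
    counts = Counter(local_predictions).most_common()
    best_label, best_count = counts[0]
    if sum(1 for _, n in counts if n == best_count) > 1:
        return None  # tie between classes: no unambiguous decision
    return best_label

print(majority_vote(["ill", "ill", "healthy"]))  # 'ill'
print(majority_vote(["ill", "healthy"]))         # None
```

Replacing this vote with a trained multilayer perceptron over the measurement-level scores is what lets the proposed model always commit to a single decision.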
20

Jiang, Haiyang, Yaozong Pan, Jian Zhang, and Haitao Yang. "Battlefield Target Aggregation Behavior Recognition Model Based on Multi-Scale Feature Fusion." Symmetry 11, no. 6 (2019): 761. http://dx.doi.org/10.3390/sym11060761.

Full text
Abstract:
In this paper, our goal is to improve the recognition accuracy of battlefield target aggregation behavior while maintaining the low computational cost of spatio-temporal depth neural networks. To this end, we propose a novel 3D-CNN (3D Convolutional Neural Networks) model, which extends the idea of multi-scale feature fusion to the spatio-temporal domain, and enhances the feature extraction ability of the network by combining feature maps of different convolutional layers. In order to reduce the computational complexity of the network, we further improved the multi-fiber network, and finally established an architecture—3D convolution Two-Stream model based on multi-scale feature fusion. Extensive experimental results on the simulation data show that our network significantly boosts the efficiency of existing convolutional neural networks in the aggregation behavior recognition, achieving the most advanced performance on the dataset constructed in this paper.
21

Cao, Y. B., and X. P. Xie. "Fusion Identification for Wear Particles Based on Dempster-Shafter Evidential Reasoning and Back-Propagation Neural Network." Key Engineering Materials 329 (January 2007): 341–46. http://dx.doi.org/10.4028/www.scientific.net/kem.329.341.

Full text
Abstract:
Based on a back-propagation neural network and Dempster-Shafer evidential reasoning, a fused classification method for identifying wear particles is put forward. Firstly, digital wear debris images are processed with image-processing methods. Then, wear particle features are obtained from the images by means of statistical analysis and Fourier analysis. Next, an integrated neural network made of two sub-neural networks, based on the statistical analysis and the Fourier analysis respectively, is established, and typical wear particle features are provided as training samples. After each sub-BP neural network has been trained successfully, the preliminary diagnosis of each sub-network is obtained. The final fusion diagnosis results are then obtained with Dempster-Shafer evidential reasoning. In the end, a practical example shows that the fusion results are more accurate than those from a single method alone.
22

Hu, Xiaoran, and Masayuki Yamamura. "Global Local Fusion Neural Network for Multimodal Sentiment Analysis." Applied Sciences 12, no. 17 (2022): 8453. http://dx.doi.org/10.3390/app12178453.

Full text
Abstract:
With the popularity of social networking services, people are increasingly inclined to share their opinions and feelings on social networks, leading to the rapid increase in multimodal posts on various platforms. Therefore, multimodal sentiment analysis has become a crucial research field for exploring users' emotions. The complex and complementary interactions between images and text greatly heighten the difficulty of sentiment analysis. Previous works conducted rough fusion operations and ignored fine-grained fusion features for the sentiment task, and thus did not obtain sufficient interaction information for sentiment analysis. This paper proposes a global local fusion neural network (GLFN), which comprehensively considers global and local fusion features, aggregating these features to analyze user sentiment. The model first extracts overall fusion features through attention modules as modality-based global features. Then, coarse-to-fine fusion learning is applied to build local fusion features effectively. Specifically, a cross-modal module is used for rough fusion, and fine-grained fusion is applied to capture the interaction information between objects and words. Finally, we integrate all features to achieve a more reliable prediction. Extensive experimental results, comparisons, and visualizations on public datasets demonstrate the effectiveness of the proposed model for multimodal sentiment classification.
APA, Harvard, Vancouver, ISO, and other styles
23

Zhu, Wenkai, Xueying Sun, and Qiang Zhang. "DCG-Net: Enhanced Hyperspectral Image Classification with Dual-Branch Convolutional Neural Network and Graph Convolutional Neural Network Integration." Electronics 13, no. 16 (2024): 3271. http://dx.doi.org/10.3390/electronics13163271.

Full text
Abstract:
In recent years, graph convolutional neural networks (GCNs) and convolutional neural networks (CNNs) have made significant strides in hyperspectral image (HSI) classification. However, existing models often encounter information redundancy and feature mismatch during feature fusion, and they struggle with small-scale refined features. To address these issues, we propose DCG-Net, an innovative classification network integrating CNN and GCN architectures. Our approach includes the development of a double-branch expanding network (E-Net) to enhance spectral features and efficiently extract high-level features. Additionally, we incorporate a GCN with an attention mechanism to facilitate the integration of multi-space scale superpixel-level and pixel-level features. To further improve feature fusion, we introduce a feature aggregation module (FAM) that adaptively learns channel features, enhancing classification robustness and accuracy. Comprehensive experiments on three widely used datasets show that DCG-Net achieves superior classification results compared to other state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
24

Lin, Qing. "Real-Time Multitarget Tracking for Panoramic Video Based on Dual Neural Networks for Multisensor Information Fusion." Mathematical Problems in Engineering 2022 (June 2, 2022): 1–11. http://dx.doi.org/10.1155/2022/8313471.

Full text
Abstract:
A multitarget real-time tracking system for panoramic video with multisensor information fusion dual neural networks is studied and implemented by combining dual neural networks, fused geometric features, and deep learning-based real-time target tracking algorithms. The motion model of multisensor information fusion is analyzed, the dual neural network perturbation model is introduced, and the state variables of the system are determined. Combined with the data structure model of multisensor information fusion, a least-squares-based optimization function is constructed. In this optimization function, the image sensor is modeled as an observation-constrained residual term based on the pixel plane, the inertial measurement unit is modeled as an observation-constrained residual term based on the motion model, and the motion relationship between sensors is modeled as a motion-constrained residual term, yielding a mathematical model for real-time multitarget tracking of panoramic video. Simulation results show that, compared with the optimal distributed multitarget fusion algorithm without feedback, the integrated position estimation accuracy of the proposed fusion algorithm improves by 10.96% and the integrated speed estimation accuracy improves by 6.32%.
The proposed panoramic video multitarget real-time tracking algorithm based on the dual neural network effectively improves the target tracking accuracy of the model on degraded frames (motion blur, target occlusion, out-of-focus, etc.), and multiframe feature fusion effectively improves the stability of the algorithm for target location and category detection. The research in this paper provides technical support and theory for panoramic video multitarget real-time tracking.
APA, Harvard, Vancouver, ISO, and other styles
25

Jiang, Shan, Xiaofeng Liao, Yuming Feng, Zilin Gao, and Babatunde Oluwaseun Onasanya. "A new fusion neural network model and credit card fraud identification." PLOS ONE 19, no. 10 (2024): e0311987. http://dx.doi.org/10.1371/journal.pone.0311987.

Full text
Abstract:
Credit card fraud identification is an important issue in risk prevention and control for banks and financial institutions. In order to establish an efficient credit card fraud identification model, this article studied the relevant factors that affect fraud identification. A credit card fraud identification model based on neural networks was constructed, and in-depth discussions and research were conducted. First, the layers of the neural network were deepened to improve the prediction accuracy of the model; second, the hidden-layer width of the neural network was increased to the same end. This article proposes a new fusion neural network model by combining deep neural networks and wide neural networks, and applies the model to credit card fraud identification. The characteristic of this model is that its prediction accuracy and F1 score are relatively high. Finally, the model is trained with stochastic gradient descent. On the test set, the proposed method achieves an accuracy of 96.44% and an F1 value of 96.17%, demonstrating good fraud recognition performance. By comparison, the method proposed in this paper is superior to machine learning models, ensemble learning models, and deep learning models.
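The deep-plus-wide fusion the abstract outlines can be sketched as a forward pass in which a shallow linear ("wide") branch and a small MLP ("deep") branch share the input and their logits are summed before a sigmoid. All shapes, weights, and the feature dimension below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def wide_deep_forward(x, params):
    """One forward pass of a hypothetical wide & deep fusion model:
    the wide branch is a linear layer on the raw features, the deep
    branch is a two-hidden-layer MLP; their logits are summed and
    passed through a sigmoid (fraud vs. legitimate)."""
    h = relu(x @ params["W1"] + params["b1"])
    h = relu(h @ params["W2"] + params["b2"])
    logit = h @ params["w_deep"] + x @ params["w_wide"] + params["b_out"]
    return 1.0 / (1.0 + np.exp(-logit))  # fraud probability

d = 8  # assumed number of transaction features
params = {
    "W1": rng.normal(scale=0.1, size=(d, 16)), "b1": np.zeros(16),
    "W2": rng.normal(scale=0.1, size=(16, 16)), "b2": np.zeros(16),
    "w_deep": rng.normal(scale=0.1, size=16),
    "w_wide": rng.normal(scale=0.1, size=d),
    "b_out": 0.0,
}
p = wide_deep_forward(rng.normal(size=d), params)
```

In practice both branches would be trained jointly (here, as the abstract notes, with stochastic gradient descent); the sketch only shows how the two branches fuse at the logit level.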
APA, Harvard, Vancouver, ISO, and other styles
26

Seeland, Marco, and Patrick Mäder. "Multi-view classification with convolutional neural networks." PLOS ONE 16, no. 1 (2021): e0245230. http://dx.doi.org/10.1371/journal.pone.0245230.

Full text
Abstract:
Humans’ decision-making process often relies on visual information from different views or perspectives. However, in machine-learning-based image classification we typically infer an object’s class from just a single image of the object. Especially for challenging classification problems, the visual information conveyed by a single image may be insufficient for an accurate decision. We propose a classification scheme that relies on fusing visual information captured in images depicting the same object from multiple perspectives. Convolutional neural networks are used to extract and encode visual features from the multiple views, and we propose strategies for fusing this information. More specifically, we investigate the following three strategies: (1) fusing convolutional feature maps at differing network depths; (2) fusion of bottleneck latent representations prior to classification; and (3) score fusion. We systematically evaluate these strategies on three datasets from different domains. Our findings emphasize the benefit of integrating information fusion into the network rather than performing it by post-processing of classification scores. Furthermore, we demonstrate through a case study that already-trained networks can be easily extended by the best fusion strategy, outperforming other approaches by a large margin.
APA, Harvard, Vancouver, ISO, and other styles
27

He, Ling, Yun Xu, and Qi Li. "Online Prediction of Tool Life Based on Multi-source Data Fusion." Academic Journal of Science and Technology 13, no. 3 (2024): 78–85. https://doi.org/10.54097/v8jese64.

Full text
Abstract:
To address the challenges of difficult data acquisition and low prediction accuracy in tool life prediction, a method for data collection from FANUC numerical control machine tools and prediction of tool life based on the SSA-BP neural network is proposed. Data such as tool running time, spindle load rate, spindle load, X-axis load rate, Y-axis load rate, Z-axis load rate, X-axis current, Y-axis current, and Z-axis current are collected. By optimizing the core parameters of the BP neural network with the SSA algorithm, a tool life prediction model is constructed. Compared with traditional BP neural networks and GWO-BP neural networks, experimental results show that this model has the closest tool life prediction values to the actual values and better network stability, making it more suitable for tool life prediction.
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Ge, Hu Jing, and Chen Guangsheng. "Fusion Process Neural Networks Classifier Oriented Time Series." Journal of Computational and Theoretical Nanoscience 16, no. 10 (2019): 4059–63. http://dx.doi.org/10.1166/jctn.2019.6946.

Full text
Abstract:
Based on the consideration of complementary advantages, different wavelet, fractal, and statistical methods are integrated to complete the classification feature extraction of time series. Combined with the advantage of process neural networks in processing time-varying information, we propose a fusion classifier with a process neural network oriented to time series. By taking advantage of multi-fractal processing of the nonlinear features of time series data, the strong adaptability of wavelet techniques for time series data, and the effect of statistical features on time series classification, we achieve the classification feature extraction of time series. Additionally, using the time-varying input characteristics of process neural networks, the pattern matching of time-varying input information and the space-time aggregation operation are realized. The feature extraction of time series with the above three methods is fused into the distance calculation between time-varying inputs and cluster space in the process neural network. We derive the learning algorithm for the fused process neural network and optimize the calculation process of the time series classifier. Finally, we report the performance of our classification method using the Synthetic Control Chart data from the UCI repository and illustrate the advantage and validity of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
29

Wen, Yu, Xiaomin Yang, Turgay Celik, Olga Sushkova, and Marcelo Keese Albertini. "Multifocus image fusion using convolutional neural network." Multimedia Tools and Applications 79, no. 45-46 (2020): 34531–43. http://dx.doi.org/10.1007/s11042-020-08945-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Frank, Deborah, and J. Bryan Pletta. "Neural Network Sensor Fusion for Security Applications." Journal of Intelligent Material Systems and Structures 5, no. 2 (1994): 279–88. http://dx.doi.org/10.1177/1045389x9400500213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Kotwal, Arjun, and Dr Ramesh Kumar. "Medical image fusion using convolutional neural network." International Journal of Computing, Programming and Database Management 4, no. 1 (2023): 40–42. http://dx.doi.org/10.33545/27076636.2023.v4.i1a.79.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kim, Deageon. "Text Classification Based on Neural Network Fusion." Tehnički glasnik 17, no. 3 (2023): 359–66. http://dx.doi.org/10.31803/tg-20221228154330.

Full text
Abstract:
The goal of text classification is to identify the category to which a text belongs. Text categorization is widely used in email detection, sentiment analysis, topic labeling, and other fields. However, good text representation is the key to improving the capability of NLP tasks. Traditional text representation adopts the bag-of-words model or the vector space model, which loses the context information of the text and faces the problems of high dimensionality and high sparsity. In recent years, with the increase of data and the improvement of computing performance, the use of deep learning technology to represent and classify texts has attracted great attention. Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and RNNs with attention mechanisms are used to represent the text and then to classify it and perform other NLP tasks, all with better performance than traditional methods. In this paper, we design two sentence-level models based on deep networks, as follows: (1) A text representation and classification model based on bidirectional RNN and CNN (BRCNN). BRCNN's input is the word vector corresponding to each word in the sentence; after using the RNN to extract word-order information, a CNN is used to extract higher-level features of the sentence. After convolution, max pooling is used to obtain sentence vectors, and finally a softmax classifier is used for classification. The RNN captures word-order information in sentences, while the CNN extracts useful features. Experiments on eight text classification tasks show that the BRCNN model obtains better text feature representations, with classification accuracy equal to or higher than the prior art. (2) An attention mechanism and CNN (ACNN) model, which uses an RNN with an attention mechanism to obtain the context vector, then a CNN to extract higher-level feature information.
Max pooling is adopted to obtain a sentence vector, and finally a softmax classifier is used to classify the text. Experiments on eight text classification benchmark datasets show that ACNN improves the stability of model convergence and converges to an optimal or locally optimal solution better than BRCNN.
APA, Harvard, Vancouver, ISO, and other styles
33

Manoj, Kumar Singh, Arora Simran, Saini Shobhit, Jafri Faraz, and Ali Wazahat. "MULTI-DISEASE DETECTION WITH NEURAL NETWORK FUSION." International Journal of Engineering Sciences & Emerging Technologies 11, no. 2 (2023): 102–7. https://doi.org/10.5281/zenodo.10435444.

Full text
Abstract:
In this study, we address the challenge of analyzing imbalanced medical data for accurate disease prediction. Specifically, we focus on developing a supervised model that utilizes machine learning and deep learning techniques to detect multiple diseases: chronic kidney disease, malaria, and pneumonia. Chronic kidney disease is marked by a gradual decline in kidney function, leading to the accumulation of waste products in the blood and the onset of related health complications. Malaria is a severe illness caused by Plasmodium parasites, which are transmitted through mosquito bites and can pose a significant threat to human life. Pneumonia is an infection that inflames the air sacs in the lungs, often resulting in fever, fluid accumulation, and breathing difficulties. To facilitate rapid detection by lab assistants, our model leverages accurate disease prediction based on provided data. We fine-tune a neural network for chronic kidney disease prediction, while adopting convolutional neural networks (CNNs) for analyzing parasite-infected cells and chest X-rays in malaria and pneumonia detection, respectively. Additionally, we outline the potential expansion of this project to include other diseases in the future.
APA, Harvard, Vancouver, ISO, and other styles
34

Wei, Biaowen. "Application of Neural Network Based on Multisource Information Fusion in Production Cost Prediction." Wireless Communications and Mobile Computing 2022 (March 15, 2022): 1–13. http://dx.doi.org/10.1155/2022/5170734.

Full text
Abstract:
Production cost forecasting is an important basis for cost accounting, cost decision-making, and cost planning. It provides the yardstick necessary to reduce product costs and is an important way to enhance enterprise competitiveness and improve system benefits. A neural network based on multisource information fusion is a manifestation of integrated internal knowledge. By learning to integrate multiple sources of information, it is easier to understand cognitive thinking and integrate the complex relationships of uncertain regions into regular signals. Fusion prediction does not need to understand the specific mechanism of the process but can fully approximate various nonlinear functional relationships determined by input and output through the continuous update of its internal weights. This paper mainly studies the application of a neural network based on multisource information fusion in production cost prediction, analyzes multisource information fusion technology, and proposes a method for applying multisource information fusion theory to BP neural networks and RBF networks. Experiments comparing the results of the BP neural network and the RBF network on six cost categories show that the prediction results of the RBF network are closer to the true values and exhibit higher prediction capability. Among them, the error of the RBF network in predicting the total salary for the current month is 0.01004. The performance of the RBF network model is better than that of the BP neural network model.
APA, Harvard, Vancouver, ISO, and other styles
35

Dudekula, Usen, and Purnachand N. "Linear fusion approach to convolutional neural networks for facial emotion recognition." Indonesian Journal of Electrical Engineering and Computer Science 25, no. 3 (2022): 1489. http://dx.doi.org/10.11591/ijeecs.v25.i3.pp1489-1500.

Full text
Abstract:
Facial expression recognition is a challenging problem in the field of computer vision. Several facial expression recognition (FER) algorithms have been proposed in machine learning and deep learning to extract expression knowledge from facial representations. Even though numerous algorithms have been examined, several issues remain, such as lighting changes, rotations, and occlusions. We present an efficient approach to enhance recognition accuracy in this study, advocating transfer learning to fine-tune the parameters of a pre-trained model (VGG19) and a non-pre-trained convolutional neural network (CNN) for the task of image classification. The VGG19 network and the convolutional network derive two channels of expression-related characteristics from the facial grayscale images. The linear fusion algorithm calculates the class by averaging the classification decisions of both channels on the training samples. Final recognition is calculated using a convolutional neural network architecture followed by a softmax classifier. The proposed algorithm can recognize seven basic facial emotions: happiness, surprise, anger, sadness, fear, disgust, and neutral. The average accuracies on the standard datasets CK+ and JAFFE are 98.3% and 92.4%, respectively. Using a deep network with one channel, the proposed algorithm achieves well-comparable performance.
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Jin, Yu Gao, Wei Liu, Arun Kumar Sangaiah, and Hye-Jin Kim. "An intelligent data gathering schema with data fusion supported for mobile sink in wireless sensor networks." International Journal of Distributed Sensor Networks 15, no. 3 (2019): 155014771983958. http://dx.doi.org/10.1177/1550147719839581.

Full text
Abstract:
Numerous tiny sensors are energy-constrained in wireless sensor networks, since most of them are deployed in harsh environments where battery replacement is impossible. Therefore, energy efficiency becomes a significant requirement for routing protocol design. Recent research introduces data fusion to conserve energy; however, much of it does not present a concrete scheme for the fusion process. Emerging machine learning technology provides a novel direction for data fusion and makes it more available and intelligent. In this article, we present an intelligent data gathering schema with data fusion called IDGS-DF. In IDGS-DF, we adopt a neural network to conduct data fusion to improve network performance. First, we partition the whole sensor field into several subdomains using virtual grids. Then cluster heads (CHs) are selected according to node scores, and data fusion is conducted in the CHs using a pretrained neural network. Finally, a mobile agent gathers information along a predefined path. Plenty of experiments demonstrate that our schema can efficiently conserve energy and enhance the lifetime of the network.
APA, Harvard, Vancouver, ISO, and other styles
37

Mu, Guo, and Liu. "A Multi-Scale and Multi-Level Spectral-Spatial Feature Fusion Network for Hyperspectral Image Classification." Remote Sensing 12, no. 1 (2020): 125. http://dx.doi.org/10.3390/rs12010125.

Full text
Abstract:
Extracting spatial and spectral features through deep neural networks has become an effective means of classification of hyperspectral images. However, most networks rarely consider the extraction of multi-scale spatial features and cannot fully integrate spatial and spectral features. In order to solve these problems, this paper proposes a multi-scale and multi-level spectral-spatial feature fusion network (MSSN) for hyperspectral image classification. The network uses the original 3D cube as input data and does not need to use feature engineering. In the MSSN, using different scale neighborhood blocks as the input of the network, the spectral-spatial features of different scales can be effectively extracted. The proposed 3D–2D alternating residual block combines the spectral features extracted by the three-dimensional convolutional neural network (3D-CNN) with the spatial features extracted by the two-dimensional convolutional neural network (2D-CNN). It not only achieves the fusion of spectral features and spatial features but also achieves the fusion of high-level features and low-level features. Experimental results on four hyperspectral datasets show that this method is superior to several state-of-the-art classification methods for hyperspectral images.
APA, Harvard, Vancouver, ISO, and other styles
38

Casabianca, Pietro, and Yu Zhang. "Acoustic-Based UAV Detection Using Late Fusion of Deep Neural Networks." Drones 5, no. 3 (2021): 54. http://dx.doi.org/10.3390/drones5030054.

Full text
Abstract:
Multirotor UAVs have become ubiquitous in commercial and public use. As they become more affordable and more available, the associated security risks further increase, especially in relation to airspace breaches and the danger of drone-to-aircraft collisions. Thus, robust systems must be set in place to detect and deal with hostile drones. This paper investigates the use of deep learning methods to detect UAVs using acoustic signals. Deep neural network models are trained with mel-spectrograms as inputs. In this case, Convolutional Neural Networks (CNNs) are shown to be the better performing network, compared with Recurrent Neural Networks (RNNs) and Convolutional Recurrent Neural Networks (CRNNs). Furthermore, late fusion methods have been evaluated using an ensemble of deep neural networks, where the weighted soft voting mechanism has achieved the highest average accuracy of 94.7%, which has outperformed the solo models. In future work, the developed late fusion technique could be utilized with radar and visual methods to further improve the UAV detection performance.
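The weighted soft voting mechanism that this abstract reports as the best late-fusion method can be sketched as a weighted average of the per-model class-probability vectors followed by an argmax. The model weights and probability values below are made up for illustration; the paper's actual ensemble weighting may differ.

```python
import numpy as np

def weighted_soft_vote(probs, weights):
    """Late fusion by weighted soft voting: average the class-probability
    vectors of several models with normalized weights, then take the
    argmax as the ensemble decision."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the fused vector stays a distribution
    fused = np.tensordot(w, np.asarray(probs, dtype=float), axes=1)
    return fused, int(fused.argmax())

# Hypothetical per-model outputs [P(drone), P(no drone)],
# e.g. from a CNN, an RNN, and a CRNN on one mel-spectrogram.
probs = [[0.9, 0.1],
         [0.4, 0.6],
         [0.7, 0.3]]
fused, label = weighted_soft_vote(probs, weights=[0.5, 0.2, 0.3])
```

Because probabilities rather than hard labels are averaged, a confident model can outvote two lukewarm ones, which is what distinguishes soft from hard (majority) voting.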
APA, Harvard, Vancouver, ISO, and other styles
39

Dai, Yanjun, and Lin Su. "Economic Structure Analysis Based on Neural Network and Bionic Algorithm." Complexity 2021 (May 8, 2021): 1–11. http://dx.doi.org/10.1155/2021/9925823.

Full text
Abstract:
In this article, an in-depth study and analysis of economic structure is carried out using a neural network fusion bionic algorithm. The method system defines the weight space and structure space of neural networks from the perspective of optimization theory, proposes a bionic optimization algorithm over the weight space and structure space, and establishes a neuroevolutionary method with shallow and deep neural networks as the research objects. For shallow neuroevolution, an improved genetic algorithm (IGA) based on elite heuristic operation and a migration strategy and an improved coyote optimization algorithm (ICOA) based on adaptive influence weights are proposed, and the IGA-based and ICOA-based shallow neuroevolutionary methods are applied to the weight space of backpropagation (BP) neural networks. For deep neuroevolution, the structure space of convolutional neural networks is proposed to address the search space design of neural architecture search (NAS), and a GA-based deep neuroevolutionary method over this structure space is proposed to handle the explosive combinations that numerous hyperparameters and network structure parameters produce when designing deep learning models. The neural network fusion bionic algorithm has application value for exploring the spatial structure and dynamics of the socioeconomic system, improving the perception of the socioeconomic situation, and understanding the laws of social development. The idea is also verifiable with present computer technology.
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Xiaojun, Haowen Yan, Weiying Xie, Lu Kang, and Yi Tian. "An Improved Pulse-Coupled Neural Network Model for Pansharpening." Sensors 20, no. 10 (2020): 2764. http://dx.doi.org/10.3390/s20102764.

Full text
Abstract:
Pulse-coupled neural network (PCNN) and its modified models are suitable for dealing with multi-focus and medical image fusion tasks. Unfortunately, PCNNs are difficult to directly apply to multispectral image fusion, especially when the spectral fidelity is considered. A key problem is that most fusion methods using PCNNs usually focus on the selection mechanism either in the space domain or in the transform domain, rather than a details injection mechanism, which is of utmost importance in multispectral image fusion. Thus, a novel pansharpening PCNN model for multispectral image fusion is proposed. The new model is designed to acquire the spectral fidelity in terms of human visual perception for the fusion tasks. The experimental results, examined by different kinds of datasets, show the suitability of the proposed model for pansharpening.
APA, Harvard, Vancouver, ISO, and other styles
41

MAMUTA, Maryna, and Oleksandr MAMUTA. "PERFORMANCE EVALUATION OF DUAL CHANNEL OPTOELECTRONIC SURVEILLANCE SYSTEM WITH NEURAL NETWORK INFORMATION FUSION." Herald of Khmelnytskyi National University. Technical sciences 309, no. 3 (2022): 229–32. http://dx.doi.org/10.31891/2307-5732-2022-309-3-229-232.

Full text
Abstract:
Optoelectronic surveillance systems are widely used in many areas of human activity: in agriculture, medicine, military systems, and in search and rescue operations. These systems are designed for space, airborne, ground, and maritime applications. They have to work with absolute certainty under all climate, light, and weather conditions, and, on the other hand, they have to be cost-effective. That is why optoelectronic surveillance systems use channels that provide complementary information about the object and background, and for most applications, channels operating in the visible and thermal ranges of the optical spectrum are an indispensable element of such systems. Optoelectronic surveillance systems have to continuously meet and even exceed today's performance and reliability requirements, which calls not only for cutting-edge technology but also for adaptive signal processing. Information fusion is a state-of-the-art technique to improve overall system performance. Numerous image fusion methods have been proposed over several decades, but the most promising are neural networks. A television system operating in the visible range of the optical spectrum and a thermal system operating in the long-wave infrared range were chosen for the modeling. The probabilities of target detection, recognition, and identification were used for performance evaluation and were modeled for the separate long-wave infrared and television channels as well as for the fused data. Information fusion was performed with convolutional neural networks. Simulation results showed that the probabilities of target detection, recognition, and identification are almost 6% higher for fused data compared to the separate channels.
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Ying, Ran Zhang, Yanfeng Li, Jiyuan Xie, Dong Guo, and Laiqiang Song. "Safety Assessment of the Main Beams of Historical Buildings Based on Multisource Data Fusion." Buildings 13, no. 8 (2023): 2022. http://dx.doi.org/10.3390/buildings13082022.

Full text
Abstract:
Taking the main beams of historical buildings as the engineering background, existing theoretical research results related to influencing structural factors were used along with numerical simulation and data fusion methods to examine their integrity. Thus, the application of multifactor data fusion in the safety assessment of the main beams of historical buildings was performed. On the basis of existing structural safety assessment methods, neural networks and rough set theory were combined and applied to the safety assessment of the main beams of historical buildings. The bearing capacity of the main beams was divided into five levels according to the degree to which they met current requirements. The safety assessment database established by a Kohonen neural network was clustered. Thus, the specific evaluation indices corresponding to the five types of safety levels were presented. The rough neural network algorithm, integrating the rough set and neural network, was applied for data fusion with this database. The attribute reduction function of the rough set was used to reduce the input dimension of the neural network, which was trained, underwent a learning process, and then used for predictions. The trained neural network was applied for the safety assessment of the main beams of historical buildings, and six specific attribute index values corresponding to the main beams were directly input to obtain the current safety statuses of the buildings. Corresponding management suggestions were also provided.
APA, Harvard, Vancouver, ISO, and other styles
43

Zhou, Juan, Zonghuan Wu, Shun Zhang, and Peng Wang. "A Fault Diagnosis Method for Power Battery Based on Multiple Model Fusion." Electronics 12, no. 12 (2023): 2724. http://dx.doi.org/10.3390/electronics12122724.

Full text
Abstract:
The widespread adoption and utilization of electric vehicles has been constrained by power battery performance. We proposed a fault diagnosis method for power batteries based on multiple-model fusion. The method effectively fused the advantages of various classification models and avoided the bias of a single model towards certain fault types. Firstly, we collected and sorted parameter information of the power battery during operation. Three common neural networks: back propagation (BP) neural network, convolution neural network (CNN), and long short-term memory (LSTM) neural network, were applied to battery fault diagnosis to output the fault types. Secondly, the fusion algorithm proposed in this paper determined the accurate fault type. Based on the improved voting method, the proposed fusion algorithm, named the multi-level decision algorithm, calculated the voting factors of the diagnostic results of each classification model. According to the set decision thresholds, multi-level decision voting was conducted to avoid neglecting effective classification information from minority models, which can occur with traditional voting methods. Finally, the accuracy and effectiveness of the proposed method were verified by comparing the accuracy of each classification model with the multiple model fusion algorithm.
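The multi-level decision vote described above can be sketched as follows. This is one plausible reading of the mechanism: each model's vote is weighted by a voting factor times its confidence, and if the best-supported fault type does not clear a decision threshold, the scheme falls back to the single most confident model instead of discarding minority opinions. Fault names, confidences, factors, and the threshold are all illustrative assumptions, not the paper's exact algorithm.

```python
def multilevel_vote(preds, confs, factors, threshold=0.5):
    """Hypothetical multi-level decision vote: weighted soft support per
    fault type, with a fallback level when no type clears the threshold."""
    support = {}
    for p, c, f in zip(preds, confs, factors):
        support[p] = support.get(p, 0.0) + c * f
    total = sum(support.values())
    best = max(support, key=support.get)
    if support[best] / total >= threshold:
        return best  # first level: weighted vote is decisive
    # Fallback level: trust the single most confident classifier, so
    # effective information from a minority model is not neglected.
    return preds[max(range(len(confs)), key=lambda i: confs[i])]

# Illustrative BP / CNN / LSTM predictions on one battery sample.
fault = multilevel_vote(preds=["overcharge", "short", "overcharge"],
                        confs=[0.6, 0.9, 0.7],
                        factors=[1.0, 1.0, 1.0])
```

With these numbers the two agreeing models carry the weighted vote even though the dissenting model is individually the most confident; lowering the threshold or raising the dissenter's factor would flip the outcome.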
APA, Harvard, Vancouver, ISO, and other styles
44

Chung, Daewon, and Insoo Sohn. "Neural Network Optimization Based on Complex Network Theory: A Survey." Mathematics 11, no. 2 (2023): 321. http://dx.doi.org/10.3390/math11020321.

Full text
Abstract:
Complex network science is an interdisciplinary field of study based on graph theory, statistical mechanics, and data science. With the powerful tools now available in complex network theory for the study of network topology, it is clear that complex network topology models can be applied to enhance artificial neural network models. In this paper, we provide an overview of the most important works published within the past 10 years on the topic of complex network theory-based optimization methods. This review of the most up-to-date optimized neural network systems reveals that fusing complex network topologies with neural networks improves both accuracy and robustness. By setting out our review findings here, we seek to promote a better understanding of basic concepts and offer a deeper insight into the various research efforts that have led to the use of complex network theory in the optimized neural networks of today.
APA, Harvard, Vancouver, ISO, and other styles
45

Szczęsny, Szymon, and Paweł Pietrzak. "Exocytotic vesicle fusion classification for early disease diagnosis using a mobile GPU microsystem." Neural Computing and Applications 34, no. 6 (2021): 4843–54. http://dx.doi.org/10.1007/s00521-021-06676-2.

Full text
Abstract:
This work addresses the monitoring of vesicle fusions occurring during exocytosis, the main mode of intercellular communication. Certain vesicle behaviors may also indicate precancerous conditions in cells. For this purpose we designed a system able to detect two main types of exocytosis, a full fusion and a kiss-and-run fusion, based on data from multiple amperometric sensors at once. It uses many instances of small perceptron neural networks in a massively parallel manner and runs on the Jetson TX2 platform, which uses a GPU for parallel processing. Based on benchmarking, approximately 140,000 sensors can be processed in real time within a sensor sampling period of 10 ms, with an accuracy of 99%. The work includes an analysis of the system performance with varying neural network sizes, input data sizes, and sampling periods of fusion signals.
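The massively parallel pattern of running many small perceptrons at once maps naturally onto a single matrix operation, which is what makes the GPU deployment efficient. The sketch below uses invented toy weights (total-charge thresholding) purely to illustrate the batching idea, not the paper's trained networks:

```python
import numpy as np

def batched_perceptron(signals, w, b):
    """Evaluate one small perceptron over many sensor channels at once.
    `signals` is (n_sensors, window_len); returns 1 (full fusion) or
    0 (kiss-and-run) per sensor."""
    logits = signals @ w + b       # one matrix product for all sensors
    return (logits > 0).astype(int)

# toy classifier: full fusions release more total charge over the window
window = 8
w = np.ones(window)                # sum the amperometric window
b = -4.0                           # illustrative decision threshold
full = np.full((3, window), 1.0)   # large, sustained current spikes
kiss = np.full((3, window), 0.2)   # brief, small current spikes
labels = batched_perceptron(np.vstack([full, kiss]), w, b)
```

Because every sensor's window is one row of the matrix, adding sensors only grows the batch dimension, which is exactly the workload a GPU parallelizes well.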
APA, Harvard, Vancouver, ISO, and other styles
46

Wei, Jingbo, Lei Chen, Zhou Chen, and Yukun Huang. "An Experimental Study of the Accuracy and Change Detection Potential of Blending Time Series Remote Sensing Images with Spatiotemporal Fusion." Remote Sensing 15, no. 15 (2023): 3763. http://dx.doi.org/10.3390/rs15153763.

Full text
Abstract:
Over one hundred spatiotemporal fusion algorithms have been proposed, but convolutional neural networks trained with large amounts of data for spatiotemporal fusion have not shown significant advantages. In addition, no attention has been paid to whether fused images can be used for change detection. These two issues are addressed in this work. A new dataset consisting of nine pairs of images is designed to benchmark the accuracy of neural networks using one-pair spatiotemporal fusion with neural-network-based models. Notably, the size of each image is significantly larger compared to other datasets used to train neural networks. A comprehensive comparison of the radiometric, spectral, and structural losses is made using fourteen fusion algorithms and five datasets to illustrate the differences in the performance of spatiotemporal fusion algorithms with regard to various sensors and image sizes. A change detection experiment is conducted to test if it is feasible to detect changes in specific land covers using the fusion results. The experiment shows that convolutional neural networks can be used for one-pair spatiotemporal fusion if the sizes of individual images are adequately large. It also confirms that the spatiotemporally fused images can be used for change detection in certain scenes.
APA, Harvard, Vancouver, ISO, and other styles
47

He, Baiqing. "Research on Multi-modal Medical Image Fusion Based on Multi-Scale Attention Mechanism." Journal of Physics: Conference Series 2173, no. 1 (2022): 012037. http://dx.doi.org/10.1088/1742-6596/2173/1/012037.

Full text
Abstract:
Image fusion technology has been widely used and researched in the fields of medicine, aviation, military and civil applications. Compared with traditional image fusion technology, fusion technology based on intelligent algorithms generates more realistic, clear and reliable images with a great deal of detailed information. Deep neural networks have rapidly developed into a research hotspot in medical image analysis, as they can automatically extract hidden pathological information from medical image data. Multi-scale and attention mechanisms are two important modules in neural networks that can significantly enhance the features extracted by the network. The problems that need to be solved for medical image fusion based on a multi-scale attention mechanism include: how to build a multi-scale module, how to build an attention module, and how to combine multi-scale and attention mechanisms. A deep learning network is constructed with a multi-scale attention mechanism; its parameters are then tuned and the network trained to achieve multi-modal medical image fusion. In the process of building the neural network, multi-scale and attention mechanisms are incorporated, and multi-scale features of multi-modal medical images are extracted and enhanced for fusion. Extensive experiments show that: (1) the edge strength of the fused image is expected to increase by 10%-20% over the average of existing algorithms; (2) the fused image achieves high color fidelity and rich detailed information; (3) the time required by the fusion algorithm is expected to be reduced by 1%-10% relative to the average of existing algorithms.
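One common way to build the attention module the abstract mentions is squeeze-and-excitation-style channel attention. The sketch below is an illustrative assumption, not the paper's exact design; the weight shapes and bottleneck width are invented for the example:

```python
import numpy as np

def channel_attention(fmap, w1, w2):
    """SE-style channel attention over a (C, H, W) feature map:
    squeeze -> small FC bottleneck -> sigmoid gate -> channel reweighting."""
    squeeze = fmap.mean(axis=(1, 2))            # global average pool -> (C,)
    hidden = np.maximum(0, w1 @ squeeze)        # ReLU bottleneck
    scale = 1 / (1 + np.exp(-(w2 @ hidden)))    # per-channel gate in (0, 1)
    return fmap * scale[:, None, None]          # enhance informative channels

C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
fmap = rng.normal(size=(C, H, W))               # stand-in for extracted features
w1 = rng.normal(size=(2, C))                    # bottleneck of width 2
w2 = rng.normal(size=(C, 2))
out = channel_attention(fmap, w1, w2)
```

A multi-scale module would feed this gate with features from several receptive fields (e.g. parallel 3x3/5x5/7x7 convolutions) before fusion; the gating step itself is unchanged.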
APA, Harvard, Vancouver, ISO, and other styles
48

Dudekula, Usen, and Purnachand N. "Linear fusion approach to convolutional neural networks for facial emotion recognition." Indonesian Journal of Electrical Engineering and Computer Science 25, no. 3 (2022): 1489–500. https://doi.org/10.11591/ijeecs.v25.i3.pp1489-1500.

Full text
Abstract:
Facial expression recognition is a challenging problem in the scientific field of computer vision. Several face expression recognition (FER) algorithms have been proposed in the fields of machine learning and deep learning to extract expression knowledge from facial representations. Even though numerous algorithms have been examined, several issues such as lighting changes, rotations, and occlusions remain. We present an efficient approach to enhance recognition accuracy in this study, advocating transfer learning to fine-tune the parameters of a pre-trained model (VGG19) and a non-pre-trained convolutional neural network (CNN) for the task of image classification. The VGG19 network and the convolutional network derive two channels of expression-related characteristics from the facial grayscale images. The linear fusion algorithm calculates the class by taking an average of each classification decision on training samples of both channels. Final recognition is calculated using a convolutional neural network architecture followed by a softmax classifier. Seven basic facial emotions (BEs), happiness, surprise, anger, sadness, fear, disgust, and neutral, can be recognized by the proposed algorithm. The average accuracies on the standard "CK+" and "JAFFE" datasets are 98.3% and 92.4%, respectively. Using a deep network with one channel, the proposed algorithm can achieve comparable performance.
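The linear fusion step, averaging the two channels' classification decisions, reduces to averaging their softmax outputs and taking the argmax. The probability vectors below are invented for illustration; only the averaging scheme follows the abstract:

```python
import numpy as np

def linear_fusion(probs_a, probs_b):
    """Average the softmax outputs of the two channels (e.g. VGG19 and the
    plain CNN) and return the fused class decision."""
    fused = (np.asarray(probs_a) + np.asarray(probs_b)) / 2
    return int(np.argmax(fused)), fused

# 7 basic emotions: happiness, surprise, anger, sadness, fear, disgust, neutral
vgg = [0.60, 0.10, 0.05, 0.05, 0.05, 0.05, 0.10]   # confident "happiness"
cnn = [0.30, 0.40, 0.05, 0.05, 0.05, 0.05, 0.10]   # leans "surprise"
label, fused = linear_fusion(vgg, cnn)
```

Here the VGG19 channel's stronger confidence dominates the average, so the fused decision is class 0 (happiness) even though the second channel disagrees.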
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Gang, Pu Yan, Qingwei Tang, Lijuan Yang, and Jie Chen. "Multiscale Feature Fusion for Skin Lesion Classification." BioMed Research International 2023 (January 5, 2023): 1–15. http://dx.doi.org/10.1155/2023/5146543.

Full text
Abstract:
Skin cancer has a high mortality rate, and early detection can greatly reduce patient mortality. The convolutional neural network (CNN) has been widely applied in the field of computer-aided diagnosis. To improve the ability of convolutional neural networks to accurately classify skin lesions, we propose a multiscale feature fusion model for skin lesion classification. We use a two-stream network consisting of a densely connected network (DenseNet-121) and an improved visual geometry group network (VGG-16). In the feature fusion module, we construct multireceptive fields to obtain multiscale pathological information and use generalized mean pooling (GeM pooling) to reduce the spatial dimensionality of lesion features. Finally, we built and tested a system with the developed skin lesion classification model. The experiments were performed on the ISIC2018 dataset and achieve a good classification performance, with a test accuracy of 91.24% and macroaverages of 95%.
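GeM pooling, mentioned in the abstract, is the generalized mean over the spatial dimensions: p = 1 recovers average pooling and large p approaches max pooling. A minimal sketch (the feature map and p value are illustrative):

```python
import numpy as np

def gem_pool(fmap, p=3.0, eps=1e-6):
    """Generalized mean (GeM) pooling over the spatial dims of a (C, H, W)
    feature map: ((1/HW) * sum(x^p))^(1/p), defined on positive activations."""
    clipped = np.clip(fmap, eps, None)
    return (clipped ** p).mean(axis=(1, 2)) ** (1.0 / p)

# one strong activation in an otherwise empty 4x4 map
fmap = np.zeros((1, 4, 4))
fmap[0, 0, 0] = 1.0
avg = gem_pool(fmap, p=1.0)[0]   # plain average pooling: 1/16
gem = gem_pool(fmap, p=3.0)[0]   # sits between average and max
```

Raising p makes the pooled value track the strongest local response, which is useful for lesion features where a small discriminative region should not be washed out by averaging.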
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Jung-Hua, Chun-Shun Tseng, Sih-Yin Shen, and Ya-Yun Jheng. "Self-Organizing Fusion Neural Networks." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 6 (2007): 610–19. http://dx.doi.org/10.20965/jaciii.2007.p0610.

Full text
Abstract:
This paper presents a self-organizing fusion neural network (SOFNN) effective in performing fast clustering and segmentation. Based on a counteracting learning scheme, SOFNN employs two parameters that together control the training in a counteracting manner to obviate problems of over-segmentation and under-segmentation. In particular, a simultaneous region-based updating strategy is adopted to facilitate an interesting fusion effect useful for identifying regions comprising an object in a self-organizing way. To achieve reliable merging, a dynamic merging criterion based on both intra-regional and inter-regional local statistics is used. Such extension in adjacency not only helps achieve more accurate segmentation results, but also improves input noise tolerance. Through iterating the three phases of simultaneous updating, self-organizing fusion, and extended merging, the training process converges without manual intervention, thereby conveniently obviating the need of pre-specifying the terminating number of objects. Unlike existing methods that sequentially merge regions, all regions in SOFNN can be processed in parallel fashion, thus providing great potentiality for a fully parallel hardware implementation.
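A dynamic merging criterion based on intra-regional and inter-regional statistics, as described above, can be illustrated with a toy test: merge two adjacent regions when the gap between their means is small relative to their internal spread. The specific statistics and the factor k are illustrative assumptions, not SOFNN's exact criterion:

```python
import statistics

def should_merge(region_a, region_b, k=1.0):
    """Toy merge test in the spirit of SOFNN: compare the inter-regional
    mean gap against the combined intra-regional spread."""
    gap = abs(statistics.mean(region_a) - statistics.mean(region_b))
    spread = statistics.pstdev(region_a) + statistics.pstdev(region_b)
    return gap <= k * max(spread, 1e-9)  # guard against zero-variance regions
```

Two regions with overlapping intensity statistics merge, while homogeneous regions with very different means stay separate, which is how over- and under-segmentation are counteracted.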
APA, Harvard, Vancouver, ISO, and other styles