To see the other types of publications on this topic, follow the link: Deep Learning Fusion.

Journal articles on the topic 'Deep Learning Fusion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Deep Learning Fusion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Shetty D S, Radhika. "Multi-Modal Fusion Techniques in Deep Learning." International Journal of Science and Research (IJSR) 12, no. 9 (2023): 526–32. http://dx.doi.org/10.21275/sr23905100554.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

P, Jayapal. "Efficient Human-Machine Interface through Deep Learning Fusion." International Journal of Science and Research (IJSR) 13, no. 1 (2024): 680–86. http://dx.doi.org/10.21275/sr24109210845.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Jianwei, Quan Du, and Ling-Ju Hung. "A Fusion Algorithm Based on Deep Learning for Panoramic Image." Journal of Computers (電腦學刊) 35, no. 6 (2024): 97–107. https://doi.org/10.53106/199115992024123506008.

Full text
Abstract:
Traditional image fusion algorithms often struggle with slow processing speeds and suboptimal results, particularly when handling non-planar images. In this paper, we present a novel deep learning-based approach for panoramic image fusion. We begin by detailing our dataset construction and preprocessing techniques. To enhance the model’s capability with non-planar images, we apply the Thin Plate Spline (TPS) deformation algorithm, allowing effective panoramic fusion across complex image structures. The model architecture is based on a convolutional neural network (CNN) frame
APA, Harvard, Vancouver, ISO, and other styles
4

Sun, Changqi, Cong Zhang, and Naixue Xiong. "Infrared and Visible Image Fusion Techniques Based on Deep Learning: A Review." Electronics 9, no. 12 (2020): 2162. http://dx.doi.org/10.3390/electronics9122162.

Full text
Abstract:
Infrared and visible image fusion technologies make full use of different image features obtained by different sensors, retain complementary information of the source images during the fusion process, and use redundant information to improve the credibility of the fusion image. In recent years, many researchers have used deep learning methods (DL) to explore the field of image fusion and found that applying DL has improved the time-consuming efficiency of the model and the fusion effect. However, DL includes many branches, and there is currently no detailed investigation of deep learning metho
APA, Harvard, Vancouver, ISO, and other styles
5

Zhong, Hongye, and Jitian Xiao. "Enhancing Health Risk Prediction with Deep Learning on Big Data and Revised Fusion Node Paradigm." Scientific Programming 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/1901876.

Full text
Abstract:
With recent advances in health systems, the amount of health data is expanding rapidly in various formats. This data originates from many new sources including digital records, mobile devices, and wearable health devices. Big health data offers more opportunities for health data analysis and enhancement of health services via innovative approaches. The objective of this research is to develop a framework to enhance health prediction with the revised fusion node and deep learning paradigms. Fusion node is an information fusion model for constructing prediction systems. Deep learning involves th
APA, Harvard, Vancouver, ISO, and other styles
6

Janani, T., and A. Ramanan. "Feature Fusion for Efficient Object Classification Using Deep and Shallow Learning." International Journal of Machine Learning and Computing 7, no. 5 (2017): 123–27. http://dx.doi.org/10.18178/ijmlc.2017.7.5.633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tu, Wenxuan, Sihang Zhou, Xinwang Liu, et al. "Deep Fusion Clustering Network." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (2021): 9978–87. http://dx.doi.org/10.1609/aaai.v35i11.17198.

Full text
Abstract:
Deep clustering is a fundamental yet challenging task for data analysis. Recently we witness a strong tendency of combining autoencoder and graph neural networks to exploit structure information for clustering performance enhancement. However, we observe that existing literature 1) lacks a dynamic fusion mechanism to selectively integrate and refine the information of graph structure and node attributes for consensus representation learning; 2) fails to extract information from both sides for robust target distribution (i.e., “groundtruth” soft labels) generation. To tackle the above issues, w
APA, Harvard, Vancouver, ISO, and other styles
8

Vielzeuf, Valentin, Alexis Lechervy, Stephane Pateux, and Frederic Jurie. "Multilevel Sensor Fusion With Deep Learning." IEEE Sensors Letters 3, no. 1 (2019): 1–4. http://dx.doi.org/10.1109/lsens.2018.2878908.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shi, Haobin, Meng Xu, Kao-Shing Hwang, and Bo-Yin Cai. "Behavior fusion for deep reinforcement learning." ISA Transactions 98 (March 2020): 434–44. http://dx.doi.org/10.1016/j.isatra.2019.08.054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gao, Jing, Peng Li, Zhikui Chen, and Jianing Zhang. "A Survey on Deep Learning for Multimodal Data Fusion." Neural Computation 32, no. 5 (2020): 829–64. http://dx.doi.org/10.1162/neco_a_01273.

Full text
Abstract:
With the wide deployments of heterogeneous networks, huge amounts of data with characteristics of high volume, high variety, high velocity, and high veracity are generated. These data, referred to multimodal big data, contain abundant intermodality and cross-modality information and pose vast challenges on traditional data fusion methods. In this review, we present some pioneering deep learning models to fuse these multimodal big data. With the increasing exploration of the multimodal big data, there are still some challenges to be addressed. Thus, this review presents a survey on deep learnin
APA, Harvard, Vancouver, ISO, and other styles
11

Vasudha, G. S., and Kumari B. M. Kusuma. "A Survey on Deep Learning techniques in Image fusion." International Journal of Human Computations and Intelligence 2, no. 6 (2023): 280–85. https://doi.org/10.5281/zenodo.10444476.

Full text
Abstract:
In the ever-evolving field of image fusion, the integration of deep learning techniques has led to remarkable advancements in the quality and applicability of fused images. This review work provides a comprehensive overview of state of art deep learning based image fusion techniques. We delve into the fundamental concepts, methodologies and challenges that have emerged in this domain. This work covers various aspects of deep learning-based image fusion, including multi-modal, multi-scale fusion, and cross modality fusion. This work offers insights into the practical applications of deep learni
APA, Harvard, Vancouver, ISO, and other styles
12

Hassan, Ehtesham, Yasser Khalil, and Imtiaz Ahmad. "Learning Feature Fusion in Deep Learning-Based Object Detector." Journal of Engineering 2020 (May 22, 2020): 1–11. http://dx.doi.org/10.1155/2020/7286187.

Full text
Abstract:
Object detection in real images is a challenging problem in computer vision. Despite several advancements in detection and recognition techniques, robust and accurate localization of interesting objects in images from real-life scenarios remains unsolved because of the difficulties posed by intraclass and interclass variations, occlusion, lightning, and scale changes at different levels. In this work, we present an object detection framework by learning-based fusion of handcrafted features with deep features. Deep features characterize different regions of interest in a testing image with a ri
APA, Harvard, Vancouver, ISO, and other styles
13

Wang, Biao, Chengxi Wu, Yunan Zhu, Mingliang Zhang, Hanqiong Li, and Wei Zhang. "Ship Radiated Noise Recognition Technology Based on ML-DS Decision Fusion." Computational Intelligence and Neuroscience 2021 (October 7, 2021): 1–14. http://dx.doi.org/10.1155/2021/8901565.

Full text
Abstract:
Ship radiated noise is an important information source of underwater acoustic targets, and it is of great significance to the identification and classification of ship targets. However, there are a lot of interference noises in the water, which leads to the reduction of the model recognition rate. Therefore, the recognition results of radiated noise targets are severely affected. This paper proposes a machine learning Dempster–Shafer (ML-DS) decision fusion method. The algorithm combines the recognition results of machine learning and deep learning. It uses evidence-based decision-making theor
APA, Harvard, Vancouver, ISO, and other styles
14

Jyoti, Jain, Vashist Shrey, and Manjhi Diwash. "A Comprehensive Review of Medical Image Fusion Algorithms." Research and Applications: Emerging Technologies 7, no. 1 (2025): 28–45. https://doi.org/10.5281/zenodo.14642943.

Full text
Abstract:
The challenge of manual design can be overcome by deep learning models, which can automatically extract the most useful elements from data. Introducing a deep learning model to the picture fusion field is the aim of this paper. Using supervised deep learning, it aims to create a novel concept for picture fusion. Pattern recognition and image processing are two fields where deep learning technology has been thoroughly investigated. The characteristics of multi-modal medical images, medical diagnostic technology, and practical implementation will be taken into consideration when proposing a
APA, Harvard, Vancouver, ISO, and other styles
15

Sreelekshmi, A. N., and Rojesh Anna. "Multimodal Imaging and Deep Learning Fusion for Enhanced COVID-19 Lung Infection Detection." Journal of Advance Research in Mobile Computing 7, no. 2 (2025): 32–42. https://doi.org/10.5281/zenodo.15403175.

Full text
Abstract:
The World Health Organization (WHO) classified COVID-19 as a global pandemic due to its rapid and widespread transmission, primarily affecting the respiratory system. As a highly infectious virus, COVID-19 spread worldwide without an immediate cure, making detection and diagnosis a significant challenge. Although laboratory tests have been the primary method for identifying COVID-19, they are often prone to inaccuracies and delays, leading researchers to explore alternative techniques. Medical imaging, particularly through Computed Tomography (CT) and radiological scans, has proven to be a
APA, Harvard, Vancouver, ISO, and other styles
16

Lovino, Marta, Maria Serena Ciaburri, Gianvito Urgese, Santa Di Cataldo, and Elisa Ficarra. "DEEPrior: a deep learning tool for the prioritization of gene fusions." Bioinformatics 36, no. 10 (2020): 3248–50. http://dx.doi.org/10.1093/bioinformatics/btaa069.

Full text
Abstract:
Summary: In the last decade, increasing attention has been paid to the study of gene fusions. However, the problem of determining whether a gene fusion is a cancer driver or just a passenger mutation is still an open issue. Here we present DEEPrior, an inherently flexible deep learning tool with two modes (Inference and Retraining). Inference mode predicts the probability of a gene fusion being involved in an oncogenic process, by directly exploiting the amino acid sequence of the fused protein. Retraining mode allows to obtain a custom prediction model including new data provided by t
APA, Harvard, Vancouver, ISO, and other styles
17

Abtahi, Mansour, David Le, Jennifer I. Lim, and Xincheng Yao. "MF-AV-Net: an open-source deep learning network with multimodal fusion options for artery-vein segmentation in OCT angiography." Biomedical Optics Express 13, no. 9 (2022): 4870. http://dx.doi.org/10.1364/boe.468483.

Full text
Abstract:
This study is to demonstrate the effect of multimodal fusion on the performance of deep learning artery-vein (AV) segmentation in optical coherence tomography (OCT) and OCT angiography (OCTA); and to explore OCT/OCTA characteristics used in the deep learning AV segmentation. We quantitatively evaluated multimodal architectures with early and late OCT-OCTA fusions, compared to the unimodal architectures with OCT-only and OCTA-only inputs. The OCTA-only architecture, early OCT-OCTA fusion architecture, and late OCT-OCTA fusion architecture yielded competitive performances. For the 6 mm×6 mm and
APA, Harvard, Vancouver, ISO, and other styles
18

Li, Yi, Junli Zhao, Zhihan Lv, and Jinhua Li. "Medical image fusion method by deep learning." International Journal of Cognitive Computing in Engineering 2 (June 2021): 21–29. http://dx.doi.org/10.1016/j.ijcce.2020.12.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Ziraksima, Mahsa, Shahriar Lotfi, and Jafar Razmara. "Deep reinforcement learning in loop fusion problem." Neurocomputing 481 (April 2022): 102–20. http://dx.doi.org/10.1016/j.neucom.2022.01.032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Tanuja, Nukapeyyi. "Medical Image Fusion Using Deep Learning Mechanism." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (2022): 128–36. http://dx.doi.org/10.22214/ijraset.2022.39809.

Full text
Abstract:
A sparse representation (SR) model named convolutional sparsity based morphological component analysis (CS-MCA) is introduced for pixel-level medical image fusion. The CS-MCA model can achieve multicomponent and global SRs of source images by integrating MCA and convolutional sparse representation (CSR) into a unified optimization framework. In the existing method, the CSRs of its gradient and texture components are obtained by the CS-MCA model using pre-learned dictionaries. Then, for each image component, sparse coefficients of all the source images are merged and the fused component is reconst
APA, Harvard, Vancouver, ISO, and other styles
21

Tang, Linfeng, Hao Zhang, Han Xu, and Jiayi Ma. "Deep learning-based image fusion: a survey." Journal of Image and Graphics 28, no. 1 (2023): 3–36. http://dx.doi.org/10.11834/jig.220422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Xu, Cai, Wei Zhao, Jinglong Zhao, et al. "Progressive Deep Multi-View Comprehensive Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10557–65. http://dx.doi.org/10.1609/aaai.v37i9.26254.

Full text
Abstract:
Multi-view Comprehensive Representation Learning (MCRL) aims to synthesize information from multiple views to learn comprehensive representations of data items. Prevalent deep MCRL methods typically concatenate synergistic view-specific representations or average aligned view-specific representations in the fusion stage. However, the performance of synergistic fusion methods inevitably degenerate or even fail when partial views are missing in real-world applications; the aligned based fusion methods usually cannot fully exploit the complementarity of multi-view data. To eliminate all these dra
APA, Harvard, Vancouver, ISO, and other styles
23

Wei, Mingyang, Mengbo Xi, Yabei Li, Minjun Liang, and Ge Wang. "Multimodal Medical Image Fusion: The Perspective of Deep Learning." Academic Journal of Science and Technology 5, no. 3 (2023): 202–8. http://dx.doi.org/10.54097/ajst.v5i3.8013.

Full text
Abstract:
Multimodal medical image fusion involves the integration of medical images originating from distinct modalities and captured by various sensors, with the aim to enhance image quality, minimize redundant information, and preserve specific features, ultimately leading to increased efficiency and accuracy in clinical diagnoses. In recent years, the emergence of deep learning techniques has propelled significant advancements in image fusion, addressing the limitations of conventional methods that necessitate manual design of activity level measurement and fusion rules. This paper initially present
APA, Harvard, Vancouver, ISO, and other styles
24

Lovino, Marta, Gianvito Urgese, Enrico Macii, Santa Di Cataldo, and Elisa Ficarra. "A Deep Learning Approach to the Screening of Oncogenic Gene Fusions in Humans." International Journal of Molecular Sciences 20, no. 7 (2019): 1645. http://dx.doi.org/10.3390/ijms20071645.

Full text
Abstract:
Gene fusions have a very important role in the study of cancer development. In this regard, predicting the probability of protein fusion transcripts of developing into a cancer is a very challenging and yet not fully explored research problem. To this date, all the available approaches in literature try to explain the oncogenic potential of gene fusions based on protein domain analysis, that is cancer-specific and not easy to adapt to newly developed information. In our work, we choose the raw protein sequences as the input baseline, and propose the use of deep learning, and more specifically
APA, Harvard, Vancouver, ISO, and other styles
25

Chen, Shichao, and Shiyu Zou. "Enhancing Bearing Fault Diagnosis With Deep Learning Model Fusion and Semantic Web Technologies." International Journal on Semantic Web and Information Systems 20, no. 1 (2024): 1–20. http://dx.doi.org/10.4018/ijswis.356392.

Full text
Abstract:
Given the limited accuracy of a singular deep learning model in bearing fault diagnosis, this study seeks to investigate and validate the efficacy of a deep learning model fusion strategy. It also aims to enhance the performance of deep learning models in bearing fault diagnosis using semantic web technology. Utilizing a publicly available bearing dataset, we employ semantic web to represent data in a structured format that is easily interpretable by machines. We then train separate convolutional neural network (CNN) and long short-term memory network (LSTM) models, and implement three distinc
APA, Harvard, Vancouver, ISO, and other styles
26

Huang, Quan. "Multimodal Biometrics Fusion Algorithm Using Deep Reinforcement Learning." Mathematical Problems in Engineering 2022 (March 24, 2022): 1–9. http://dx.doi.org/10.1155/2022/8544591.

Full text
Abstract:
Multimodal biometrics fusion plays an important role in the field of biometrics. Therefore, this paper presents a multimodal biometrics fusion algorithm using deep reinforcement learning. In order to reduce the influence of user behavior, user’s personal characteristics, and environmental light on image data quality, data preprocessing is realized through data transformation and single-mode biometric image region segmentation. A two-dimensional Gobar filter was used to analyze the texture of local sub-blocks, qualitatively describe the similarity between the filter and the sub-blocks and extra
APA, Harvard, Vancouver, ISO, and other styles
27

Lee, Sanghyun, David K. Han, and Hanseok Ko. "Fusion-ConvBERT: Parallel Convolution and BERT Fusion for Speech Emotion Recognition." Sensors 20, no. 22 (2020): 6688. http://dx.doi.org/10.3390/s20226688.

Full text
Abstract:
Speech emotion recognition predicts the emotional state of a speaker based on the person’s speech. It brings an additional element for creating more natural human–computer interactions. Earlier studies on emotional recognition have been primarily based on handcrafted features and manual labels. With the advent of deep learning, there have been some efforts in applying the deep-network-based approach to the problem of emotion recognition. As deep learning automatically extracts salient features correlated to speaker emotion, it brings certain advantages over the handcrafted-feature-based method
APA, Harvard, Vancouver, ISO, and other styles
28

Yuan, Zilong, Fanyu Qu, Wenbin Xiong, et al. "Deep learning to design Z-FFR device models." Journal of Physics: Conference Series 2558, no. 1 (2023): 012019. http://dx.doi.org/10.1088/1742-6596/2558/1/012019.

Full text
Abstract:
The Z-Pinch fusion centre, encased by a fission envelope, serves as a stand-alone neutron source. It can expeditiously catalyze fission reactions in 238U and 232Th nuclear materials, which are hard to use in current commercial nuclear reactors. This is the essence of the Z-Pinch Driven Fusion-Fission Hybrid Reactor (Z-FFR). Then it can deliver enormous amounts of energy in a stable
APA, Harvard, Vancouver, ISO, and other styles
29

Niharika, A., and Prasanna Kumar S. C. "A Study on Deep Learning in Bio-Medical Image Processing." Journal of Advanced Research in Artificial Intelligence & It's Applications 1, no. 3 (2024): 65–77. https://doi.org/10.5281/zenodo.13309547.

Full text
Abstract:
The goal of image fusion is to create a single image that is more instructive and useful for later applications by first extracting and then merging the most significant information from several source photos. Image fusion has advanced significantly as a result of deep learning, and the fused results are promising due to neural networks' strong feature extraction and reconstruction capabilities. Recent advances in deep learning technologies have led to a boom in picture fusion. But there isn't a thorough examination and critique of the most recent deep learning techniques in various fusion
APA, Harvard, Vancouver, ISO, and other styles
30

Kim, Pora, Hua Tan, Jiajia Liu, et al. "FusionGDB 2.0: fusion gene annotation updates aided by deep learning." Nucleic Acids Research 50, no. D1 (2021): D1221–D1230. http://dx.doi.org/10.1093/nar/gkab1056.

Full text
Abstract:
A knowledgebase of the systematic functional annotation of fusion genes is critical for understanding genomic breakage context and developing therapeutic strategies. FusionGDB is a unique functional annotation database of human fusion genes and has been widely used for studies with diverse aims. In this study, we report fusion gene annotation updates aided by deep learning (FusionGDB 2.0) available at https://compbio.uth.edu/FusionGDB2/. FusionGDB 2.0 has substantial updates of contents such as up-to-date human fusion genes, fusion gene breakage tendency score with FusionAI deep learn
APA, Harvard, Vancouver, ISO, and other styles
31

Liu, Wei, Xiaodong Yue, Yufei Chen, and Thierry Denoeux. "Trusted Multi-View Deep Learning with Opinion Aggregation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (2022): 7585–93. http://dx.doi.org/10.1609/aaai.v36i7.20724.

Full text
Abstract:
Multi-view deep learning is performed based on the deep fusion of data from multiple sources, i.e. data with multiple views. However, due to the property differences and inconsistency of data sources, the deep learning results based on the fusion of multi-view data may be uncertain and unreliable. It is required to reduce the uncertainty in data fusion and implement the trusted multi-view deep learning. Aiming at the problem, we revisit the multi-view learning from the perspective of opinion aggregation and thereby devise a trusted multi-view deep learning method. Within this method, we adopt
APA, Harvard, Vancouver, ISO, and other styles
32

Yuan, Bin. "Deep learning technology for face recognition." Thermal Science 29, no. 3 Part A (2025): 2007–14. https://doi.org/10.2298/tsci2503007y.

Full text
Abstract:
In China, the rapid development of public transportation network construction has been accompanied by a high incidence of traffic accidents caused by sleep-deprived driving. The monitoring of drivers' sleep-deprived driving and the sending out of early warnings has been identified as a field of research with both important theoretical and practical value. This article proposes a fatigue detection algorithm based on facial recognition information fusion. The algorithm extracts facial feature information and head features from the driver's face and fuses them into facial recognition information
APA, Harvard, Vancouver, ISO, and other styles
33

Mohd Ali, Maimunah, Norhashila Hashim, Samsuzana Abd Aziz, and Ola Lasekan. "Utilisation of Deep Learning with Multimodal Data Fusion for Determination of Pineapple Quality Using Thermal Imaging." Agronomy 13, no. 2 (2023): 401. http://dx.doi.org/10.3390/agronomy13020401.

Full text
Abstract:
Fruit quality is an important aspect in determining the consumer preference in the supply chain. Thermal imaging was used to determine different pineapple varieties according to the physicochemical changes of the fruit by means of the deep learning method. Deep learning has gained attention in fruit classification and recognition in unimodal processing. This paper proposes a multimodal data fusion framework for the determination of pineapple quality using deep learning methods based on the feature extraction acquired from thermal imaging. Feature extraction was selected from the thermal images
APA, Harvard, Vancouver, ISO, and other styles
34

Ossama, and Mhmed Algrnaodi. "Deep Learning Fusion for Attack Detection in Internet of Things Communications." Fusion: Practice and Applications 9, no. 2 (2022): 27–47. http://dx.doi.org/10.54216/fpa.090203.

Full text
Abstract:
The increasing deep learning techniques used in multimedia and networkIoT solve many problems and increase performance. Securing the deep learning models, multimedia, and networkIoT has become a major area of research in the past few years which is considered to be a challenge during generative adversarial attacks over the multimedia or networkIoT. Many efforts and studies try to provide intelligent forensics techniques to solve security issues. This paper introduces a holistic organization of intelligent multimedia forensics that involve deep learning fusion, multimedia, and networkIoT forens
APA, Harvard, Vancouver, ISO, and other styles
35

Raheem, Fatima, and Manaf K. Hussein. "COVID-19 detection using machine learning and fusion-based deep learning models." Wasit Journal of Engineering Sciences 11, no. 2 (2023): 12–23. http://dx.doi.org/10.31185/ejuow.vol11.iss2.439.

Full text
Abstract:
The COVID-19 pandemic has been one of the most challenging crises attacking the world in the last three years. Many systems have been introduced in the field of COVID-19 detection. In this research, machine learning and deep learning models for the detection of COVID-19 with a probability of the presence of COVID-19 are proposed. In the machine learning scenario, the COVID-19 dataset is split into 70% training and 30% testing, and a segmentation process is applied to the CT images in order to get the lung ROI only. The features of CT images are then extracted using Gabor-Wavelet and deep
APA, Harvard, Vancouver, ISO, and other styles
36

Phade, Gayatri, and Priyanka Bhatambarekar. "Exploring the Terrain: An Investigation into Deep Learning-Based Fusion Strategies for Integrating Infrared and Visible Imagery." Journal of Electrical Systems 20, no. 2 (2024): 2316–27. http://dx.doi.org/10.52783/jes.1998.

Full text
Abstract:
Infrared and visible image fusion technologies influence distinct image features acquired from distinct sensors, preserving complementary information from input images throughout the process of fusion, and utilizing redundant data to enhance the quality of the resulting fused image. Recently, deep learning methods (DL) have been employed by numerous researchers to investigate image fusion, revealing that the application of DL significantly enhances the efficiency of the model and the quality of fusion outcomes. Nevertheless, it is very important to note that DL can be implemented in various br
APA, Harvard, Vancouver, ISO, and other styles
37

Nuhu, Abdulhafiz, Anis Farihan Mat Raffei, Mohd Faizal Ab Razak, and Abubakar Ahmad. "Distributed Denial of Service Attack Detection in IoT Networks using Deep Learning and Feature Fusion: A Review." Mesopotamian Journal of CyberSecurity 2024 (April 28, 2024): 47–70. http://dx.doi.org/10.58496/mjcs/2024/004.

Full text
Abstract:
The explosive growth of Internet of Things (IoT) devices has led to escalating threats from distributed denial of service (DDoS) attacks. Moreover, the scale and heterogeneity of IoT environments pose unique security challenges, and intelligent solutions tailored for the IoT are needed to defend critical infrastructure. The deep learning technique shows great promise because automatic feature learning capabilities are well suited for the complex and high-dimensional data of IoT systems. Additionally, feature fusion approaches have gained traction in enhancing the performance of deep learning m
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Chunlei, Longbiao Wang, and Jianwu Dang. "Deep Learning-Based Amplitude Fusion for Speech Dereverberation." Discrete Dynamics in Nature and Society 2020 (July 14, 2020): 1–14. http://dx.doi.org/10.1155/2020/4618317.

Full text
Abstract:
Mapping and masking are two important speech enhancement methods based on deep learning that aim to recover the original clean speech from corrupted speech. In practice, too large recovery errors severely restrict the improvement in speech quality. In our preliminary experiment, we demonstrated that mapping and masking methods had different conversion mechanisms and thus assumed that their recovery errors are highly likely to be complementary. Also, the complementarity was validated accordingly. Based on the principle of error minimization, we propose the fusion between mapping and masking for
APA, Harvard, Vancouver, ISO, and other styles
39

Ansah, Patrick, Sumit Kumar Tetarave, Ezhil Kalaimannan, and Caroline John. "Tomato Disease Fusion and Classification using Deep Learning." International Journal on Cybernetics & Informatics 12, no. 7 (2023): 31–43. http://dx.doi.org/10.5121/ijci.2023.120703.

Full text
Abstract:
Tomato plants' susceptibility to diseases imperils agricultural yields. About 30% of the total crop loss is attributable to plants with disease. Detecting such illnesses in the plant is crucial to avoid significant output losses. This study introduces "data fusion" to enhance disease classification by amalgamating distinct disease-specific traits from leaf halves. Data fusion generates synthetic samples, fortifying a TensorFlow Keras deep learning model using a diverse tomato leaf image dataset. Results illuminate the augmented model's efficacy, particularly for diseases marked by overlapping t
APA, Harvard, Vancouver, ISO, and other styles
40

Ravindran, Vasanthi, Kalaiselvi Thiruvenkadam, and Anitha Thiyagarajan. "MRI Brain Tumor Classification and Extraction using Deep Learning-Based Decision Level Image Fusion Technique." Indian Journal Of Science And Technology 17, no. 27 (2024): 2848–57. http://dx.doi.org/10.17485/ijst/v17i27.1138.

Full text
Abstract:
Objectives: The proposed work emphasizes the tumor region extracted from the multimodal MRI brain scan by deep learning-based decision-level image fusion technique. Methods: Convolutional Neural Network (CNN) architectures such as AlexNet, ResNet50, and VGG16 perform brain tumor classification with multimodal MRI images Flair, T2, and T1c respectively. Flair images are fed to the AlexNet architecture, T2 images are fed to the ResNet50 architecture, and T1c images are fed to the VGG16 architecture to classify brain tumor images. The classification results from these architectures are fused toge
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Tianfu, Shuncong Zhong, Chaoming Lian, Ning Zhou, and Maosong Xie. "Deep Learning Feature Fusion-Based Retina Image Classification." Laser & Optoelectronics Progress 57, no. 24 (2020): 241025. http://dx.doi.org/10.3788/lop57.241025.

Full text
42

Xu, Qingyong. "Feature Fusion Based Image Retrieval Using Deep Learning." Journal of Information and Computational Science 12, no. 6 (2015): 2361–73. http://dx.doi.org/10.12733/jics20105681.

Full text
43

TSUMURA, Kazuto, Futoshi KOBAYASHI, and Hiroyuki NAKAMOTO. "Vision and Auditory Fusion System Using Deep Learning." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2020 (2020): 1P2—L03. http://dx.doi.org/10.1299/jsmermd.2020.1p2-l03.

Full text
44

Priyasad, Darshana, Tharindu Fernando, Simon Denman, Sridha Sridharan, and Clinton Fookes. "Memory based fusion for multi-modal deep learning." Information Fusion 67 (March 2021): 136–46. http://dx.doi.org/10.1016/j.inffus.2020.10.005.

Full text
45

Khan, Mohibullah, Ata Ullah, Isra Naz, et al. "Alpha Fusion Adversarial Attack Analysis Using Deep Learning." Computer Systems Science and Engineering 46, no. 1 (2023): 461–73. http://dx.doi.org/10.32604/csse.2023.029642.

Full text
46

Wang, Aili, Haibin Wu, and Yuji Iwahori. "Deep Learning in Image Processing and Pattern Recognition." Electronics 14, no. 10 (2025): 1942. https://doi.org/10.3390/electronics14101942.

Full text
47

Lian, Hailun, Cheng Lu, Sunan Li, Yan Zhao, Chuangao Tang, and Yuan Zong. "A Survey of Deep Learning-Based Multimodal Emotion Recognition: Speech, Text, and Face." Entropy 25, no. 10 (2023): 1440. http://dx.doi.org/10.3390/e25101440.

Full text
Abstract:
Multimodal emotion recognition (MER) refers to the identification and understanding of human emotional states by combining different signals, including—but not limited to—text, speech, and face cues. MER plays a crucial role in the human–computer interaction (HCI) domain. With the recent progression of deep learning technologies and the increasing availability of multimodal datasets, the MER domain has witnessed considerable development, resulting in numerous significant research breakthroughs. However, a conspicuous absence of thorough and focused reviews on these deep learning-based MER achi
48

Piao, Jingchun, Yunfan Chen, and Hyunchul Shin. "A New Deep Learning Based Multi-Spectral Image Fusion Method." Entropy 21, no. 6 (2019): 570. http://dx.doi.org/10.3390/e21060570.

Full text
Abstract:
In this paper, we present a new, effective infrared (IR) and visible (VIS) image fusion method using a deep neural network. In our method, a Siamese convolutional neural network (CNN) is applied to automatically generate a weight map that represents the saliency of each pixel for a pair of source images. The CNN automatically encodes an image into a feature domain for classification. With the proposed method, the key problems in image fusion, namely activity-level measurement and fusion-rule design, can be solved in one shot. The fusion is carried out through …
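Once the Siamese CNN has produced a saliency weight map, the fusion itself reduces to a pixel-wise weighted combination of the two source images. The sketch below assumes the weight map is already given as an array; shapes and values are illustrative, not from the paper:

```python
import numpy as np

def fuse_with_weight_map(ir, vis, weights):
    """Pixel-wise weighted fusion of an IR and a visible image.

    weights[i, j] in [0, 1] is the saliency of the IR pixel; the
    visible image receives the complementary weight 1 - weights[i, j].
    """
    ir = ir.astype(float)
    vis = vis.astype(float)
    return weights * ir + (1.0 - weights) * vis

# Tiny 2x2 example with a hand-made weight map:
ir = np.array([[200.0, 50.0], [100.0, 0.0]])
vis = np.array([[0.0, 150.0], [100.0, 255.0]])
w = np.array([[1.0, 0.5], [0.25, 0.0]])
print(fuse_with_weight_map(ir, vis, w))  # fused 2x2 image
```

In the paper the weight map comes from the network's saliency prediction; here it is hand-made purely to show the arithmetic.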
49

Bahaa, Bahaa, Ayman S. Selmy, and Wael A. Mohamed. "A Comprehensive Survey on AlexNet improvements and fusion techniques." Fusion: Practice and Applications 17, no. 2 (2025): 123–46. http://dx.doi.org/10.54216/fpa.170210.

Full text
Abstract:
Machine- and deep-learning techniques have been used in numerous real-world applications. One of the famous deep-learning methodologies is the Deep Convolutional Neural Network. AlexNet is a well-known global deep convolutional neural network architecture. AlexNet significantly contributes to solving different classification problems in different applications based on deep learning. Therefore, it is necessary to continuously improve the model to enhance its performance. This survey study formally defined the AlexNet architecture, presented information on current improvement solutions, and revi
50

Yang, Junfang, Yi Ma, Yabin Hu, et al. "Decision Fusion of Deep Learning and Shallow Learning for Marine Oil Spill Detection." Remote Sensing 14, no. 3 (2022): 666. http://dx.doi.org/10.3390/rs14030666.

Full text
Abstract:
Marine oil spills are emergencies of great harm and have become a hot topic in marine environmental monitoring research. Optical remote sensing is an important means of monitoring marine oil spills. Clouds, weather, and illumination control the amount of available data, which often limits feature characterization with a single classifier and therefore makes accurate monitoring of marine oil spills difficult. In this paper, we develop a decision fusion algorithm that integrates deep learning methods and shallow learning methods based on multi-scale features to improve oil spill detection accuracy in the …
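A common way to implement such decision fusion, offered here only as a hedged sketch rather than the paper's exact algorithm, is weighted soft voting over the class probabilities produced by the deep and shallow classifiers. The class names and weights below are illustrative:

```python
def fuse_decisions(deep_probs, shallow_probs, w_deep=0.6, w_shallow=0.4):
    """Weighted soft-voting fusion of two classifiers' class probabilities.

    deep_probs / shallow_probs: dicts mapping class name -> probability.
    Returns the class with the highest fused score.
    """
    classes = set(deep_probs) | set(shallow_probs)
    fused = {c: w_deep * deep_probs.get(c, 0.0) + w_shallow * shallow_probs.get(c, 0.0)
             for c in classes}
    return max(fused, key=fused.get)

# Hypothetical per-pixel class probabilities from the two classifiers:
deep = {"oil": 0.55, "seawater": 0.45}
shallow = {"oil": 0.30, "seawater": 0.70}
print(fuse_decisions(deep, shallow))  # seawater
```

The fusion weights would normally be tuned on validation data (for example, in proportion to each classifier's accuracy) rather than fixed by hand.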