
Journal articles on the topic 'Salient features'


Consult the top 50 journal articles for your research on the topic 'Salient features.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Gao, Gan, Yuanyuan Wang, Feng Zhou, Shuaiting Chen, Xiaole Ge, and Rugang Wang. "BSEFNet: bidirectional self-attention edge fusion network salient object detection based on deep fusion of edge features." PeerJ Computer Science 10 (December 10, 2024): e2494. https://doi.org/10.7717/peerj-cs.2494.

Abstract:
Salient object detection aims to identify the most prominent objects within an image. With the advent of fully convolutional networks (FCNs), deep learning-based saliency detection models have increasingly leveraged FCNs for pixel-level saliency prediction. However, many existing algorithms face challenges in accurately delineating target boundaries, primarily due to insufficient utilization of edge information. To address this issue, we propose a novel approach to improve the boundary accuracy of salient target detection by integrating salient target and edge information. Our approach compris
2

Song, Sensen, Yue Li, Zhenhong Jia, and Fei Shi. "Salient Object Detection Based on Optimization of Feature Computation by Neutrosophic Set Theory." Sensors 23, no. 20 (2023): 8348. http://dx.doi.org/10.3390/s23208348.

Abstract:
In recent saliency detection research, too many or too few image features are used in the algorithm, and the processing of saliency map details is not satisfactory, resulting in significant degradation of the salient object detection result. To overcome the above deficiencies and achieve better object detection results, we propose a salient object detection method based on feature optimization by neutrosophic set (NS) theory in this paper. First, prior object knowledge is built using foreground and background models, which include pixel-wise and super-pixel cues. Simultaneously, the feature ma
3

Yang, Chengzhi. "An Image Multi-scale Feature Recognition Method Based on Image Saliency." International Journal of Circuits, Systems and Signal Processing 15 (April 8, 2021): 280–87. http://dx.doi.org/10.46300/9106.2021.15.32.

Abstract:
Image recognition refers to the technology that processes, analyzes and understands images with a computer so as to recognize various targets and objects of different patterns. Effectively combining image recognition with intelligent algorithms can enhance the efficiency of image feature analysis, improve detection accuracy and guarantee real-time detection. In image feature recognition, the following problems exist: the accurate description of object features, object occlusion, and complex and changeable scenes. Whether these problems can be effectively solved has great significance in improving
4

Li, Liming, Shuguang Zhao, Rui Sun, et al. "AFI-Net: Attention-Guided Feature Integration Network for RGBD Saliency Detection." Computational Intelligence and Neuroscience 2021 (March 30, 2021): 1–10. http://dx.doi.org/10.1155/2021/8861446.

Abstract:
This article proposes an innovative RGBD saliency model, that is, attention-guided feature integration network, which can extract and fuse features and perform saliency inference. Specifically, the model first extracts multimodal and level deep features. Then, a series of attention modules are deployed to the multilevel RGB and depth features, yielding enhanced deep features. Next, the enhanced multimodal deep features are hierarchically fused. Lastly, the RGB and depth boundary features, that is, low-level spatial details, are added to the integrated feature to perform saliency inference. The
5

Ullah, Inam, Muwei Jian, Kashif Shaheed, et al. "AWANet: Attentive-Aware Wide-Kernels Asymmetrical Network with Blended Contour Information for Salient Object Detection." Sensors 22, no. 24 (2022): 9667. http://dx.doi.org/10.3390/s22249667.

Abstract:
Although deep learning-based techniques for salient object detection have considerably improved over recent years, estimated saliency maps still exhibit imprecise predictions owing to the internal complexity and indefinite boundaries of salient objects of varying sizes. Existing methods emphasize the design of an exemplary structure to integrate multi-level features by employing multi-scale features and attention modules to filter salient regions from cluttered scenarios. We propose a saliency detection network based on three novel contributions. First, we use a dense feature extraction unit (
6

Zhuge, Yunzhi, Yu Zeng, and Huchuan Lu. "Deep Embedding Features for Salient Object Detection." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9340–47. http://dx.doi.org/10.1609/aaai.v33i01.33019340.

Abstract:
Benefiting from the rapid development of Convolutional Neural Networks (CNNs), some salient object detection methods have achieved remarkable results by utilizing multi-level convolutional features. However, the saliency training datasets are of limited scale due to the high cost of pixel-level labeling, which leads to a limited generalization of the trained model on new scenarios during testing. Besides, some FCN-based methods directly integrate multi-level features, ignoring the fact that the noise in some features is harmful to saliency detection. In this paper, we propose a novel approach
7

Zhou, Junxiu, Yangyang Tao, and Xian Liu. "Tensor Decomposition for Salient Object Detection in Images." Big Data and Cognitive Computing 3, no. 2 (2019): 33. http://dx.doi.org/10.3390/bdcc3020033.

Abstract:
The fundamental challenge of salient object detection is to find the decision boundary that separates the salient object from the background. Low-rank recovery models address this challenge by decomposing an image or image feature-based matrix into a low-rank matrix representing the image background and a sparse matrix representing salient objects. This method is simple and efficient in finding salient objects. However, it needs to convert high-dimensional feature space into a two-dimensional matrix. Therefore, it does not take full advantage of image features in discovering the salient object
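
The low-rank recovery model summarized above is, in its matrix form, the classic robust PCA decomposition: a feature matrix F is split into a low-rank part L (the background) and a sparse part S (the salient objects). The Python sketch below illustrates only that generic matrix formulation, not the tensor method of the cited paper; the matrix sizes, the weight lam, and the helper names are assumptions made for illustration.

import numpy as np

def soft_threshold(x, tau):
    # Element-wise shrinkage operator, used for both sparse entries and singular values.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def robust_pca(F, lam=None, n_iter=200, tol=1e-7):
    # Split F into a low-rank part L (background) and a sparse part S (salient objects)
    # by minimizing ||L||_* + lam * ||S||_1 subject to F = L + S, using a basic
    # augmented-Lagrangian (ADMM-style) iteration.
    m, n = F.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / (np.linalg.norm(F, 2) + 1e-12)       # step-size heuristic
    Y = np.zeros_like(F)                              # Lagrange multipliers
    S = np.zeros_like(F)
    norm_F = np.linalg.norm(F, 'fro') + 1e-12
    for _ in range(n_iter):
        # Low-rank update: singular-value thresholding of (F - S + Y/mu).
        U, sig, Vt = np.linalg.svd(F - S + Y / mu, full_matrices=False)
        L = (U * soft_threshold(sig, 1.0 / mu)) @ Vt
        # Sparse update: soft thresholding of the remaining residual.
        S = soft_threshold(F - L + Y / mu, lam / mu)
        # Dual update and convergence check.
        R = F - L - S
        Y = Y + mu * R
        if np.linalg.norm(R, 'fro') / norm_F < tol:
            break
    return L, S

# Toy usage: rows could be superpixels, columns feature dimensions (made-up sizes).
F = np.random.rand(120, 30)
L, S = robust_pca(F)
saliency_score = np.abs(S).sum(axis=1)   # larger values -> row more likely salient
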
8

Shao, Z. F., W. X. Zhou, and Q. M. Cheng. "Remote Sensing Image Retrieval with Combined Features Of Salient Region." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-6 (April 23, 2014): 83–88. http://dx.doi.org/10.5194/isprsarchives-xl-6-83-2014.

Abstract:
Low-level features tend to achieve unsatisfactory retrieval results in the remote sensing image retrieval community because of the semantic gap. In order to improve retrieval precision, a visual attention model is used to extract salient objects from the image according to their saliency. Then, color and texture features are extracted from the salient objects and used as feature vectors for image retrieval. Experimental results demonstrate that our method improves retrieval results and obtains higher precision.
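
The retrieval pipeline summarized above (describe the salient region with color and texture features, then compare feature vectors) can be illustrated with a minimal sketch that uses a joint color histogram plus a gradient-magnitude texture histogram; the salient-region mask, histogram sizes, and distance measure are assumptions for illustration rather than the paper's exact features.

import numpy as np

def region_descriptor(image, mask, color_bins=8, tex_bins=16):
    # Describe a salient region by a joint RGB color histogram and a
    # gradient-magnitude (texture) histogram, concatenated and L2-normalized.
    # image: (H, W, 3) float RGB in [0, 1]; mask: (H, W) boolean salient region.
    pix = image[mask]                                         # N x 3 salient pixels
    color_hist, _ = np.histogramdd(pix, bins=(color_bins,) * 3, range=[(0, 1)] * 3)
    gray = image.mean(axis=-1)
    gy, gx = np.gradient(gray)
    grad = np.sqrt(gx ** 2 + gy ** 2)[mask]
    tex_hist, _ = np.histogram(grad, bins=tex_bins, range=(0, grad.max() + 1e-12))
    feat = np.concatenate([color_hist.ravel(), tex_hist]).astype(float)
    return feat / (np.linalg.norm(feat) + 1e-12)

def retrieve(query_feat, database_feats):
    # Rank database images by Euclidean distance to the query descriptor.
    dists = np.linalg.norm(database_feats - query_feat, axis=1)
    return np.argsort(dists)

# Toy usage with random images and a fixed central "salient" mask (hypothetical data).
imgs = [np.random.rand(32, 32, 3) for _ in range(5)]
mask = np.zeros((32, 32), bool)
mask[8:24, 8:24] = True
db = np.stack([region_descriptor(im, mask) for im in imgs])
ranking = retrieve(db[0], db)
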
9

Shevko, Nailya R. "Salient features of cybercrime." Государственная служба и кадры, no. 4 (2022): 254–56. http://dx.doi.org/10.56539/23120444_2022_4_254.

10

Kuo, K. H. "Salient features of quasicrystals." Materials Chemistry and Physics 39, no. 1 (1994): 1–11. http://dx.doi.org/10.1016/0254-0584(94)90124-4.

11

Wang, Xingzheng, Songwei Chen, Jiehao Liu, and Guoyao Wei. "High Edge-Quality Light-Field Salient Object Detection Using Convolutional Neural Network." Electronics 11, no. 7 (2022): 1054. http://dx.doi.org/10.3390/electronics11071054.

Abstract:
The detection result of current light-field salient object detection methods suffers from loss of edge details, which significantly limits the performance of subsequent computer vision tasks. To solve this problem, we propose a novel convolutional neural network to accurately detect salient objects, by digging effective edge information from light-field data. In particular, our method is divided into four steps. Firstly, the network extracts multi-level saliency features from light-field data. Secondly, edge features are extracted from low-level saliency features and optimized by ground-truth
12

Peng, Hai, Hua Jun Feng, Ju Feng Zhao, Zhi Hai Xu, Qi Li, and Yueting Chen. "Multi-Frame Image Fusion Method Combining Spatial-Temporal Saliency Detection and NSCT." Advanced Materials Research 403-408 (November 2011): 1927–32. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.1927.

Abstract:
We propose a new image fusion method to fuse the frames of infrared and visual image sequences more effectively. In our method, we introduce an improved salient feature detection algorithm to achieve the saliency map of the original frames. This improved method can detect not only spatially but also temporally salient features using dynamic information of inter-frames. Images are then segmented into target regions and background regions based on saliency distribution. We formulate fusion rules for different regions using a double threshold method and finally fuse the image frames in NSCT multi
13

Wang, Hai, Lei Dai, Yingfeng Cai, Long Chen, and Yong Zhang. "Saliency Detection by Multilevel Deep Pyramid Model." Journal of Sensors 2018 (August 14, 2018): 1–11. http://dx.doi.org/10.1155/2018/8249180.

Abstract:
Traditional salient object detection models are divided into several classes based on low-level features and contrast between pixels. In this paper, we propose a model based on a multilevel deep pyramid (MLDP), which involves fusing multiple features on different levels. Firstly, the MLDP uses the original image as the input for a VGG16 model to extract high-level features and form an initial saliency map. Next, the MLDP further extracts high-level features to form a saliency map based on a deep pyramid. Then, the MLDP obtains the salient map fused with superpixels by extracting low-level feat
14

Deng, Biao, Di Liu, Yang Cao, Hong Liu, Zhiguo Yan, and Hu Chen. "CFRNet: Cross-Attention-Based Fusion and Refinement Network for Enhanced RGB-T Salient Object Detection." Sensors 24, no. 22 (2024): 7146. http://dx.doi.org/10.3390/s24227146.

Abstract:
Existing deep learning-based RGB-T salient object detection methods often struggle with effectively fusing RGB and thermal features. Therefore, obtaining high-quality features and fully integrating these two modalities are central research focuses. We developed an illumination prior-based coefficient predictor (MICP) to determine optimal interaction weights. We then designed a saliency-guided encoder (SG Encoder) to extract multi-scale thermal features incorporating saliency information. The SG Encoder guides the extraction of thermal features by leveraging their correlation with RGB features,
15

Mu, Nan, Hongyu Wang, Yu Zhang, Hongyu Han, and Jun Yang. "Saliency Detection in Weak Light Images via Optimal Feature Selection-Guided Seed Propagation." Scientific Programming 2021 (September 11, 2021): 1–17. http://dx.doi.org/10.1155/2021/9921831.

Abstract:
Salient object detection has a wide range of applications in computer vision tasks. Although tremendous progress has been made in recent decades, the weak light image still poses formidable challenges to current saliency models due to its low illumination and low signal-to-noise ratio properties. Traditional hand-crafted features inevitably encounter great difficulties in handling images with weak light backgrounds, while most of the high-level features are unfavorable for highlighting visually salient objects in weak light images. To address these problems, an optimal feature selection-guided
16

Li, Xiaoli, Yunpeng Liu, and Huaici Zhao. "Saliency Detection Based on Low-Level and High-Level Features via Manifold-Space Ranking." Electronics 12, no. 2 (2023): 449. http://dx.doi.org/10.3390/electronics12020449.

Abstract:
Saliency detection as an active research direction in image understanding and analysis has been studied extensively. In this paper, to improve the accuracy of saliency detection, we propose an efficient unsupervised salient object detection method. The first step of our method is that we extract local low-level features of each superpixel after segmenting the image into different scale parts, which helps to locate the approximate locations of salient objects. Then, we use convolutional neural networks to extract high-level, semantically rich features as complementary features of each superpixe
17

Wang, Xing, Zhenfeng Shao, Xiran Zhou, and Jun Liu. "A novel remote sensing image retrieval method based on visual salient point features." Sensor Review 34, no. 4 (2014): 349–59. http://dx.doi.org/10.1108/sr-03-2013-640.

Abstract:
Purpose – This paper aims to present a novel feature design that is able to precisely describe salient objects in images. With the development of space survey, sensor and information acquisition technologies, more complex objects appear in high-resolution remote sensing images. Traditional visual features are no longer precise enough to describe the images. Design/methodology/approach – A novel remote sensing image retrieval method based on VSP (visual salient point) features is proposed in this paper. A key point detector and descriptor are used to extract the critical features and their desc
18

Lang, Congyan, Jiashi Feng, Songhe Feng, Jingdong Wang, and Shuicheng Yan. "Dual Low-Rank Pursuit: Learning Salient Features for Saliency Detection." IEEE Transactions on Neural Networks and Learning Systems 27, no. 6 (2016): 1190–200. http://dx.doi.org/10.1109/tnnls.2015.2513393.

19

Zhou, Wenjun, Tianfei Wang, Xiaoqin Wu, et al. "Salient Object Detection via Fusion of Multi-Visual Perception." Applied Sciences 14, no. 8 (2024): 3433. http://dx.doi.org/10.3390/app14083433.

Abstract:
Salient object detection aims to distinguish the most visually conspicuous regions, playing an important role in computer vision tasks. However, complex natural scenarios can challenge salient object detection, hindering accurate extraction of objects with rich morphological diversity. This paper proposes a novel method for salient object detection leveraging multi-visual perception, mirroring the human visual system’s rapid identification, and focusing on impressive objects/regions within complex scenes. First, a feature map is derived from the original image. Then, salient object detection r
20

Wang, Dingyi, and Haishun Du. "Multi-level Salient Feature Mining Network for Person Re-identification." Journal of Physics: Conference Series 2640, no. 1 (2023): 012001. http://dx.doi.org/10.1088/1742-6596/2640/1/012001.

Abstract:
Person re-identification (Re-ID) algorithms can retrieve the same pedestrian’s images from an image gallery captured by multiple cameras when given a pedestrian image. Due to changes in pedestrian postures, illuminations, and perspectives, it remains a significant challenge to improve the accuracy of person re-identification. Although the attention mechanism can alleviate some of these issues, it causes attention-based methods to pay excessive attention to features in the most salient areas of images while ignoring discriminant features outside the most salient areas, resulting in the
21

Wan, Quan Quan. "Motion Saliency Detection in Videos." Applied Mechanics and Materials 644-650 (September 2014): 4603–6. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4603.

Abstract:
In order to detect visually salient regions in video sequences, a motion saliency detection method is proposed. The motion vectors of each video frame are used to obtain two motion saliency features: one represents the uniqueness of the motion, and the other represents the distribution of the motion in the video scene. Then, Gaussian filtering is conducted to combine the two features into the motion saliency map, in which the salient regions or objects in the video sequences can be detected. The experimental results show that the proposed method could achieve excellent saliency detectio
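
As a rough illustration of the two motion cues described above, the sketch below computes a uniqueness feature (how much each block's motion magnitude departs from the frame average) and a distribution feature (how spatially compact similar motion is), then combines and smooths them with a Gaussian filter. It is a generic reconstruction from the abstract, not the authors' algorithm; the motion field mv, the binning scheme, and all parameter values are assumed.

import numpy as np
from scipy.ndimage import gaussian_filter

def motion_saliency(mv, n_bins=8, sigma=2.0):
    # mv: (H, W, 2) block/pixel motion vectors for one frame.
    # Combines a uniqueness cue (rare motion) with a distribution cue
    # (spatially compact motion) into a single motion saliency map.
    h, w, _ = mv.shape
    mag = np.linalg.norm(mv, axis=-1)

    # Uniqueness: how far each vector's magnitude is from the frame average.
    uniqueness = np.abs(mag - mag.mean())

    # Distribution: quantize magnitudes into bins and measure how spatially
    # spread out each bin is; compact motion receives a higher score.
    bins = np.minimum((mag / (mag.max() + 1e-12) * n_bins).astype(int), n_bins - 1)
    ys, xs = np.mgrid[0:h, 0:w]
    distribution = np.zeros((h, w))
    for b in range(n_bins):
        sel = bins == b
        if not sel.any():
            continue
        spread = ys[sel].var() + xs[sel].var()      # spatial variance of this bin
        distribution[sel] = np.exp(-spread / (h * w))

    # Gaussian filtering combines and smooths the two cues into the final map.
    sal = gaussian_filter(uniqueness * distribution, sigma)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# Toy usage on a random motion field (replace with real block motion vectors).
saliency = motion_saliency(np.random.randn(36, 44, 2))
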
22

Wen, Falin, Qinghui Wang, Ruirui Zou, et al. "A Salient Object Detection Method Based on Boundary Enhancement." Sensors 23, no. 16 (2023): 7077. http://dx.doi.org/10.3390/s23167077.

Abstract:
Visual saliency refers to the human’s ability to quickly focus on important parts of their visual field, which is a crucial aspect of image processing, particularly in fields like medical imaging and robotics. Understanding and simulating this mechanism is crucial for solving complex visual problems. In this paper, we propose a salient object detection method based on boundary enhancement, which is applicable to both 2D and 3D sensors data. To address the problem of large-scale variation of salient objects, our method introduces a multi-level feature aggregation module that enhances the expres
23

Chen, Yuzhen, and Wujie Zhou. "Hybrid-Attention Network for RGB-D Salient Object Detection." Applied Sciences 10, no. 17 (2020): 5806. http://dx.doi.org/10.3390/app10175806.

Abstract:
Depth information has been widely used to improve RGB-D salient object detection by extracting attention maps to determine the position information of objects in an image. However, non-salient objects may be close to the depth sensor and present high pixel intensities in the depth maps. This situation in depth maps inevitably leads to erroneously emphasize non-salient areas and may have a negative impact on the saliency results. To mitigate this problem, we propose a hybrid attention neural network that fuses middle- and high-level RGB features with depth features to generate a hybrid attentio
24

Aggrawal, Anil. "Salient Features Regarding Medicolegal Certificate." MAMC Journal of Medical Sciences 1, no. 1 (2015): 45. http://dx.doi.org/10.4103/2394-7438.150068.

25

Gautam, C. B. L. "Tachometers and Their Salient Features." IETE Journal of Education 27, no. 1 (1986): 3–9. http://dx.doi.org/10.1080/09747338.1986.11436091.

26

Yousaf, Nadeem. "Salient features of Jinnah's politics." International Journal of Public Leadership 11, no. 1 (2015): 46–64. http://dx.doi.org/10.1108/ijpl-07-2014-0007.

Abstract:
Purpose – Jinnah was, to some extent, a successful leader in obtaining his goals of becoming the only spokesperson for Muslims in India and gaining a piece of land for Pakistan but the main question is whether these achievements can be attributed to transactional or transformational strategies. Has he managed transactional or transformational change in terms of political culture? This point will be discussed in the paper. The paper aims to discuss these issues. Design/methodology/approach – A documentary analysis of behaviors, statements and incidents of Jinnah and other relevant personages. F
27

Zhang, Ming, Vasile Palade, Yan Wang, and Zhicheng Ji. "Word Representation With Salient Features." IEEE Access 7 (2019): 30157–73. http://dx.doi.org/10.1109/access.2019.2892817.

28

Fey, M. F. "Salient features of hematological diseases." Annals of Oncology 18 (February 2007): i54–i64. http://dx.doi.org/10.1093/annonc/mdl452.

29

Ma, Mingcan, Changqun Xia, and Jia Li. "Pyramidal Feature Shrinking for Salient Object Detection." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (2021): 2311–18. http://dx.doi.org/10.1609/aaai.v35i3.16331.

Abstract:
Recently, we have witnessed the great progress of salient object detection (SOD), which benefits from the effectiveness of various feature aggregation strategies. However, existing methods usually aggregate the low-level features containing details and the high-level features containing semantics over a large span, which introduces noise into the aggregated features and generate inaccurate saliency map. To address this issue, we propose pyramidal feature shrinking network (PFSNet), which aims to aggregate adjacent feature nodes in pairs with layer-by-layer shrinkage, so that the aggregated fea
30

Feng, Weijia, Xiaohui Li, Guangshuai Gao, Xingyue Chen, and Qingjie Liu. "Multi-Scale Global Contrast CNN for Salient Object Detection." Sensors 20, no. 9 (2020): 2656. http://dx.doi.org/10.3390/s20092656.

Abstract:
Salient object detection (SOD) is a fundamental task in computer vision, which attempts to mimic human visual systems that rapidly respond to visual stimuli and locate visually salient objects in various scenes. Perceptual studies have revealed that visual contrast is the most important factor in bottom-up visual attention process. Many of the proposed models predict saliency maps based on the computation of visual contrast between salient regions and backgrounds. In this paper, we design an end-to-end multi-scale global contrast convolutional neural network (CNN) that explicitly learns hierar
31

Yoganandan, G. "Textile Export Promotion in India-Salient Features." Bonfring International Journal of Industrial Engineering and Management Science 5, no. 1 (2015): 01–04. http://dx.doi.org/10.9756/bijiems.8023.

32

Yu, Longxuan, Xiaofei Zhou, Lingbo Wang, and Jiyong Zhang. "Boundary-Aware Salient Object Detection in Optical Remote-Sensing Images." Electronics 11, no. 24 (2022): 4200. http://dx.doi.org/10.3390/electronics11244200.

Abstract:
Different from traditional natural scene images, optical remote-sensing images (RSIs) suffer from diverse imaging orientations, cluttered backgrounds, and various scene types. Therefore, salient object detection methods for optical RSIs require effective localization and segmentation to deal with complex scenarios, especially small targets, serious occlusion, and multiple targets. However, the existing models’ experimental results are incapable of distinguishing salient objects and backgrounds using clear boundaries. To tackle this problem, we introduce boundary information to perform s
33

Jia, Xing-Zhao, Chang-Lei DongYe, Yan-Jun Peng, Wen-Xiu Zhao, and Tian-De Liu. "MRBENet: A Multiresolution Boundary Enhancement Network for Salient Object Detection." Computational Intelligence and Neuroscience 2022 (October 10, 2022): 1–11. http://dx.doi.org/10.1155/2022/7780756.

Abstract:
Salient Object Detection (SOD) simulates the human visual perception in locating the most attractive objects in the images. Existing methods based on convolutional neural networks have proven to be highly effective for SOD. However, in some cases, these methods cannot satisfy the need of both accurately detecting intact objects and maintaining their boundary details. In this paper, we present a Multiresolution Boundary Enhancement Network (MRBENet) that exploits edge features to optimize the location and boundary fineness of salient objects. We incorporate a deeper convolutional layer into the
34

Chen, Zhong, Shengwu Xiong, Qingzhou Mao, Zhixiang Fang, and Xiaohan Yu. "An Improved Saliency Detection Approach for Flying Apsaras in the Dunhuang Grotto Murals, China." Advances in Multimedia 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/625915.

Abstract:
Saliency can be described as the ability of an item to be detected from its background in any particular scene, and saliency detection aims to estimate the probable location of the salient objects. Because the saliency map computed from local contrast features can extract and highlight edge parts, including the painting lines of Flying Apsaras, in this paper we propose an improved approach based on a frequency-tuned method for visual saliency detection of Flying Apsaras in the Dunhuang Grotto Murals, China. This improved saliency detection approach comprises three important steps: (1) image color and gray c
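
The frequency-tuned baseline that this entry builds on is simple enough to sketch: saliency is measured as the distance between each pixel's slightly blurred Lab value and the mean Lab color of the whole image. The sketch below (assuming scikit-image and SciPy are available) illustrates that baseline only, not the authors' improved approach; the blur width sigma and the function name are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import color

def frequency_tuned_saliency(rgb, sigma=3.0):
    # Per-pixel distance between the slightly blurred Lab image and the mean
    # Lab color of the whole image (the frequency-tuned idea in its simplest form).
    lab = color.rgb2lab(rgb)                          # H x W x 3 CIELAB image
    mean_lab = lab.reshape(-1, 3).mean(axis=0)        # global mean color
    blurred = np.stack(
        [gaussian_filter(lab[..., c], sigma) for c in range(3)], axis=-1
    )
    sal = np.linalg.norm(blurred - mean_lab, axis=-1)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# Toy usage: replace the random array with a real float RGB image in [0, 1].
saliency_map = frequency_tuned_saliency(np.random.rand(64, 64, 3))
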
35

TIAN, MINGHUI, SHOUHONG WAN, and LIHUA YUE. "A VISUAL ATTENTION MODEL FOR NATURAL SCENES BASED ON DYNAMIC FEATURE COMBINATION." International Journal of Software Engineering and Knowledge Engineering 20, no. 08 (2010): 1077–95. http://dx.doi.org/10.1142/s0218194010005043.

Abstract:
In recent years, many research works indicate that human's visual attention is very helpful in some research areas that are related to computer vision, such as object recognition, scene understanding and object-based image/video retrieval or annotation. This paper presents a visual attention model for natural scenes based on a dynamic feature combination strategy. The model can be divided into three parts, which are feature extraction, dynamic feature combination and salient objects detection. First, the saliency features of color, information entropy and salient boundary are extracted from an
36

Hosseinkhani, Jila, and Chris Joslin. "A Biologically Inspired Saliency Priority Extraction Using Bayesian Framework." International Journal of Multimedia Data Engineering and Management 10, no. 2 (2019): 1–20. http://dx.doi.org/10.4018/ijmdem.2019040101.

Abstract:
In this article, the authors used saliency detection for video streaming problem to be able to transmit regions of video frames in a ranked manner based on their importance. The authors designed an empirically-based study to investigate bottom-up features to achieve a ranking system stating the saliency priority. We introduced a gradual saliency detection model using a Bayesian framework for static scenes under conditions that we had no cognitive bias. To extract color saliency, we used a new feature contrast in Lab color space as well as a k-nearest neighbor search based on k-d tree search te
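
The color-contrast cue mentioned above, a feature contrast in Lab color space computed with a k-nearest-neighbor search over a k-d tree, can be sketched as follows; working per pixel rather than per superpixel, the value of k, and the library calls (SciPy's cKDTree, scikit-image's rgb2lab) are illustrative choices, not the authors' exact pipeline.

import numpy as np
from scipy.spatial import cKDTree
from skimage import color

def knn_color_contrast(rgb, k=32):
    # Per-pixel color saliency: mean Lab distance to the k pixels that are
    # nearest in color, so pixels whose color is rare in the image score high.
    lab = color.rgb2lab(rgb).reshape(-1, 3)
    tree = cKDTree(lab)
    # Distances to the k nearest color neighbors; the first neighbor is the pixel itself.
    dists, _ = tree.query(lab, k=k)
    contrast = dists[:, 1:].mean(axis=1).reshape(rgb.shape[:2])
    return (contrast - contrast.min()) / (contrast.max() - contrast.min() + 1e-12)

# Toy usage on a random image (replace with a real float RGB image in [0, 1]).
saliency = knn_color_contrast(np.random.rand(48, 48, 3))
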
37

Wahid, Maria, Asim Waris, Syed Omer Gilani, and Ramanathan Subramanian. "The Effect of Eye Movements in Response to Different Types of Scenes Using a Graph-Based Visual Saliency Algorithm." Applied Sciences 9, no. 24 (2019): 5378. http://dx.doi.org/10.3390/app9245378.

Abstract:
Saliency is the quality of an object that makes it stand out from neighbouring items and grabs viewer attention. In image processing, it refers to the pixel or group of pixels that stand out in an image or a video clip and capture the attention of the viewer. Our eye movements are usually guided by saliency while inspecting a scene. Rapid detection of emotive stimuli is an ability possessed by humans. Visual objects in a scene are also emotionally salient. As different images and clips can elicit different emotional responses in a viewer, such as happiness or sadness, there is a need to me
38

Dhara, Gayathri, and Ravi Kant Kumar. "Enhancing Salient Object Detection with Supervised Learning and Multi-prior Integration." Journal of Image and Graphics 12, no. 2 (2024): 186–98. http://dx.doi.org/10.18178/joig.12.2.186-198.

Abstract:
Salient Object Detection (SOD) can mimic the human vision system by using algorithms that simulate the way how the eye detects and processes visual information. It focuses mainly on the visually distinctive parts of an image, similar to how the human brain processes visual information. The approach proposed in this study is an ensemble approach that incorporates classification algorithm, foreground connectivity and prior calculations. It involves a series of preprocessing, feature generation, selection, training, and prediction using random forest to identify and extract salient objects in an
39

Feng, Xia, Zhiyi Hu, Caihua Liu, W. H. Ip, and Huiying Chen. "Text-Image Retrieval With Salient Features." Journal of Database Management 32, no. 4 (2021): 1–13. http://dx.doi.org/10.4018/jdm.2021100101.

Abstract:
In recent years, deep learning has achieved remarkable results in the text-image retrieval task. However, only global image features are considered, and the vital local information is ignored. This results in a failure to match the text well. Considering that object-level image features can help the matching between text and image, this article proposes a text-image retrieval method that fuses salient image feature representation. Fusion of salient features at the object level can improve the understanding of image semantics and thus improve the performance of text-image retrieval. The experim
40

Huo, Lina, Kaidi Guo, and Wei Wang. "An Adaptive Multi-Content Complementary Network for Salient Object Detection." Electronics 12, no. 22 (2023): 4600. http://dx.doi.org/10.3390/electronics12224600.

Abstract:
Deep learning methods for salient object detection (SOD) have been studied actively and promisingly. However, the existing methods mainly focus on the decoding process and ignore the differences in the contributions of different encoder blocks. To address this problem for SOD, we propose an adaptive multi-content complementary network (PASNet) for salient object detection which aims to exploit the valuable contextual information in the encoder fully. Unlike existing CNN-based methods, we adopt the pyramidal visual transformer (PVTv2) as the backbone network to learn global and local representa
41

Maldonado, Jaime, and Lino Antoni Giefer. "A Comparison of Bottom-Up Models for Spatial Saliency Predictions in Autonomous Driving." Sensors 21, no. 20 (2021): 6825. http://dx.doi.org/10.3390/s21206825.

Abstract:
Bottom-up saliency models identify the salient regions of an image based on features such as color, intensity and orientation. These models are typically used as predictors of human visual behavior and for computer vision tasks. In this paper, we conduct a systematic evaluation of the saliency maps computed with four selected bottom-up models on images of urban and highway traffic scenes. Saliency both over whole images and on object level is investigated and elaborated in terms of the energy and the entropy of the saliency maps. We identify significant differences with respect to the amount,
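
Energy and entropy, the two statistics used above to characterize saliency maps, can be computed directly from a normalized map. The sketch below uses common definitions (sum of squared values for energy, Shannon entropy of the map treated as a probability distribution); the cited paper's exact definitions may differ.

import numpy as np

def saliency_energy_entropy(sal_map, eps=1e-12):
    # Energy: sum of squared normalized saliency values.
    # Entropy: Shannon entropy of the map treated as a probability distribution.
    s = np.asarray(sal_map, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + eps)     # normalize to [0, 1]
    energy = float(np.sum(s ** 2))
    p = s / (s.sum() + eps)
    entropy = float(-np.sum(p * np.log2(p + eps)))
    return energy, entropy

# Toy usage on a random map; real maps would come from any of the saliency models above.
energy, entropy = saliency_energy_entropy(np.random.rand(48, 64))
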
42

Lee, Kyungjun, and Jechang Jeong. "Multi-Color Space Network for Salient Object Detection." Sensors 22, no. 9 (2022): 3588. http://dx.doi.org/10.3390/s22093588.

Abstract:
The salient object detection (SOD) technology predicts which object will attract the attention of an observer surveying a particular scene. Most state-of-the-art SOD methods are top-down mechanisms that apply fully convolutional networks (FCNs) of various structures to RGB images, extract features from them, and train a network. However, owing to the variety of factors that affect visual saliency, securing sufficient features from a single color space is difficult. Therefore, in this paper, we propose a multi-color space network (MCSNet) to detect salient objects using various saliency cues. F
43

Chen, Qixin, Tie Liu, Yuanyuan Shang, Zhuhong Shao, and Hui Ding. "Salient Object Detection: Integrate Salient Features in the Deep Learning Framework." IEEE Access 7 (2019): 152483–92. http://dx.doi.org/10.1109/access.2019.2948062.

44

Xu, Xin, Nan Mu, and Hong Zhang. "Inferring Visual Perceptual Object by Adaptive Fusion of Image Salient Features." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/973241.

Abstract:
Saliency computational model with active environment perception can be useful for many applications including image retrieval, object recognition, and image segmentation. Previous work on bottom-up saliency computation typically relies on hand-crafted low-level image features. However, the adaptation of saliency computational model towards different kinds of scenes remains a challenge. For a low-level image feature, it can contribute greatly on some images but may be detrimental for saliency computation on other images. In this work, a novel data driven approach is proposed to adaptively selec
45

Wang, Peicheng, Tingsong Li, and Pengfei Li. "Saliency-enhanced infrared and visible image fusion via sub-window variance filter and weighted least squares optimization." PLOS One 20, no. 7 (2025): e0323285. https://doi.org/10.1371/journal.pone.0323285.

Abstract:
This paper proposes a novel method for infrared and visible image fusion (IVIF) to address the limitations of existing techniques in enhancing salient features and improving visual clarity. The method employs a sub-window variance filter (SVF) based decomposition technique to separate salient features and texture details into distinct band layers. A saliency map measurement scheme based on weighted least squares optimization (WLSO) is then designed to compute weight maps, enhancing the visibility of important features. Finally, pixel-level summation is used for feature map reconstruction, prod
46

Chen, Zuyao, Qianqian Xu, Runmin Cong, and Qingming Huang. "Global Context-Aware Progressive Aggregation Network for Salient Object Detection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 10599–606. http://dx.doi.org/10.1609/aaai.v34i07.6633.

Abstract:
Deep convolutional neural networks have achieved competitive performance in salient object detection, in which how to learn effective and comprehensive features plays a critical role. Most of the previous works mainly adopted multiple-level feature integration yet ignored the gap between different features. Besides, there also exists a dilution process of high-level features as they passed on the top-down pathway. To remedy these issues, we propose a novel network named GCPANet to effectively integrate low-level appearance features, high-level semantic features, and global context features thr
47

Shao, Fan, Kai Wang, and Yanluo Liu. "Salient object detection algorithm based on diversity features and global guidance information." Innovation & Technology Advances 1, no. 1 (2023): 12–20. http://dx.doi.org/10.61187/ita.v1i1.14.

Abstract:
Aiming at the problems of traditional salient object detection methods such as fuzzy boundary and insufficient information integrity, a salient object detection network composed of feature diversity enhancement module, global information guidance module and feature fusion module is proposed. Firstly, asymmetric convolution, cavity convolution and common convolution are spliced to form a feature diversity enhancement module to extract different types of spatial features corresponding to each feature layer. Secondly, the global information guidance module transmits the information captured by th
48

Zhao, Hongwei, Jiaxin Wu, Danyang Zhang, and Pingping Liu. "Toward Improving Image Retrieval via Global Saliency Weighted Feature." ISPRS International Journal of Geo-Information 10, no. 4 (2021): 249. http://dx.doi.org/10.3390/ijgi10040249.

Abstract:
For full description of images’ semantic information, image retrieval tasks are increasingly using deep convolution features trained by neural networks. However, to form a compact feature representation, the obtained convolutional features must be further aggregated in image retrieval. The quality of aggregation affects retrieval performance. In order to obtain better image descriptors for image retrieval, we propose two modules in our method. The first module is named generalized regional maximum activation of convolutions (GR-MAC), which pays more attention to global information at multiple
49

Kong, Yuqiu, He Wang, Lingwei Kong, Yang Liu, Cuili Yao, and Baocai Yin. "Absolute and Relative Depth-Induced Network for RGB-D Salient Object Detection." Sensors 23, no. 7 (2023): 3611. http://dx.doi.org/10.3390/s23073611.

Abstract:
Detecting salient objects in complicated scenarios is a challenging problem. In addition to semantic features from the RGB image, spatial information from the depth image also provides sufficient cues about the object. Therefore, it is crucial to rationally integrate RGB and depth features for the RGB-D salient object detection task. Most existing RGB-D saliency detectors modulate RGB semantic features with absolute depth values. However, they ignore the appearance contrast and structure knowledge indicated by relative depth values between pixels. In this work, we propose a depth-induced network
50

Amolo, S. J. "Entrepreneurship complexity: Salient features of entrepreneurship." African Journal of Business Management 8, no. 19 (2014): 832–41. http://dx.doi.org/10.5897/ajbm2014.7442.
