
Journal articles on the topic 'Visual saliency'

Consult the top 50 journal articles for your research on the topic 'Visual saliency.'


1

Wahid, Maria, Asim Waris, Syed Omer Gilani, and Ramanathan Subramanian. "The Effect of Eye Movements in Response to Different Types of Scenes Using a Graph-Based Visual Saliency Algorithm." Applied Sciences 9, no. 24 (2019): 5378. http://dx.doi.org/10.3390/app9245378.

Abstract:
Saliency is the quality of an object that makes it stand out from neighbouring items and grabs the viewer's attention. In image processing, it refers to the pixel or group of pixels that stand out in an image or a video clip and capture the attention of the viewer. Our eye movements are usually guided by saliency while inspecting a scene. Rapid detection of emotive stimuli is an ability possessed by humans. Visual objects in a scene are also emotionally salient. As different images and clips can elicit different emotional responses in a viewer, such as happiness or sadness, there is a need to me
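The graph-based visual saliency (GBVS) algorithm named in this entry treats a feature map as a graph and reads saliency off the stationary distribution of a Markov chain whose edge weights grow with feature dissimilarity and shrink with spatial distance. Below is a minimal sketch of that activation step only, not the authors' experimental pipeline; the `sigma` falloff and the use of a single feature map are illustrative assumptions (GBVS normally runs on several small, downsampled feature channels).

```python
import numpy as np

def gbvs_activation(feature_map, sigma=0.15):
    """Sketch of the GBVS activation step: Markov-chain mass
    concentrates on nodes that differ from their spatial peers.
    Intended for small, downsampled feature maps (O(N^2) memory)."""
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
    val = feature_map.ravel().astype(float)

    # Edge weight: feature dissimilarity modulated by spatial proximity.
    diss = np.abs(val[:, None] - val[None, :])
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    W = diss * np.exp(-d2 / (2 * (sigma * max(h, w)) ** 2))

    # Row-normalize to a transition matrix, then power-iterate to the
    # stationary distribution, which serves as the saliency map.
    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)
    pi = np.full(len(val), 1.0 / len(val))
    for _ in range(100):
        pi = pi @ P
    return pi.reshape(h, w)
```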
2

Joshi, Indira. "Visual Saliency Detection using Deep Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem32719.

Abstract:
Salient object detection models mimic the behaviour of human beings and capture the most salient region/object from images or scenes. This field has numerous important applications in both computer vision and pattern recognition tasks. Despite the hundreds of models proposed in this field, it still leaves large room for exploration. This paper gives a detailed overview of recent progress in saliency detection models, covering both heuristic-based and deep learning-based approaches. We also discuss and review its related fields, such as eye fixation prediction, RGBD salien
3

Mevorach, Carmel, Glyn W. Humphreys, and Lilach Shalev. "Reflexive and Preparatory Selection and Suppression of Salient Information in the Right and Left Posterior Parietal Cortex." Journal of Cognitive Neuroscience 21, no. 6 (2009): 1204–14. http://dx.doi.org/10.1162/jocn.2009.21088.

Abstract:
Attentional cues can trigger activity in the parietal cortex in anticipation of visual displays, and this activity may, in turn, induce changes in other areas of the visual cortex, hence, implementing attentional selection. In a recent TMS study [Mevorach, C., Humphreys, G. W., & Shalev, L. Opposite biases in salience-based selection for the left and right posterior parietal cortex. Nature Neuroscience, 9, 740–742, 2006b], it was shown that the posterior parietal cortex (PPC) can utilize the relative saliency (a nonspatial property) of a target and a distractor to bias visual selection. Fu
4

Zeeshan, Muhammad, and Muhammad Majid. "High Efficiency Video Coding Compliant Perceptual Video Coding Using Entropy Based Visual Saliency Model." Entropy 21, no. 10 (2019): 964. http://dx.doi.org/10.3390/e21100964.

Abstract:
In past years, several visual saliency algorithms have been proposed to extract salient regions from multimedia content in view of practical applications. Entropy is one of the important measures for extracting salient regions, as these regions have high randomness and attract more visual attention. In the context of perceptual video coding (PVC), computational visual saliency models that utilize the characteristics of the human visual system to improve the compression ratio are of paramount importance. To date, only a few PVC schemes have been reported that use the visual saliency model. In this
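As a rough illustration of the entropy cue this entry builds on (not the paper's HEVC-integrated model), a saliency map can be formed from the Shannon entropy of each pixel's local neighbourhood: windows with high randomness score as salient. The window size and bin count below are arbitrary choices.

```python
import numpy as np

def local_entropy_saliency(gray, win=9, bins=16):
    """Hypothetical sketch: sliding-window Shannon entropy of a uint8
    grayscale image as a per-pixel saliency score."""
    h, w = gray.shape
    pad = win // 2
    padded = np.pad(gray, pad, mode="reflect")
    sal = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + win, x:x + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
            p = hist / hist.sum()
            p = p[p > 0]                      # ignore empty bins
            sal[y, x] = -(p * np.log2(p)).sum()
    return sal / (sal.max() + 1e-12)          # normalize to [0, 1]
```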
5

Zhang, Qi. "A Survey on Approaches for Saliency Detection with Visual Attention." MATEC Web of Conferences 232 (2018): 02007. http://dx.doi.org/10.1051/matecconf/201823202007.

Abstract:
Most existing approaches for detecting salient areas in natural scenes are based on the saliency contrast within the local context of the image. Nowadays, a few approaches not only consider the difference between the foreground objects and the surrounding background areas, but also consider the salient objects as candidates for the center of attention from the human perspective. This article provides a survey of saliency detection with visual attention, covering visual cues of foreground salient areas, visual attention based on saliency maps, and deep learning based saliency detection.
6

Feng, Weijia, Xiaohui Li, Guangshuai Gao, Xingyue Chen, and Qingjie Liu. "Multi-Scale Global Contrast CNN for Salient Object Detection." Sensors 20, no. 9 (2020): 2656. http://dx.doi.org/10.3390/s20092656.

Abstract:
Salient object detection (SOD) is a fundamental task in computer vision, which attempts to mimic human visual systems that rapidly respond to visual stimuli and locate visually salient objects in various scenes. Perceptual studies have revealed that visual contrast is the most important factor in bottom-up visual attention process. Many of the proposed models predict saliency maps based on the computation of visual contrast between salient regions and backgrounds. In this paper, we design an end-to-end multi-scale global contrast convolutional neural network (CNN) that explicitly learns hierar
7

Jiang, Richard, and Danny Crookes. "Visual Saliency Estimation through Manifold Learning." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2021): 2003–9. http://dx.doi.org/10.1609/aaai.v26i1.8382.

Abstract:
Saliency detection has been a desirable way for robotic vision to find the most noticeable objects in a scene. In this paper, a robust manifold-based saliency estimation method has been developed to help capture the most salient objects in front of robotic eyes, namely cameras. In the proposed approach, an image is considered as a manifold of visual signals (stimuli) spreading over a connected grid, and local visual stimuli are compared against the global image variation to model the visual saliency. With this model, manifold learning is then applied to minimize the local variation while keepi
8

Chen, Zhong, Shengwu Xiong, Qingzhou Mao, Zhixiang Fang, and Xiaohan Yu. "An Improved Saliency Detection Approach for Flying Apsaras in the Dunhuang Grotto Murals, China." Advances in Multimedia 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/625915.

Abstract:
Saliency can be described as the ability of an item to be detected from its background in any particular scene, and saliency detection aims to estimate the probable locations of the salient objects. Because a saliency map computed from local contrast features can extract and highlight edge parts, including the painting lines of Flying Apsaras, in this paper we propose an improved approach based on a frequency-tuned method for visual saliency detection of Flying Apsaras in the Dunhuang Grotto Murals, China. This improved saliency detection approach comprises three important steps: (1) image color and gray c
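The frequency-tuned method this entry extends scores each pixel by the distance between the image's mean color vector and a slightly blurred version of the image. A minimal sketch, operating in RGB for brevity (the canonical formulation works in the Lab color space):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_tuned_saliency(rgb):
    """Sketch of frequency-tuned saliency: per-pixel distance between
    the image's mean color and a Gaussian-blurred copy of the image."""
    img = rgb.astype(np.float64)
    blurred = np.stack([gaussian_filter(img[..., c], sigma=3)
                        for c in range(3)], axis=-1)
    mean_vec = img.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(blurred - mean_vec, axis=-1)
    return sal / (sal.max() + 1e-12)
```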
9

Jiao, Yuzhong, Mark Ping Chan Mok, Kayton Wai Keung Cheung, Man Chi Chan, Tak Wai Shen, and Yiu Kei Li. "Dynamic Zero-Parallax-Setting Techniques for Multi-View Autostereoscopic Display." Electronic Imaging 2020, no. 2 (2020): 98–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-098.

Abstract:
The objective of this paper is to develop a dynamic computation of the Zero-Parallax-Setting (ZPS) for multi-view autostereoscopic displays in order to effectively alleviate blurry 3D vision for images with large disparity. Saliency detection techniques yield a saliency map, a topographic representation of saliency that highlights visually dominant locations. Using a saliency map, we can predict what attracts the attention, or the region of interest, of viewers. Recently, deep learning techniques have been applied in saliency detection. Deep learning-based salient object detection methods
10

Li, Yijun, Keren Fu, Zhi Liu, and Jie Yang. "Efficient Saliency-Model-Guided Visual Co-Saliency Detection." IEEE Signal Processing Letters 22, no. 5 (2015): 588–92. http://dx.doi.org/10.1109/lsp.2014.2364896.

11

Wang, Chong, Zheng-Jun Zha, Dong Liu, and Hongtao Xie. "Robust Deep Co-Saliency Detection with Group Semantic." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8917–24. http://dx.doi.org/10.1609/aaai.v33i01.33018917.

Abstract:
High-level semantic knowledge, in addition to low-level visual cues, is crucial for co-saliency detection. This paper proposes a novel end-to-end deep learning approach for robust co-saliency detection by simultaneously learning a high-level group-wise semantic representation as well as deep visual features of a given image group. The inter-image interaction at the semantic level, as well as the complementarity between group semantics and visual features, is exploited to boost the inference of co-salient regions. Specifically, the proposed approach consists of a co-category learning branch a
12

Archana, M., and K. S. Angel Viji. "A Fusion of Local and Global Saliencies for Detecting Image Salient Region." INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY 6, no. 6 (2017): 572–81. https://doi.org/10.5281/zenodo.814426.

Abstract:
Visual saliency is an important quality that makes an object, person, or pixel stand out relative to its neighbors and thus captures human attention. Detecting and segmenting salient objects in natural images, also known as salient object detection, has attracted a lot of research in computer vision and has resulted in many applications, and many such models exist. Saliency detection has gained a lot of attention in image processing, and in the past few years many saliency detection methods have been proposed. This paper presents various saliency detection methods.
13

Zhang, Ke, Xinbo Zhao, and Rong Mo. "A Bioinspired Visual Saliency Model." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 37, no. 3 (2019): 503–8. http://dx.doi.org/10.1051/jnwpu/20193730503.

Abstract:
This paper presents a bioinspired visual saliency model. The end-stopping mechanism in the primary visual cortex is introduced to extract features that represent contour information of latent salient objects, such as corners, line intersections and line endpoints, which are combined with brightness, color and orientation features to form the final saliency map. This model is an analog of the processing mechanism of visual signals from the retina, via the lateral geniculate nucleus (LGN), to the primary visual cortex V1. Firstly, according to the characteristics of the retina and LGN, an input im
14

Rogalska, Anna, and Piotr Napieralski. "The visual attention saliency map for movie retrospection." Open Physics 16, no. 1 (2018): 188–92. http://dx.doi.org/10.1515/phys-2018-0027.

Abstract:
The visual saliency map is becoming important and challenging for many scientific disciplines (robotic systems, psychophysics, cognitive neuroscience and computer science). The map created by the model indicates possible salient regions by taking into consideration face presence and motion, which is essential in motion pictures. By combining these cues we can obtain a credible saliency map with a low computational cost.
15

TIAN, MINGHUI, SHOUHONG WAN, and LIHUA YUE. "A VISUAL ATTENTION MODEL FOR NATURAL SCENES BASED ON DYNAMIC FEATURE COMBINATION." International Journal of Software Engineering and Knowledge Engineering 20, no. 08 (2010): 1077–95. http://dx.doi.org/10.1142/s0218194010005043.

Abstract:
In recent years, many research works have indicated that human visual attention is very helpful in research areas related to computer vision, such as object recognition, scene understanding and object-based image/video retrieval or annotation. This paper presents a visual attention model for natural scenes based on a dynamic feature combination strategy. The model can be divided into three parts: feature extraction, dynamic feature combination and salient object detection. First, the saliency features of color, information entropy and salient boundary are extracted from an
16

Xu, Xin, Nan Mu, and Hong Zhang. "Inferring Visual Perceptual Object by Adaptive Fusion of Image Salient Features." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/973241.

Abstract:
A saliency computational model with active environment perception can be useful for many applications, including image retrieval, object recognition, and image segmentation. Previous work on bottom-up saliency computation typically relies on hand-crafted low-level image features. However, the adaptation of a saliency computational model towards different kinds of scenes remains a challenge. A low-level image feature can contribute greatly to saliency computation on some images but may be detrimental on others. In this work, a novel data-driven approach is proposed to adaptively selec
17

Zeng, Jian Qin, Wei Chen, Guang Zheng Zhang, and Kai Guo. "Exploiting Saliency Filters and Domain Knowledge for Saliency Estimation." Applied Mechanics and Materials 462-463 (November 2013): 410–15. http://dx.doi.org/10.4028/www.scientific.net/amm.462-463.410.

Abstract:
Saliency estimation has become a valuable tool in image processing and raised much interest in theory and applications. Despite significant recent progress, the performance of the best available saliency estimation approaches still lags behind human visual systems. In this paper we used saliency filters and domain knowledge in photography to estimate saliency. Experiments show that our method can successfully detect the true salient content from images.
18

Mevorach, Carmel, Lilach Shalev, Harriet A. Allen, and Glyn W. Humphreys. "The Left Intraparietal Sulcus Modulates the Selection of Low Salient Stimuli." Journal of Cognitive Neuroscience 21, no. 2 (2009): 303–15. http://dx.doi.org/10.1162/jocn.2009.21044.

Abstract:
Neuropsychological and functional imaging studies have suggested a general right hemisphere advantage for processing global visual information and a left hemisphere advantage for processing local information. In contrast, a recent transcranial magnetic stimulation study [Mevorach, C., Humphreys, G. W., & Shalev, L. Opposite biases in salience-based selection for the left and right posterior parietal cortex. Nature Neuroscience, 9, 740–742, 2006b] demonstrated that functional lateralization of selection in the parietal cortices on the basis of the relative salience of stimuli might provide
19

Takano, Hironobu, Taira Nagashima, and Kiyomi Nakamura. "Characteristics of Visual Saliency Caused by Character Feature for Reconstruction of Saliency Map Model." Vision 5, no. 4 (2021): 49. http://dx.doi.org/10.3390/vision5040049.

Abstract:
Visual saliency maps have been developed to estimate the bottom-up visual attention of humans. A conventional saliency map represents bottom-up visual attention using image features such as intensity, orientation, and color. However, it is difficult to estimate visual attention with a conventional saliency map in the case of top-down visual attention. In this study, we investigate the visual saliency of characters using still images that include both characters and symbols. The experimental results indicate that characters have specific visual saliency independent of the type
20

Chen, Yuantao, Weihong Xu, Fangjun Kuang, and Shangbing Gao. "The Study of Randomized Visual Saliency Detection Algorithm." Computational and Mathematical Methods in Medicine 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/380245.

Abstract:
The quality of image segmentation based on a visual saliency map depends heavily on the underlying visual saliency metric. Most existing metrics produce only a sketchy saliency map, and a rough saliency map will affect the image segmentation results. This paper presents a randomized visual saliency detection algorithm, which can quickly generate a detailed saliency map of the same size as the original input image. The randomized saliency detection method can be applied to real-time requirements for image content-based scalin
21

Pang, Yu, Xiaosheng Yu, Yunhe Wu, Chengdong Wu, and Yang Jiang. "Bagging-based saliency distribution learning for visual saliency detection." Signal Processing: Image Communication 87 (September 2020): 115928. http://dx.doi.org/10.1016/j.image.2020.115928.

22

White, Brian J., Janis Y. Kan, Ron Levy, Laurent Itti, and Douglas P. Munoz. "Superior colliculus encodes visual saliency before the primary visual cortex." Proceedings of the National Academy of Sciences 114, no. 35 (2017): 9451–56. http://dx.doi.org/10.1073/pnas.1701003114.

Abstract:
Models of visual attention postulate the existence of a bottom-up saliency map that is formed early in the visual processing stream. Although studies have reported evidence of a saliency map in various cortical brain areas, determining the contribution of phylogenetically older pathways is crucial to understanding its origin. Here, we compared saliency coding from neurons in two early gateways into the visual system: the primary visual cortex (V1) and the evolutionarily older superior colliculus (SC). We found that, while the response latency to visual stimulus onset was earlier for V1 neurons
23

Zhu, Lin, Xianzhang Chen, Xiao Wang, and Hua Huang. "Finding Visual Saliency in Continuous Spike Stream." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (2024): 7757–65. http://dx.doi.org/10.1609/aaai.v38i7.28610.

Abstract:
As a bio-inspired vision sensor, the spike camera emulates the operational principles of the fovea, a compact retinal region, by employing spike discharges to encode the accumulation of per-pixel luminance intensity. Leveraging its high temporal resolution and bio-inspired neuromorphic design, the spike camera holds significant promise for advancing computer vision applications. Saliency detection mimics the behavior of human beings and captures the most salient region from a scene. In this paper, we investigate visual saliency in the continuous spike stream for the first time. To effecti
24

Lopez-Alanis, Alberto, Rocio A. Lizarraga-Morales, Raul E. Sanchez-Yanez, Diana E. Martinez-Rodriguez, and Marco A. Contreras-Cruz. "Visual Saliency Detection Using a Rule-Based Aggregation Approach." Applied Sciences 9, no. 10 (2019): 2015. http://dx.doi.org/10.3390/app9102015.

Abstract:
In this paper, we propose an approach for salient pixel detection using a rule-based system. In our proposal, rules are automatically learned by combining four saliency models. The learned rules are utilized for the detection of pixels of the salient object in a visual scene. The proposed methodology consists of two main stages. Firstly, in the training stage, the knowledge extracted from outputs of four state-of-the-art saliency models is used to induce an ensemble of rough-set-based rules. Secondly, the induced rules are utilized by our system to determine, in a binary manner, the pixels cor
25

Dhanushree, M., R. Priya, P. Aruna, and R. Bhavani. "A Framework for Video Summarization using Visual Attention Technique." Indian Journal Of Science And Technology 17, no. 15 (2024): 1586–95. http://dx.doi.org/10.17485/ijst/v17i15.456.

Abstract:
Objectives: To develop an efficient video summarization technique that utilizes the saliency map to mimic the human way of selecting the important events in a given video. Methods: This paper proposes the Histogram-based Weighted Fusion (HWF) algorithm, which uses spatial and temporal saliency maps as guidance in creating the summary of the video. The spatial saliency score and temporal saliency score obtained from the corresponding saliency maps are fused using the proposed HWF algorithm to obtain the frame-level importance score. It tries to depict the visual attention of the
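The truncated abstract does not spell out the HWF weighting, so the following is only a hypothetical sketch of the general fusion step it describes: collapse a frame's spatial and temporal saliency maps into scores and blend them into one importance value. `w_spatial` and the mean-pooling are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def frame_importance(spatial_map, temporal_map, w_spatial=0.5):
    """Hypothetical fusion of per-frame spatial and temporal saliency
    into a single frame-level importance score."""
    s = spatial_map.mean()   # frame-level spatial saliency score
    t = temporal_map.mean()  # frame-level temporal saliency score
    return w_spatial * s + (1.0 - w_spatial) * t

# Frames whose fused score exceeds a threshold would be kept as
# keyframes for the summary.
```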
26

Woo, ChaeEun, SuMin Lee, Soo-Min Park, Serin Choi, Jekyung Ryu, and Byung-Hyung Kim. "Improved Visual Saliency Prediction Based on Video Swin Transformers." Journal of Korea Multimedia Society 27, no. 11 (2024): 1314–25. https://doi.org/10.9717/kmms.2024.27.11.1314.

27

Zhou, Lai, Tianfei Zhou, Salman Khan, Hanqiu Sun, Jianbing Shen, and Ling Shao. "Weakly Supervised Visual Saliency Prediction." IEEE Transactions on Image Processing 31 (2022): 3111–24. http://dx.doi.org/10.1109/tip.2022.3158064.

28

Li, Jia. "Learning-based visual saliency computation." ACM SIGMultimedia Records 2, no. 4 (2010): 8–9. http://dx.doi.org/10.1145/2039331.2039336.

29

Wang, Qi, Yuan Yuan, and Pingkun Yan. "Visual Saliency by Selective Contrast." IEEE Transactions on Circuits and Systems for Video Technology 23, no. 7 (2013): 1150–55. http://dx.doi.org/10.1109/tcsvt.2012.2226528.

30

Jian, Meng, Lifang Wu, Cheolkon Jung, Qingtao Fu, and Ting Jia. "Visual saliency estimation using constraints." Neurocomputing 290 (May 2018): 1–11. http://dx.doi.org/10.1016/j.neucom.2018.02.004.

31

Li, Jia, Yonghong Tian, and Tiejun Huang. "Visual Saliency with Statistical Priors." International Journal of Computer Vision 107, no. 3 (2013): 239–53. http://dx.doi.org/10.1007/s11263-013-0678-0.

32

Kim, C., and P. Milanfar. "Visual saliency in noisy images." Journal of Vision 13, no. 4 (2013): 5. http://dx.doi.org/10.1167/13.4.5.

33

Li, Bing, Weihua Xiong, and Weiming Hu. "Visual Saliency Map from Tensor Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2021): 1585–91. http://dx.doi.org/10.1609/aaai.v26i1.8327.

Abstract:
Modeling the visual saliency map of an image provides important information for image semantic understanding in many applications. Most existing computational visual saliency models follow a bottom-up framework that generates an independent saliency map in each selected visual feature space and combines them in a proper way. Two big challenges to be addressed explicitly in these methods are (1) which features should be extracted for all pixels of the input image and (2) how to dynamically determine the importance of the saliency map generated in each feature space. In order to address these problems, we
34

Chen, Ao-Tian, and Tian-Ao Chen. "Intelligent Vehicle Visual Navigation Algorithm Based on Visual Saliency Improvement." Journal of Sensors 2022 (September 17, 2022): 1–12. http://dx.doi.org/10.1155/2022/5865703.

Abstract:
The automobile has gradually developed into an indispensable tool for human daily travel and transportation. Further reducing the traffic accident rate to improve road traffic safety is a global issue worthy of concern for human beings, and a common concern for political circles, scholars, researchers, and other workers in the field of transportation all over the world. Therefore, in this paper, by studying a large amount of literature and carrying out relevant model construction, based on the theories of visual
35

Pandey, Shilpa, and Gaurav Harit. "Handwritten Annotation Spotting in Printed Documents Using Top-Down Visual Saliency Models." ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 3 (2022): 1–25. http://dx.doi.org/10.1145/3485468.

Abstract:
In this article, we address the problem of localizing text and symbolic annotations on the scanned image of a printed document. Previous approaches have considered the task of annotation extraction as binary classification into printed and handwritten text. In this work, we further subcategorize the annotations as underlines, encirclements, inline text, and marginal text. We have collected a new dataset of 300 documents constituting all classes of annotations marked around or in-between printed text. Using the dataset as a benchmark, we report the results of two saliency formulations—CRF Salie
36

Panda, Bishwabara, Manas Ranjan Nayak, Pradeep Kumar Mallick, and Abhishek Basu. "Image Watermarking Framework using Histogram Equalization and Visual Saliency." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 17, no. 4 (2023): 457–68. http://dx.doi.org/10.37936/ecti-cit.2023174.252375.

Abstract:
This paper proposes a digital image watermarking strategy using histogram equalization and visual saliency, followed by LSB (Least Significant Bit) replacement, for better imperceptibility and hiding capacity. With this technique, a saliency map identifies the less observable parts of the original image, which are progressively embedded with increasing amounts of information guided by histogram equalization. The output of saliency detection is the perceptible areas within an image, the most noticeable positions from the perspective of vision; as a result, any changes made outside those areas will
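A minimal sketch of the embedding idea this entry describes: write payload bits into the least significant bits of pixels that a saliency map marks as inconspicuous. The fixed threshold, and embedding a single bit per pixel rather than the paper's progressively increasing amounts, are simplifying assumptions.

```python
import numpy as np

def embed_bits(cover, saliency, bits, threshold=0.3):
    """Hypothetical saliency-guided LSB embedding: hide payload bits
    (sequence of 0/1) in low-saliency pixels of a uint8 grayscale
    cover image, so changes land in less noticeable regions."""
    stego = cover.copy()
    low_sal_idx = np.flatnonzero(saliency.ravel() < threshold)
    assert len(bits) <= len(low_sal_idx), "payload too large for cover"
    flat = stego.ravel()  # view into stego
    for bit, i in zip(bits, low_sal_idx):
        flat[i] = (flat[i] & 0xFE) | bit  # replace the LSB
    return stego
```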
37

Yin, Haohui. "Focus-SLAM: A visual Monocular SLAM." Journal of Physics: Conference Series 2829, no. 1 (2024): 012025. http://dx.doi.org/10.1088/1742-6596/2829/1/012025.

Abstract:
This research presents a novel simultaneous localization and mapping algorithm, called Focus-SLAM, which simulates human navigation strategies by integrating a visual saliency model, SalNavNet, into a traditional monocular SLAM paradigm. SalNavNet introduces an innovative design that incorporates a correlation module and an adaptive Exponential Moving Average (EMA) component, effectively addressing the prevalent center bias issue found in contemporary saliency models. As a result, the system enhances target fixation by accentuating salient features more effectively. Extensive experi
38

Varga, Domonkos. "Saliency-Guided Local Full-Reference Image Quality Assessment." Signals 3, no. 3 (2022): 483–96. http://dx.doi.org/10.3390/signals3030028.

Abstract:
Research and development of image quality assessment (IQA) algorithms have been in the focus of the computer vision and image processing community for decades. The intent of IQA methods is to estimate the perceptual quality of digital images, correlating as highly as possible with human judgements. Full-reference image quality assessment algorithms, which have full access to the distortion-free images, usually contain two phases: local image quality estimation and pooling. Previous works have utilized visual saliency in the final pooling stage. In addition to this, visual saliency was utilized as
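The saliency-in-pooling idea mentioned in this entry can be stated in a few lines: weight a full-reference metric's local quality map by saliency so that distortions in attended regions dominate the final score. A minimal sketch, with the specific local metric left abstract:

```python
import numpy as np

def saliency_weighted_pooling(local_quality, saliency, eps=1e-12):
    """Pool a local quality map (e.g. per-pixel SSIM-like scores)
    using visual saliency as the weight."""
    w = saliency / (saliency.sum() + eps)   # normalize weights
    return float((w * local_quality).sum())
```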
39

Yang, Guoan, Libo Jian, Zhengzhi Lu, Junjie Yang, and Deyang Liu. "A novel image recognition approach using multiscale saliency model and GoogLeNet." Electronic Imaging 2020, no. 10 (2020): 97–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.10.ipas-097.

Abstract:
Applying a saliency model of the visual selective attention mechanism to the preprocessing stage of image recognition is appealing. However, the mechanism of visual perception is still unclear, so existing visual saliency models are not ideal. To this end, this paper proposes a novel image recognition approach using a multiscale saliency model and GoogLeNet. First, a multi-scale convolutional neural network was used to construct multiscale saliency maps, which could serve as filters. Second, the original image was combined with the saliency maps to generate the filtered image, whic
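The abstract is cut off before detailing how the image and saliency maps are combined, so the following is only one plausible reading of that step: multiplicative modulation of the input image by a normalized saliency map before recognition. The function and its single-map form are assumptions, not the paper's method.

```python
import numpy as np

def saliency_filter(image, saliency_map):
    """Hypothetical combination step: emphasize salient pixels of an
    RGB image by multiplying it with a normalized saliency map."""
    s = saliency_map / (saliency_map.max() + 1e-12)
    return image * s[..., None]  # broadcast the map over color channels
```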
40

Lalonde, Kaylah, and Rachael Frush Holt. "Preschoolers Benefit From Visually Salient Speech Cues." Journal of Speech, Language, and Hearing Research 58, no. 1 (2015): 135–50. http://dx.doi.org/10.1044/2014_jslhr-h-13-0343.

Abstract:
Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. It also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visu
41

Favorskaya, M. N., and L. C. Jain. "Saliency detection in deep learning era: trends of development." Information and Control Systems, no. 3 (June 21, 2019): 10–36. http://dx.doi.org/10.31799/1684-8853-2019-3-10-36.

Abstract:
Introduction: Saliency detection is a fundamental task of computer vision. Its ultimate aim is to localize the objects of interest that grab human visual attention with respect to the rest of the image. A great variety of saliency models based on different approaches has been developed since the 1990s. In recent years, saliency detection has become one of the most actively studied topics in the theory of Convolutional Neural Networks (CNNs). Many original solutions using CNNs were proposed for salient object detection and even for event detection. Purpose: A detailed survey of saliency detection methods in deep lea
42

Sánchez-García, Carolina, Sonia Kandel, Christophe Savariaux, Nara Ikumi, and Salvador Soto-Faraco. "Time course of audio–visual phoneme identification: A cross-modal gating study." Seeing and Perceiving 25 (2012): 194. http://dx.doi.org/10.1163/187847612x648233.

Abstract:
When both are present, visual and auditory information are combined in order to decode the speech signal. Past research has addressed to what extent visual information contributes to distinguishing confusable speech sounds, but has usually ignored the continuous nature of speech perception. Here we tap into the time course of the contribution of visual and auditory information during the process of speech perception. To this end, we designed an audio-visual gating task with videos recorded with a high-speed camera. Participants were asked to identify gradually longer fragments of pseudowords varying in
43

Veale, Richard, Ziad M. Hafed, and Masatoshi Yoshida. "How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (2017): 20160113. http://dx.doi.org/10.1098/rstb.2016.0113.

Abstract:
Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a ‘saliency map’ topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority map
44

Yong, Lein de Leon, and Suren Jayasuriya. "Automated Saliency Prediction in Cinema Studies." Projections 17, no. 3 (2023): 43–63. http://dx.doi.org/10.3167/proj.2023.170303.

Abstract:
In visual cognition research, saliency refers to the prominence of specific elements in a scene. Moreover, saliency guidance is part of a filmmaker's toolset to build narratives and guide the audience into emotive responses. This article compares two Convolutional Neural Network (CNN) saliency mapping models with viewers' eye-position mapping to investigate the potential of automated saliency mapping in moving image studies by analyzing saliency's role during cinema's transition from one-shot to multiple-shot. Although the exact moment when montage and editing methods appeared cann
45

Sun, Zhi Hai, Teng Song, Wen Hui Zhou, and Hua Zhang. "Double Feature Combination: Region Contrast for Visual Salient Object Detection." Applied Mechanics and Materials 239-240 (December 2012): 811–15. http://dx.doi.org/10.4028/www.scientific.net/amm.239-240.811.

Abstract:
Visual saliency detection has become an important step between computer vision and digital image processing. Most recent methods build computational models based on color alone, which makes it difficult to overcome cluttered and textured backgrounds. This paper proposes a novel salient object detection algorithm that integrates region color contrast with histograms of oriented gradients (HoG). Extensive experiments show that our algorithm outperforms other state-of-the-art saliency methods, yielding higher precision and a better recall rate, and even a lower mean absolute error.
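A hypothetical sketch of the double-feature idea this entry describes, using a fixed grid in place of proper region segmentation: a block is salient when both its color statistics and its gradient-orientation histogram (a crude stand-in for HoG) differ from those of the other blocks. The grid size, `alpha`, and the distance measures are illustrative choices, not the paper's formulation.

```python
import numpy as np

def block_features(gray, rgb, n=8, bins=9):
    """Per-block mean color and gradient-orientation histogram on an
    n-by-n tiling of the image."""
    h, w = gray.shape
    gy, gx = np.gradient(gray.astype(float))
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx) % np.pi
    colors, hogs = [], []
    for i in range(n):
        for j in range(n):
            ys = slice(i * h // n, (i + 1) * h // n)
            xs = slice(j * w // n, (j + 1) * w // n)
            colors.append(rgb[ys, xs].reshape(-1, 3).mean(0))
            hist, _ = np.histogram(ang[ys, xs], bins=bins,
                                   range=(0, np.pi),
                                   weights=mag[ys, xs])
            hogs.append(hist / (hist.sum() + 1e-12))
    return np.array(colors), np.array(hogs)

def region_contrast_saliency(gray, rgb, alpha=0.5, n=8):
    """A block's saliency is its summed color-contrast and
    HoG-contrast against all other blocks; alpha balances the cues."""
    colors, hogs = block_features(gray, rgb, n)
    col_d = np.linalg.norm(colors[:, None] - colors[None, :], axis=-1)
    hog_d = np.linalg.norm(hogs[:, None] - hogs[None, :], axis=-1)
    sal = (alpha * col_d + (1 - alpha) * hog_d).sum(1)
    return (sal / (sal.max() + 1e-12)).reshape(n, n)
```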
46

Goda, Naokazu, Takuya Harada, Tadashi Ogawa, et al. "Influence of visual saliency in monkey visual cortex." Neuroscience Research 58 (January 2007): S96. http://dx.doi.org/10.1016/j.neures.2007.06.1125.

47

Zhong, Sheng-hua, Yan Liu, Feifei Ren, Jinghuan Zhang, and Tongwei Ren. "Video Saliency Detection via Dynamic Consistent Spatio-Temporal Attention Modelling." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (2013): 1063–69. http://dx.doi.org/10.1609/aaai.v27i1.8642.

Abstract:
The human vision system actively seeks salient regions and movements in video sequences to reduce the search effort. Modeling a computational visual saliency map provides important information for semantic understanding in many real-world applications. In this paper, we propose a novel video saliency detection model for detecting the attended regions that correspond to both interesting objects and dominant motions in video sequences. In the spatial saliency map, we inherit the classical bottom-up spatial saliency map. In the temporal saliency map, a novel optical flow model is proposed based on the dynam
48

Fareed, Mian, Qi Chun, Gulnaz Ahmed, Adil Murtaza, Muhammad Asif, and Muhammad Fareed. "Appearance-Based Salient Regions Detection Using Side-Specific Dictionaries." Sensors 19, no. 2 (2019): 421. http://dx.doi.org/10.3390/s19020421.

Abstract:
Image saliency detection is a very helpful step in many computer vision-based smart systems, reducing computational complexity by focusing only on the salient parts of the image. Currently, image saliency is detected through representation-based generative schemes, as these schemes are helpful for extracting concise representations of the stimuli and for capturing high-level semantics in visual information with a small number of active coefficients. In this paper, we propose a novel framework for salient region detection that uses appearance-based and regression-based schemes. The
49

Sivarajah, Yathunanthan, Eun-Jung Holden, Roberto Togneri, Michael Dentith, and Mark Lindsay. "Visual saliency and potential field data enhancements: Where is your attention drawn?" Interpretation 2, no. 4 (2014): SJ9–SJ21. http://dx.doi.org/10.1190/int-2013-0199.1.

Abstract:
Interpretation of gravity and magnetic data for exploration applications may be based on pattern recognition in which geophysical signatures of geologic features associated with localized characteristics are sought within data. A crucial control on what comprises noticeable and comparable characteristics in a data set is how images displaying those data are enhanced. Interpreters are provided with various image enhancement and display tools to assist their interpretation, although the effectiveness of these tools to improve geologic feature detection is difficult to measure. We addressed this
50

Shen, Gang, Wenjun Ma, Wen Zhai, Xuefei Lv, Guangyao Chen, and Yonghong Tian. "Retina-Inspired Models Enhance Visual Saliency Prediction." Entropy 27, no. 4 (2025): 436. https://doi.org/10.3390/e27040436.

Abstract:
Biologically inspired retinal preprocessing improves visual perception by efficiently encoding and reducing entropy in images. In this study, we introduce a new saliency prediction framework that combines a retinal model with deep neural networks (DNNs) using information theory ideas. By mimicking the human retina, our method creates clearer saliency maps with lower entropy and supports efficient computation with DNNs by optimizing information flow and reducing redundancy. We treat saliency prediction as an information maximization problem, where important regions have high information and low