Journal articles on the topic 'Crossmodal attention'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Crossmodal attention.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1. Driver, Jon, and Charles Spence. "Crossmodal attention." Current Opinion in Neurobiology 8, no. 2 (April 1998): 245–53. http://dx.doi.org/10.1016/s0959-4388(98)80147-5.
2. Spence, Charles. "Crossmodal attention." Scholarpedia 5, no. 5 (2010): 6309. http://dx.doi.org/10.4249/scholarpedia.6309.
3. Spence, Charles. "Crossmodal spatial attention." Annals of the New York Academy of Sciences 1191, no. 1 (March 2010): 182–200. http://dx.doi.org/10.1111/j.1749-6632.2010.05440.x.
4. Ciaramitaro, Vivian, and Dan Jentzen. "Crossmodal attention alters auditory contrast sensitivity." Seeing and Perceiving 25 (2012): 177. http://dx.doi.org/10.1163/187847612x648062.
Abstract:
We examined the influence of covert, endogenous, crossmodal attention on auditory contrast sensitivity in a two-interval forced-choice dual-task paradigm. Attending to a visual stimulus has been found to alter the visual contrast response function via a mechanism of contrast gain for sustained visual attention, or a combination of response gain and contrast gain for transient visual attention (Ling and Carrasco, 2006). We examined if and how auditory contrast sensitivity varied as a function of attentional load, the difficulty of a competing visual task, and how such effects compared to those found for the influences of attention on visual processing. In our paradigm, subjects listened to two sequential white noise stimuli, one of which was amplitude modulated. Subjects reported which interval contained the amplitude modulated auditory stimulus. At the same time a sequence of 5 letters was presented, in an RSVP stream at central fixation, for each interval. Subjects judged which interval contained the visual target. For a given block of trials, subjects judged which interval contained white letters (easy visual task) or, in a separate block of trials, which interval had more target letters ‘A’ (difficult visual task). We found that auditory thresholds were lower for the easy compared to the difficult visual task and that the shift in the auditory contrast response function was reminiscent of a contrast gain mechanism for visual contrast. Importantly, we found that the effects of crossmodal attention on the auditory contrast response function diminished with practice.
5. Gray, Rob, Rayka Mohebbi, and Hong Z. Tan. "The spatial resolution of crossmodal attention." ACM Transactions on Applied Perception 6, no. 1 (February 2009): 1–14. http://dx.doi.org/10.1145/1462055.1462059.
6. Driver, Jon, and Charles Spence. "Cross-modal links in spatial attention." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 353, no. 1373 (August 29, 1998): 1319–31. http://dx.doi.org/10.1098/rstb.1998.0286.
Abstract:
A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible crossmodal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision, can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these crossmodal interactions can be examined. For instance, when a hand is placed in a new position, stimulation of it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Crossmodal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed. Such preattentive crossmodal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.
7. Kreutzfeldt, Magali, Denise N. Stephan, Walter Sturm, Klaus Willmes, and Iring Koch. "The role of crossmodal competition and dimensional overlap in crossmodal attention switching." Acta Psychologica 155 (February 2015): 67–76. http://dx.doi.org/10.1016/j.actpsy.2014.12.006.
8. Driver, Jon, and Charles Spence. "Attention and the crossmodal construction of space." Trends in Cognitive Sciences 2, no. 7 (July 1998): 254–62. http://dx.doi.org/10.1016/s1364-6613(98)01188-7.
9. Convento, Silvia, Chiara Galantini, Nadia Bolognini, and Giuseppe Vallar. "Neuromodulation of crossmodal influences on visual cortex excitability." Seeing and Perceiving 25 (2012): 149. http://dx.doi.org/10.1163/187847612x647810.
Abstract:
Crossmodal interactions occur not only within brain regions deemed to be heteromodal, but also within primary sensory areas, traditionally considered as modality-specific. So far, mechanisms of crossmodal interactions in primary visual areas remain largely unknown. In the present study, we explored the effect of crossmodal stimuli on phosphene perception, induced by single-pulse transcranial magnetic stimulation (sTMS) delivered to the occipital visual cortex. In three experiments, we showed that redundant auditory and/or tactile information facilitated the detection of phosphenes induced by occipital sTMS, applied at sub-threshold intensity, which also increased their level of brightness, with the maximal enhancement occurring for trimodal stimulus combinations. Such crossmodal enhancement can be further boosted by the brain polarization of heteromodal areas mediating crossmodal links in spatial attention. Specifically, anodal transcranial direct current stimulation (tDCS) of both the occipital and the parietal cortices facilitated phosphene detection under unimodal conditions, whereas anodal tDCS of the parietal and temporal cortices enhanced phosphene detection selectively under crossmodal conditions, when auditory or tactile stimuli were combined with occipital sTMS. Overall, crossmodal interactions can enhance neural excitability within low-level visual areas, and tDCS can be used for boosting such crossmodal influences on visual responses, likely affecting mechanisms of crossmodal spatial attention involving feedback modulation from heteromodal areas on sensory-specific cortices. TDCS can effectively facilitate the integration of multisensory signals originating from the external world, hence improving visual perception.
10. Hidaka, Souta, and Ayako Yaguchi. "An Investigation of the Relationships Between Autistic Traits and Crossmodal Correspondences in Typically Developing Adults." Multisensory Research 31, no. 8 (2018): 729–51. http://dx.doi.org/10.1163/22134808-20181304.
Abstract:
Autism spectrum disorder (ASD) includes characteristics such as social and behavioral deficits that are considered common across the general population rather than unique to people with the diagnosis. People with ASD are reported to have sensory irregularities, including crossmodal perception. Crossmodal correspondences are phenomena in which arbitrary crossmodal inputs affect behavioral performance. Crossmodal correspondences are considered to be established through associative learning, but the learning cues are considered to differ across the types of correspondences. In order to investigate whether and how ASD traits affect crossmodal associative learning, this study examined the relationships between the magnitude of crossmodal correspondences and the degree of ASD traits among non-diagnosed adults. We found that, among three types of crossmodal correspondences (brightness–loudness, visual size–pitch, and visual location–pitch pairs), the brightness–loudness pair was related with total ASD traits and a subtrait (social skill). The magnitude of newly learned crossmodal associations (the visual apparent motion direction–pitch pair) also showed a relationship with an ASD subtrait (attention switching). These findings demonstrate that there are unique relationships between crossmodal associations and ASD traits, indicating that each ASD trait is differently involved in sensory associative learning.
11. Van der Stoep, N., T. C. W. Nijboer, and S. Van der Stigchel. "Erratum to: Exogenous orienting of crossmodal attention in 3-D space: Support for a depth-aware crossmodal attentional system." Psychonomic Bulletin & Review 21, no. 6 (April 23, 2014): 1629. http://dx.doi.org/10.3758/s13423-014-0651-0.
12. Evans, K. K., and A. Treisman. "Role of attention in visual-auditory crossmodal interactions." Journal of Vision 6, no. 6 (March 18, 2010): 388. http://dx.doi.org/10.1167/6.6.388.
13. Hugenschmidt, Christina E., Ann M. Peiffer, Thomas P. McCoy, Satoru Hayasaka, and Paul J. Laurienti. "Preservation of crossmodal selective attention in healthy aging." Experimental Brain Research 198, no. 2-3 (April 29, 2009): 273–85. http://dx.doi.org/10.1007/s00221-009-1816-3.
14. Factor, Zekiel Z., Cody W. Polack, and Ralph R. Miller. "Visual Gender Cues Guide Crossmodal Selective Attending to a Gender-Congruent Voice During Dichotic Listening." Experimental Psychology 67, no. 4 (July 2020): 246–54. http://dx.doi.org/10.1027/1618-3169/a000496.
Abstract:
Visual input of a face appears to influence the ability to selectively attend to one voice over another simultaneous voice. We examined this crossmodal effect, specifically the role face gender may have on selective attention to male and female gendered simultaneous voices. Using a within-subjects design, participants were presented with a dynamic male face, female face, or fixation cross, with each condition being paired with a dichotomous audio stream of male and female voices reciting different lists of concrete nouns. In Experiment 1a, the female voice was played in the right ear and the male voice in the left ear. In Experiment 1b, both voices were played in both ears with differences in volume mimicking the interaural intensity difference between disparately localized voices in naturalistic situations. Free recall of words spoken by the two voices immediately following stimulus presentation served as a proxy measure of attention. In both sections of the experiment, crossmodal congruity of face gender enhanced same-gender word recall. This effect indicates that crossmodal interaction between voices and faces guides auditory attention. The results contribute to our understanding of how humans navigate the crossmodal relationship between voices and faces to direct attention in social interactions such as those in the cocktail party scenario.
15. Rosli, Roslizawaty Mohd, Hong Z. Tan, Robert W. Proctor, and Rob Gray. "Attentional gradient for crossmodal proximal-distal tactile cueing of visual spatial attention." ACM Transactions on Applied Perception 8, no. 4 (November 2011): 1–12. http://dx.doi.org/10.1145/2043603.2043605.
16

CHEN, XIAOXI, QI CHEN, DINGGUO GAO, and ZHENZHU YUE. "Interaction between endogenous and exogenous orienting in crossmodal attention." Scandinavian Journal of Psychology 53, no. 4 (June 4, 2012): 303–8. http://dx.doi.org/10.1111/j.1467-9450.2012.00957.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17. Macaluso, Emiliano, and Jon Driver. "Spatial attention and crossmodal interactions between vision and touch." Neuropsychologia 39, no. 12 (January 2001): 1304–16. http://dx.doi.org/10.1016/s0028-3932(01)00119-1.
18. Turatto, Massimo, Veronica Mazza, and Carlo Umiltà. "Crossmodal object-based attention: Auditory objects affect visual processing." Cognition 96, no. 2 (June 2005): B55–B64. http://dx.doi.org/10.1016/j.cognition.2004.12.001.
19. McDonald, John J., Wolfgang A. Teder-Sälejärvi, Daniel Heraldez, and Steven A. Hillyard. "Electrophysiological evidence for the 'missing link' in crossmodal attention." Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale 55, no. 2 (2001): 141–49. http://dx.doi.org/10.1037/h0087361.
20. Macaluso, E. "Modulation of Human Visual Cortex by Crossmodal Spatial Attention." Science 289, no. 5482 (August 18, 2000): 1206–8. http://dx.doi.org/10.1126/science.289.5482.1206.
21. Lukas, Sarah, Andrea M. Philipp, and Iring Koch. "Crossmodal attention switching: Auditory dominance in temporal discrimination tasks." Acta Psychologica 153 (November 2014): 139–46. http://dx.doi.org/10.1016/j.actpsy.2014.10.003.
22. Green, Jessica J., and John J. McDonald. "An event-related potential study of supramodal attentional control and crossmodal attention effects." Psychophysiology 43, no. 2 (March 2006): 161–71. http://dx.doi.org/10.1111/j.1469-8986.2006.00394.x.
23. Kawai, Nobuyuki. "Crossmodal spatial attention shift produced by centrally presented gaze cues." Japanese Psychological Research 50, no. 2 (May 2008): 100–103. http://dx.doi.org/10.1111/j.1468-5884.2008.00366.x.
24. Shen, Hao, and Jaideep Sengupta. "The Crossmodal Effect of Attention on Preferences: Facilitation versus Impairment." Journal of Consumer Research 40, no. 5 (February 1, 2014): 885–903. http://dx.doi.org/10.1086/673261.
25. McDonald, J. J. "Multisensory Integration and Crossmodal Attention Effects in the Human Brain." Science 292, no. 5523 (June 8, 2001): 1791a. http://dx.doi.org/10.1126/science.292.5523.1791a.
26. Convento, Silvia, Md Shoaibur Rahman, and Jeffrey M. Yau. "Selective Attention Gates the Interactive Crossmodal Coupling between Perceptual Systems." Current Biology 28, no. 5 (March 2018): 746–52. http://dx.doi.org/10.1016/j.cub.2018.01.021.
27. McGaughy, Jill, Janita Turchi, and Martin Sarter. "Crossmodal divided attention in rats: effects of chlordiazepoxide and scopolamine." Psychopharmacology 115, no. 1-2 (June 1994): 213–20. http://dx.doi.org/10.1007/bf02244774.
28. Mesfin, Gebremariam, Nadia Hussain, Elahe Kani-Zabihi, Alexandra Covaci, Estêvão B. Saleme, and Gheorghita Ghinea. "QoE of cross-modally mapped Mulsemedia: an assessment using eye gaze and heart rate." Multimedia Tools and Applications 79, no. 11-12 (January 3, 2020): 7987–8009. http://dx.doi.org/10.1007/s11042-019-08473-5.
Abstract:
A great deal of research effort has been put into exploring crossmodal correspondences in the field of cognitive science, which refer to the systematic associations frequently made between different sensory modalities (e.g. high pitch is matched with angular shapes). However, the possibilities cross-modality opens in the digital world have been relatively unexplored. Therefore, we consider that studying the plasticity and the effects of crossmodal correspondences in a mulsemedia setup can bring novel insights about improving the human-computer dialogue and experience. Mulsemedia refers to the combination of three or more senses to create immersive experiences. In our experiments, users were shown six video clips associated with certain visual features based on color, brightness, and shape. We examined if the pairing with crossmodal matching sound and the corresponding auto-generated haptic effect, and smell would lead to an enhanced user QoE. For this, we used an eye-tracking device as well as a heart rate monitor wristband to capture users’ eye gaze and heart rate whilst they were experiencing mulsemedia. After each video clip, we asked the users to complete an on-screen questionnaire with a set of questions related to smell, sound and haptic effects targeting their enjoyment and perception of the experiment. Accordingly, the eye gaze and heart rate results showed significant influence of the cross-modally mapped multisensorial effects on the users’ QoE. Our results highlight that when the olfactory content is crossmodally congruent with the visual content, the visual attention of the users seems shifted towards the correspondent visual feature. Crossmodally matched media is also shown to result in an enhanced QoE compared to a video only condition.
29. Friedel, Evelyn B. N., Michael Bach, and Sven P. Heinrich. "Attentional Interactions Between Vision and Hearing in Event-Related Responses to Crossmodal and Conjunct Oddballs." Multisensory Research 33, no. 3 (July 1, 2020): 251–75. http://dx.doi.org/10.1163/22134808-20191329.
Abstract:
Are alternation and co-occurrence of stimuli of different sensory modalities conspicuous? In a novel audio-visual oddball paradigm, the P300 was used as an index of the allocation of attention to investigate stimulus- and task-related interactions between modalities. Specifically, we assessed effects of modality alternation and the salience of conjunct oddball stimuli that were defined by the co-occurrence of both modalities. We presented (a) crossmodal audio-visual oddball sequences, where both oddballs and standards were unimodal, but of a different modality (i.e., visual oddball with auditory standard, or vice versa), and (b) oddball sequences where standards were randomly of either modality while the oddballs were a combination of both modalities (conjunct stimuli). Subjects were instructed to attend to one of the modalities (whether part of a conjunct stimulus or not). In addition, we also tested specific attention to the conjunct stimuli. P300-like responses occurred even when the oddball was of the unattended modality. The pattern of event-related potential (ERP) responses obtained with the two crossmodal oddball sequences switched symmetrically between stimulus modalities when the task modality was switched. Conjunct oddballs elicited no oddball response if only one modality was attended. However, when conjunctness was specifically attended, an oddball response was obtained. Crossmodal oddballs capture sufficient attention even when not attended. Conjunct oddballs, however, are not sufficiently salient to attract attention when the task is unimodal. Even when specifically attended, the processing of conjunctness appears to involve additional steps that delay the oddball response.
30. Wada, Yuichi. "Crossmodal attention between vision and touch in temporal order judgment task." Japanese Journal of Psychology 74, no. 5 (2003): 420–27. http://dx.doi.org/10.4992/jjpsy.74.420.
31. Lloyd, Donna M., Natasha Merat, Francis McGlone, and Charles Spence. "Crossmodal links between audition and touch in covert endogenous spatial attention." Perception & Psychophysics 65, no. 6 (August 2003): 901–24. http://dx.doi.org/10.3758/bf03194823.
32. Mishra, Jyoti, and Adam Gazzaley. "Preserved Discrimination Performance and Neural Processing during Crossmodal Attention in Aging." PLoS ONE 8, no. 11 (November 20, 2013): e81894. http://dx.doi.org/10.1371/journal.pone.0081894.
33. Spence, Charles, Francesco Pavani, and Jon Driver. "Crossmodal links between vision and touch in covert endogenous spatial attention." Journal of Experimental Psychology: Human Perception and Performance 26, no. 4 (2000): 1298–319. http://dx.doi.org/10.1037/0096-1523.26.4.1298.
34. Korzeniowska, A. T., H. Root-Gutteridge, J. Simner, and D. Reby. "Audio–visual crossmodal correspondences in domestic dogs (Canis familiaris)." Biology Letters 15, no. 11 (November 2019): 20190564. http://dx.doi.org/10.1098/rsbl.2019.0564.
Abstract:
Crossmodal correspondences are intuitively held relationships between non-redundant features of a stimulus, such as auditory pitch and visual illumination. While a number of correspondences have been identified in humans to date (e.g. high pitch is intuitively felt to be luminant, angular and elevated in space), their evolutionary and developmental origins remain unclear. Here, we investigated the existence of audio–visual crossmodal correspondences in domestic dogs, and specifically, the known human correspondence in which high auditory pitch is associated with elevated spatial position. In an audio–visual attention task, we found that dogs engaged more with audio–visual stimuli that were congruent with human intuitions (high auditory pitch paired with a spatially elevated visual stimulus) compared to incongruent (low pitch paired with elevated visual stimulus). This result suggests that crossmodal correspondences are not a uniquely human or primate phenomenon and they cannot easily be dismissed as merely lexical conventions (i.e. matching ‘high’ pitch with ‘high’ elevation).
35. Eimer, Martin, and José van Velzen. "Spatial tuning of tactile attention modulates visual processing within hemifields: an ERP investigation of crossmodal attention." Experimental Brain Research 166, no. 3-4 (July 21, 2005): 402–10. http://dx.doi.org/10.1007/s00221-005-2380-0.
36. Eimer, Martin, José van Velzen, Bettina Forster, and Jon Driver. "Shifts of attention in light and in darkness: an ERP study of supramodal attentional control and crossmodal links in spatial attention." Cognitive Brain Research 15, no. 3 (February 2003): 308–23. http://dx.doi.org/10.1016/s0926-6410(02)00203-3.
37. Diederich, Adele, and Hans Colonius. "Multisensory Integration and Exogenous Spatial Attention: A Time-window-of-integration Analysis." Journal of Cognitive Neuroscience 31, no. 5 (May 2019): 699–710. http://dx.doi.org/10.1162/jocn_a_01386.
Abstract:
Although it is well documented that occurrence of an irrelevant and nonpredictive sound facilitates motor responses to a subsequent target light appearing nearby, the cause of this “exogenous spatial cuing effect” has been under discussion. On the one hand, it has been postulated to be the result of a shift of visual spatial attention possibly triggered by parietal and/or cortical supramodal “attention” structures. On the other hand, the effect has been considered to be due to multisensory integration based on the activation of multisensory convergence structures in the brain. Recent RT experiments have suggested that multisensory integration and exogenous spatial cuing differ in their temporal profiles of facilitation: When the nontarget occurs 100–200 msec before the target, facilitation is likely driven by crossmodal exogenous spatial attention, whereas multisensory integration effects are still seen when target and nontarget are presented nearly simultaneously. Here, we develop an extension of the time-window-of-integration model that combines both mechanisms within the same formal framework. The model is illustrated by fitting it to data from a focused attention task with a visual target and an auditory nontarget presented at horizontally or vertically varying positions. Results show that both spatial cuing and multisensory integration may coexist in a single trial in bringing about the crossmodal facilitation of RT effects. Moreover, the formal analysis via time window of integration allows to predict and quantify the contribution of either mechanism as they occur across different spatiotemporal conditions.
38

BEER, A. L., and B. RODER. "Unimodal and crossmodal effects of endogenous attention to visual and auditory motion." Cognitive, Affective, & Behavioral Neuroscience 4, no. 2 (June 1, 2004): 230–40. http://dx.doi.org/10.3758/cabn.4.2.230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39. Föcker, Julia, Kirsten Hötting, Matthias Gondan, and Brigitte Röder. "Unimodal and Crossmodal Gradients of Spatial Attention: Evidence from Event-related Potentials." Brain Topography 23, no. 1 (October 10, 2009): 1–13. http://dx.doi.org/10.1007/s10548-009-0111-8.
40. Poliakoff, E., S. Ashworth, C. Lowe, and C. Spence. "Vision and touch in ageing: Crossmodal selective attention and visuotactile spatial interactions." Neuropsychologia 44, no. 4 (January 2006): 507–17. http://dx.doi.org/10.1016/j.neuropsychologia.2005.07.004.
41. Rolfs, Martin, Ralf Engbert, and Reinhold Kliegl. "Crossmodal coupling of oculomotor control and spatial attention in vision and audition." Experimental Brain Research 166, no. 3-4 (July 20, 2005): 427–39. http://dx.doi.org/10.1007/s00221-005-2382-y.
42. Chica, Ana B., Daniel Sanabria, Juan Lupiáñez, and Charles Spence. "Comparing intramodal and crossmodal cuing in the endogenous orienting of spatial attention." Experimental Brain Research 179, no. 3 (December 8, 2006): 353–64. http://dx.doi.org/10.1007/s00221-006-0798-7.
43. Chica, Ana B., Daniel Sanabria, Juan Lupiáñez, and Charles Spence. "Comparing intramodal and crossmodal cuing in the endogenous orienting of spatial attention." Experimental Brain Research 179, no. 3 (February 27, 2007): 531. http://dx.doi.org/10.1007/s00221-007-0900-9.
44. Macaluso, E., M. Eimer, C. D. Frith, and J. Driver. "Preparatory states in crossmodal spatial attention: spatial specificity and possible control mechanisms." Experimental Brain Research 149, no. 1 (January 9, 2003): 62–74. http://dx.doi.org/10.1007/s00221-002-1335-y.
45. Nityananda, Vivek, and Lars Chittka. "Modality-specific attention in foraging bumblebees." Royal Society Open Science 2, no. 10 (October 2015): 150324. http://dx.doi.org/10.1098/rsos.150324.
Abstract:
Attentional demands can prevent humans and other animals from performing multiple tasks simultaneously. Some studies, however, show that tasks presented in different sensory modalities (e.g. visual and auditory) can be processed simultaneously. This suggests that, at least in these cases, attention might be modality-specific and divided differently between tasks when present in the same modality compared with different modalities. We investigated this possibility in bumblebees (Bombus terrestris) using a biologically relevant experimental set-up where they had to simultaneously choose more rewarding flowers and avoid simulated predatory attacks by robotic ‘spiders’. We found that when the tasks had to be performed using visual cues alone, bees failed to perform both tasks simultaneously. However, when highly rewarding flowers were indicated by olfactory cues and predators were indicated by visual cues, bees managed to perform both tasks successfully. Our results thus provide evidence for modality-specific attention in foraging bees and establish a novel framework for future studies of crossmodal attention in ecologically realistic settings.
46. Riggs, Sara Lu, and Nadine Sarter. "Tactile, Visual, and Crossmodal Visual-Tactile Change Blindness: The Effect of Transient Type and Task Demands." Human Factors: The Journal of the Human Factors and Ergonomics Society 61, no. 1 (December 19, 2018): 5–24. http://dx.doi.org/10.1177/0018720818818028.
Abstract:
Objective: The present study examined whether tactile change blindness and crossmodal visual-tactile change blindness occur in the presence of two transient types and whether their incidence is affected by the addition of a concurrent task. Background: Multimodal and tactile displays have been proposed as a promising means to overcome data overload and support attention management. To ensure the effectiveness of these displays, researchers must examine possible limitations of human information processing, such as tactile and crossmodal change blindness. Method: Twenty participants performed an unmanned aerial vehicle (UAV) monitoring task that included visual and tactile cues. They completed four blocks of 70 trials each, one involving visual transients, the other tactile transients. A search task was added to determine whether increased workload leads to a higher risk of change blindness. Results: The findings confirm that tactile change detection suffers in terms of response accuracy, sensitivity, and response bias in the presence of a tactile transient. Crossmodal visual-tactile change blindness was not observed. Also, change detection was not affected by the addition of the search task and helped reduce response bias. Conclusion: Tactile displays can help support multitasking and attention management, but their design needs to account for tactile change blindness. Simultaneous presentation of multiple tactile indications should be avoided as it adversely affects change detection. Application: The findings from this research will help inform the design of multimodal and tactile interfaces in data-rich domains, such as military operations, aviation, and healthcare.
47. Magosso, Elisa, Andrea Serino, Giuseppe di Pellegrino, and Mauro Ursino. "Crossmodal Links between Vision and Touch in Spatial Attention: A Computational Modelling Study." Computational Intelligence and Neuroscience 2010 (2010): 1–13. http://dx.doi.org/10.1155/2010/304941.
Abstract:
Many studies have revealed that attention operates across different sensory modalities, to facilitate the selection of relevant information in the multimodal situations of everyday life. Cross-modal links have been observed either when attention is directed voluntarily (endogenous) or involuntarily (exogenous). The neural basis of cross-modal attention presents a significant challenge to cognitive neuroscience. Here, we used a neural network model to elucidate the neural correlates of visual-tactile interactions in exogenous and endogenous attention. The model includes two unimodal (visual and tactile) areas connected with a bimodal area in each hemisphere and a competition between the two hemispheres. The model is able to explain cross-modal facilitation both in exogenous and endogenous attention, ascribing it to an advantaged activation of the bimodal area on the attended side (via a top-down or bottom-up biasing), with concomitant inhibition towards the opposite side. The model suggests that a competitive/cooperative interaction with biased competition may mediate both forms of cross-modal attention.
48. Seeley, W. P. "Hearing How Smooth It Looks: Selective Attention and Crossmodal Perception in the Arts." Essays in Philosophy 13, no. 2 (2012): 498–517. http://dx.doi.org/10.7710/1526-0569.1434.
49. Gogate, Lakshmi, and Madhavilatha Maganti. "The Dynamics of Infant Attention: Implications for Crossmodal Perception and Word-Mapping Research." Child Development 87, no. 2 (March 2016): 345–64. http://dx.doi.org/10.1111/cdev.12509.
50. de Jong, Ritske, Paolo Toffanin, and Marten Harbers. "Dynamic crossmodal links revealed by steady-state responses in auditory–visual divided attention." International Journal of Psychophysiology 75, no. 1 (January 2010): 3–15. http://dx.doi.org/10.1016/j.ijpsycho.2009.09.013.