Academic literature on the topic 'Crossmodal attention'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Crossmodal attention.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Crossmodal attention"

1

Driver, Jon, and Charles Spence. "Crossmodal attention." Current Opinion in Neurobiology 8, no. 2 (April 1998): 245–53. http://dx.doi.org/10.1016/s0959-4388(98)80147-5.

2

Spence, Charles. "Crossmodal attention." Scholarpedia 5, no. 5 (2010): 6309. http://dx.doi.org/10.4249/scholarpedia.6309.

3

Spence, Charles. "Crossmodal spatial attention." Annals of the New York Academy of Sciences 1191, no. 1 (March 2010): 182–200. http://dx.doi.org/10.1111/j.1749-6632.2010.05440.x.

4

Ciaramitaro, Vivian, and Dan Jentzen. "Crossmodal attention alters auditory contrast sensitivity." Seeing and Perceiving 25 (2012): 177. http://dx.doi.org/10.1163/187847612x648062.

Abstract:
We examined the influence of covert, endogenous, crossmodal attention on auditory contrast sensitivity in a two-interval forced-choice dual-task paradigm. Attending to a visual stimulus has been found to alter the visual contrast response function via a mechanism of contrast gain for sustained visual attention, or a combination of response gain and contrast gain for transient visual attention (Ling and Carrasco, 2006). We examined if and how auditory contrast sensitivity varied as a function of attentional load (the difficulty of a competing visual task), and how such effects compared to the influences of attention on visual processing. In our paradigm, subjects listened to two sequential white-noise stimuli, one of which was amplitude modulated. Subjects reported which interval contained the amplitude-modulated auditory stimulus. At the same time, a sequence of five letters was presented in an RSVP stream at central fixation for each interval. Subjects judged which interval contained the visual target. For a given block of trials, subjects judged which interval contained white letters (easy visual task) or, in a separate block of trials, which interval had more target letters ‘A’ (difficult visual task). We found that auditory thresholds were lower for the easy compared to the difficult visual task, and that the shift in the auditory contrast response function was reminiscent of a contrast gain mechanism for visual contrast. Importantly, we found that the effects of crossmodal attention on the auditory contrast response function diminished with practice.
5

Gray, Rob, Rayka Mohebbi, and Hong Z. Tan. "The spatial resolution of crossmodal attention." ACM Transactions on Applied Perception 6, no. 1 (February 2009): 1–14. http://dx.doi.org/10.1145/1462055.1462059.

6

Driver, Jon, and Charles Spence. "Cross-modal links in spatial attention." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 353, no. 1373 (August 29, 1998): 1319–31. http://dx.doi.org/10.1098/rstb.1998.0286.

Abstract:
A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible crossmodal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision, can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these crossmodal interactions can be examined. For instance, when a hand is placed in a new position, stimulation of it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Crossmodal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed. Such preattentive crossmodal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.
7

Kreutzfeldt, Magali, Denise N. Stephan, Walter Sturm, Klaus Willmes, and Iring Koch. "The role of crossmodal competition and dimensional overlap in crossmodal attention switching." Acta Psychologica 155 (February 2015): 67–76. http://dx.doi.org/10.1016/j.actpsy.2014.12.006.

8

Driver, Jon, and Charles Spence. "Attention and the crossmodal construction of space." Trends in Cognitive Sciences 2, no. 7 (July 1998): 254–62. http://dx.doi.org/10.1016/s1364-6613(98)01188-7.

9

Convento, Silvia, Chiara Galantini, Nadia Bolognini, and Giuseppe Vallar. "Neuromodulation of crossmodal influences on visual cortex excitability." Seeing and Perceiving 25 (2012): 149. http://dx.doi.org/10.1163/187847612x647810.

Abstract:
Crossmodal interactions occur not only within brain regions deemed to be heteromodal, but also within primary sensory areas, traditionally considered as modality-specific. So far, mechanisms of crossmodal interactions in primary visual areas remain largely unknown. In the present study, we explored the effect of crossmodal stimuli on phosphene perception, induced by single-pulse transcranial magnetic stimulation (sTMS) delivered to the occipital visual cortex. In three experiments, we showed that redundant auditory and/or tactile information facilitated the detection of phosphenes induced by occipital sTMS, applied at sub-threshold intensity, which also increased their level of brightness, with the maximal enhancement occurring for trimodal stimulus combinations. Such crossmodal enhancement can be further boosted by the brain polarization of heteromodal areas mediating crossmodal links in spatial attention. Specifically, anodal transcranial direct current stimulation (tDCS) of both the occipital and the parietal cortices facilitated phosphene detection under unimodal conditions, whereas anodal tDCS of the parietal and temporal cortices enhanced phosphene detection selectively under crossmodal conditions, when auditory or tactile stimuli were combined with occipital sTMS. Overall, crossmodal interactions can enhance neural excitability within low-level visual areas, and tDCS can be used for boosting such crossmodal influences on visual responses, likely affecting mechanisms of crossmodal spatial attention involving feedback modulation from heteromodal areas on sensory-specific cortices. TDCS can effectively facilitate the integration of multisensory signals originating from the external world, hence improving visual perception.
10

Hidaka, Souta, and Ayako Yaguchi. "An Investigation of the Relationships Between Autistic Traits and Crossmodal Correspondences in Typically Developing Adults." Multisensory Research 31, no. 8 (2018): 729–51. http://dx.doi.org/10.1163/22134808-20181304.

Abstract:
Abstract Autism spectrum disorder (ASD) includes characteristics such as social and behavioral deficits that are considered common across the general population rather than unique to people with the diagnosis. People with ASD are reported to have sensory irregularities, including crossmodal perception. Crossmodal correspondences are phenomena in which arbitrary crossmodal inputs affect behavioral performance. Crossmodal correspondences are considered to be established through associative learning, but the learning cues are considered to differ across the types of correspondences. In order to investigate whether and how ASD traits affect crossmodal associative learning, this study examined the relationships between the magnitude of crossmodal correspondences and the degree of ASD traits among non-diagnosed adults. We found that, among three types of crossmodal correspondences (brightness–loudness, visual size–pitch, and visual location–pitch pairs), the brightness–loudness pair was related with total ASD traits and a subtrait (social skill). The magnitude of newly learned crossmodal associations (the visual apparent motion direction–pitch pair) also showed a relationship with an ASD subtrait (attention switching). These findings demonstrate that there are unique relationships between crossmodal associations and ASD traits, indicating that each ASD trait is differently involved in sensory associative learning.

Dissertations / Theses on the topic "Crossmodal attention"

1

Bullock, Thomas. "Crossmodal load and selective attention." Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2771/.

Abstract:
This thesis explores a currently dominant theory of attention: the load theory of selective attention and cognitive control (Lavie et al., 2004). Load theory has been posited as a potential resolution to the long-running debate over the locus of selection in attention. Numerous studies confirm that high visual perceptual load in a relevant task leads to reduced interference from task-irrelevant distractors, whereas high working memory load leads to increased interference from task-irrelevant distractors in a relevant task. However, very few studies have directly tested perceptual and working memory load effects on the processing of task-relevant stimuli, and even fewer studies have tested the impact of load on processing both within and between different sensory modalities. This thesis details several novel experiments that test both visual and auditory perceptual and working memory load effects on task-relevant change detection in a change-blindness "flicker" task. Results indicate that both high visual and high auditory perceptual load can impact change detection, which implies that the perceptual load model can account for load effects on change detection both within and between different sensory modalities. Results also indicate that high visual working memory load can impact change detection. By contrast, high auditory working memory load did not appear to impact change detection. These findings do not directly challenge load theory per se, but instead highlight how working memory load can have markedly different effects in different experimental paradigms. The final part of this thesis explores whether high perceptual load can attenuate distraction from highly emotionally salient stimuli. The findings suggest that potent emotional stimuli can "break through" and override the effects of high perceptual load, a result that presents a challenge to load theory. All findings are discussed with reference to new challenges to load theory, particularly the "dilution" argument.
2

Velasco, Carlos. "Crossmodal correspondences and attention in the context of multisensory (product) packaging design : applied crossmodal correspondences." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:b7e58b56-7f82-482d-92b4-269158242204.

Abstract:
The term 'crossmodal correspondence' refers to the tendency for people to match information across the senses. In this thesis, the associations of taste/flavour information (tastants and words) with shapes and colours are investigated. Furthermore, such correspondences are addressed in the context of multisensory packaging design. The focus in this thesis is on the way in which taste/flavour information can be communicated by means of the visual elements of product packaging. Through a series of experiments, I demonstrate that people associate tastes with the roundness/angularity of shapes, and that taste quality, hedonics, and intensity influence such correspondences. However, packaging roundness/angularity does not seem to drive these associations. Additionally, I demonstrate that culture and context systematically influence colour/flavour associations. Importantly, the results reported in this thesis suggest that taste/shape correspondences can influence taste expectations as a function of the visual attributes of product packaging. The results reported here also reveal that colour can influence the classification of, and search for, flavour information on a product's packaging. It turns out that the strength of the association between a flavour category and a colour is crucial to such an effect. The implications of these findings are discussed in light of theories of crossmodal correspondences, their applications, and directions for future research.
3

Rolfs, Martin, Ralf Engbert, and Reinhold Kliegl. "Crossmodal coupling of oculomotor control and spatial attention in vision and audition." Universität Potsdam, 2005. http://opus.kobv.de/ubp/volltexte/2011/5680/.

Abstract:
Fixational eye movements occur involuntarily during visual fixation of stationary scenes. The fastest components of these miniature eye movements are microsaccades, which can be observed about once per second. Recent studies demonstrated that microsaccades are linked to covert shifts of visual attention [e.g., Engbert & Kliegl (2003), Vision Res 43:1035-1045]. Here, we generalized this finding in two ways. First, we used peripheral cues, rather than the centrally presented cues of earlier studies. Second, we spatially cued attention in vision and audition to visual and auditory targets. An analysis of microsaccade responses revealed an equivalent impact of visual and auditory cues on the microsaccade-rate signature (i.e., an initial inhibition followed by an overshoot and a final return to the pre-cue baseline rate). With visual cues or visual targets, microsaccades were briefly aligned with cue direction and then opposite to cue direction during the overshoot epoch, probably as a result of inhibition of an automatic saccade to the peripheral cue. With left auditory cues and auditory targets, microsaccades oriented in the cue direction. Thus, microsaccades can be used to study crossmodal integration of sensory information and to map the time course of saccade preparation during covert shifts of visual and auditory attention.
4

Chen, Yi-Chuan. "The facilitatory crossmodal effect of auditory stimuli on visual perception." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:36dcc0ec-d655-423d-8191-a83d9fd76886.

Abstract:
The aim of the experiments reported in this thesis was to investigate the multisensory interactions taking place between vision and audition. The focus is on the modulatory role of the temporal coincidence and semantic congruency of pairs of auditory and visual stimuli. With regard to the temporal coincidence factor, whether, and how, the presentation of a simultaneous sound facilitates visual target perception was tested using the equivalent noise paradigm (Chapter 3) and the backward masking paradigm (Chapter 4). The results demonstrate that crossmodal facilitation can be observed in both visual detection and identification tasks. Importantly, however, the results also reveal that the sound not only had to be presented simultaneously, but also reliably, with the visual target. The suggestion is made that the reliable co-occurrence of the auditory and visual stimuli provides observers with the statistical regularity needed to assume that the visual and auditory stimuli likely originate from the same perceptual event (i.e., that they in some sense 'belong together'). The experiments reported in Chapters 5 through 8 were designed to investigate the role of semantic congruency in audiovisual interactions. The results of the experiments reported in Chapter 5 revealed that the semantic context provided by the soundtrack that a person happens to be listening to can modulate his/her visual conscious perception in the binocular rivalry situation. In Chapters 6-8, the time course of audiovisual semantic interactions was investigated using categorization, detection, and identification tasks on visual pictures. The results suggested that when the presentation of the sound leads the presentation of a picture by more than 240 ms, it induces a crossmodal semantic priming effect. In addition, when the presentation of the sound lags a semantically congruent picture by about 300 ms, it enhances performance, presumably by helping to maintain the visual representation in short-term memory. The results indicate that audiovisual semantic interactions constitute a heterogeneous group of phenomena. A crossmodal type-token binding framework is proposed to account for the parallel processing of the spatiotemporal and semantic interactions of multisensory inputs. The suggestion is that the congruent information in the type and token representation systems is integrated, finally binding into a unified multisensory object representation.
5

Lee, Jae Won. "Auditory cuing of visual attention : spatial and sound parameters." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:83efb40d-f77d-420e-9372-623ebae3224c.

Abstract:
The experiments reported in this thesis investigate whether the current understanding of crossmodal spatial attention can be applied to rear space, and how sound parameters can modulate crossmodal spatial cuing effects. It is generally accepted that the presentation of a brief auditory cue can exogenously orient spatial attention to the cued region of space, so that reaction times (RTs) to visual targets presented there are faster than to those presented elsewhere. Contrary to this conventional view of crossmodal spatial cuing effects, RTs to visual targets were equally facilitated by the presentation of an auditory cue in the front or in the rear, as long as the stimuli were presented ipsilaterally. Moreover, when an auditory cue and a visual target were presented from one of two lateral positions on each side in front, the spatial co-location of the two stimuli did not always lead to the fastest target RTs. Although contrasting with the traditional view on the importance of cue-target spatial co-location in exogenous crossmodal cuing effects, such findings are consistent with the evidence concerning multisensory integration in the superior colliculus (SC). Further investigation revealed that the presentation of an auditory cue with an exponential intensity change might be able to exogenously orient crossmodal spatial attention narrowly to the cued region of space. Taken together, the findings reported in this thesis suggest that not only the location but also the sound parameters (e.g., intensity change) of auditory cues can modulate the crossmodal exogenous orienting of spatial attention.
6

McDonald, John J. "Crossmodal interactions in stimulus-driven spatial attention and inhibition of return, evidence from behavioural and electrophysiological measures." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0026/NQ38936.pdf.

7

Marsja, Erik. "Attention capture by sudden and unexpected changes : a multisensory perspective." Doctoral thesis, Umeå universitet, Institutionen för psykologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-141852.

Abstract:
The main focus of this thesis was cross-modal attention capture by sudden and unexpected sounds and vibrations, known as deviants, presented in a stream of an otherwise repeated to-be-ignored stimulus. More specifically, the thesis takes a multisensory perspective and examines the possible similarities and differences in how deviant vibrations and sounds affect visual task performance (Study I), and whether the deviant and standard stimuli have to be presented within the same modality to capture attention away from visual tasks (Study II). Furthermore, by presenting spatial deviants (changing the source of the stimuli from one side of the body to the other) in audiotactile (bimodal), tactile, and auditory to-be-ignored sequences, it explores whether bimodal stimuli are more salient than unimodal ones (Study III). In addition, Study III tested the claim that short-term memory is domain-specific. In line with previous research, Study I found that both auditory and tactile deviants captured attention away from the visual task. However, the temporal dynamics of the two modalities seem to differ: practice appears to reduce the effect of vibratory deviants, whereas this is not the case for auditory deviants. This suggests that there are both central mechanisms (detection of the change) and sensory-specific mechanisms. Study II found that the deviant and standard stimuli must be presented within the same modality. If attention capture by deviants is produced by a mismatch within a neural model predicting upcoming stimuli, the neural model is likely built on stimuli within each modality separately. The results of Study III revealed that spatial and verbal short-term memory are negatively affected by a spatial change in to-be-ignored sequences, but only when the change occurs within a bimodal sequence. These results can be taken as evidence for a unitary account of short-term memory (verbal and spatial information stored in the same store) and for the idea that bimodal stimuli may be integrated into a unitary percept that makes any change in the stream more salient.
8

Kvasova, Daria. "The Role of cross-modal semantic interactions in real-world visuo-spatial attention." Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/668665.

Abstract:
In our everyday life we must effectively orient attention to relevant objects and events in multisensory environments. The impact of cross-modal links on attention orienting to spatial and temporal cues has been widely described. However, real-life scenarios provide a rich web of semantic information through the different sensory modalities. Although some previous studies have revealed an impact of crossmodal semantic correspondences, the results are mixed with regard to the conditions under which audiovisual semantic congruence can influence attention orienting. Furthermore, the vast majority of the research on crossmodal semantics has used simple, stereotyped displays that are far from achieving ecological validity. The present thesis attempts to close this gap by addressing the role of identity-based crossmodal relationships in attention orienting in scenarios closer to real-world conditions. To this end, the experiments presented here attempt to extrapolate and generalize previous findings to more realistic environments by using naturalistic and dynamic stimuli, and address the theoretical questions of task relevance and perceptual load. The outcome of the three empirical studies in this thesis leads to several conclusions. First, the effect of audio-visual semantic congruence on attention is not strictly automatic; instead, the results suggest that some top-down processing is necessary for audio-visual semantic congruence to trigger spatial orienting. Second, crossmodal semantic congruence can guide attention under goal-directed conditions in visual search, and also under free observation in complex and dynamic scenes. Third, perceptual load is a limiting factor for these interactions. These findings extend previous knowledge on object-based crossmodal interactions with simple stimuli and clarify how audio-visual semantically congruent relationships play out in realistic scenarios.
9

Nordmark, Anton. "Designing Multimodal Warning Signals for Cyclists of the Future." Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74884.

Abstract:
Traffic is a complex environment in which many actors take part, and several new technologies promise to reduce this complexity. However, cyclists, a particularly vulnerable road user group, have so far been somewhat set aside in these new developments, among them Cooperative Intelligent Traffic Systems (C-ITS) and their aspects of human–computer interaction. This master's thesis in industrial design engineering presents five multimodal collision warning signals for future cyclists in these supposed C-ITS, using a novel application of bone conduction headphones (BCH) via sensations of both sound and touch. The thesis project was conducted as a complementary subset of the larger research project 'V2Cyclist' orchestrated by RISE Interactive. V2Cyclist set out to adapt the wireless V2X protocol for cyclists by developing a physical prototype in the form of a bicycle helmet and a corresponding human–computer interface. A significant part of the theoretical framework for this thesis was multiple resource theory: a task in a different modality can be performed more effectively than one in a modality already taxed attentively. Literature on human factors was also applied, particularly with regard to the perception of sound; evidence suggests that humans evolved a perceptual bias for threatening and 'looming' sounds that appear to encroach on our peripersonal space, and ethological findings point toward an association of low-frequency sounds with largeness. Sound design techniques usually applied to more artistic ends, such as synthesis and mixing, were repurposed for the novel audiotactile context of this thesis. The thesis process was rooted in design thinking and consisted of four stages: context immersion, ideation, concept development, and evaluation, diverging and converging the novel design space of using BCH in an audiotactile, i.e., bimodal, way. The divergent approach generated a wide range of ideas. The later convergent approach did not result in one definitive design, as further evaluation is required, but also because of unknowns in terms of future hardware and network constraints. Given the plurality and diversity of cyclists, it may well follow that there is no single optimal collision warning design. Hence, a range of five different solutions is presented. Each of the five multimodal collision warnings presents a different approach to conveying a sense of danger and urgency. Some warning signals are static in type, while others are more dynamic. Given the presumed rarity of collision warnings, multiple design techniques and rationales were applied separately, as well as in combination, to create different warning stimuli that signal high urgency in an intuitive way: the use of conventions in design and culture; explicitness in the form of speech; visceral appeal via threatening and animalistic timbres; dynamic and procedurally generated feedback; multimodal salience; crossmodal evocation of 'roughness'; size-sound symbolism to imply largeness; and the innately activating characteristics of looming sounds.
APA, Harvard, Vancouver, ISO, and other styles
10

McIlhagga, William H., J. Baert, C. Bundesen, and A. Larsen. "Seeing or hearing? Perceptual independence, modality confusions, and crossmodal congruity effects with focused and divided attention." 2003. http://hdl.handle.net/10454/2673.

Full text
Abstract:
Observers were given brief presentations of pairs of simultaneous stimuli consisting of a visual and a spoken letter. In the visual focused-attention condition, only the visual letter should be reported; in the auditory focused-attention condition, only the spoken letter should be reported; in the divided-attention condition, both letters, as well as their respective modalities, should be reported (forced choice). The proportions of correct reports were nearly the same in the three conditions (no significant divided-attention decrement), and in the divided-attention condition, the probability that the visual letter was correctly reported was independent of whether the auditory letter was correctly reported. However, with a probability much higher than chance, the observers reported hearing the visual stimulus letter or seeing the spoken stimulus letter (modality confusions). The strength of the effect was nearly the same with focused as with divided attention. We also discovered a crossmodal congruity effect: Performance was better when the two letters in a stimulus pair were the same than when they differed in type.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Crossmodal attention"

1

Spence, Charles, and Jon Driver, eds. Crossmodal Space and Crossmodal Attention. Oxford University Press, USA, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Spence, Charles, and Jon Driver, eds. Crossmodal space and crossmodal attention. Oxford: Oxford University Press, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Spence, Charles, and Jon Driver, eds. Crossmodal Space and Crossmodal Attention. Oxford University Press, 2004. http://dx.doi.org/10.1093/acprof:oso/9780198524861.001.0001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Spence, Charles. Orienting Attention. Edited by Anna C. (Kia) Nobre and Sabine Kastner. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199675111.013.015.

Full text
Abstract:
The last 30 years or so have seen a rapid rise in research on attentional orienting from a crossmodal perspective. The majority of this research has tended to focus on the consequences of the covert orienting of attention (either to a sensory modality or spatial location) for both perception and neural information processing. The results of numerous studies have now highlighted the robust crossmodal links that exist in the case of both overt and covert, and both exogenous and endogenous spatial orienting. Neuroimaging studies have started to highlight the neural circuits underlying such crossmodal effects. Researchers are increasingly using transcranial magnetic stimulation in order to lesion temporarily putative areas within these networks; the aim of such research often being to determine whether attentional orienting is controlled by supramodal versus modality-specific neural systems that are somehow linked (this is known as the ‘separable-but-linked’ hypothesis). The available research demonstrates that crossmodal attentional orienting (and multisensory integration—from which it is sometimes hard to distinguish) can affect the very earliest stages of information processing in the human brain.
APA, Harvard, Vancouver, ISO, and other styles
6

Ganeri, Jonardon. Orienting Attention. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198757405.003.0007.

Full text
Abstract:
A puzzle about attention with a long history is addressed, the puzzle that attention can be captured by events and in such cases does not appear to be required for conscious experience. One might argue that there is still conscious attention in such cases, though of a global sort; but the view this chapter defends is rather that attention has a subliminal as well as a conscious form. Subliminally attention is the mode of activity of cognitive modules which are responsible for the orienting towards and processing of stimuli and their deliverance into awareness, as well as their crossmodal interconnections.
APA, Harvard, Vancouver, ISO, and other styles
7

Eimer, Martin. The Time Course of Spatial Attention. Edited by Anna C. (Kia) Nobre and Sabine Kastner. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199675111.013.006.

Full text
Abstract:
Event-related brain potential (ERP) measures have made important contributions to our understanding of the mechanisms of selective attention. This chapter provides a selective and non-technical review of some of these contributions. It will concentrate mainly on research that has studied spatially selective attentional processing in vision, although research on crossmodal links in spatial attention will also be discussed. The main purpose of this chapter is to illustrate how ERP methods have helped to provide answers to major theoretical questions that have shaped research on selective attention in the past 40 years.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Crossmodal attention"

1

Haarslev, Frederik, David Docherty, Stefan-Daniel Suvei, William Kristian Juel, Leon Bodenhagen, Danish Shaikh, Norbert Krüger, and Poramate Manoonpong. "Towards Crossmodal Learning for Smooth Multimodal Attention Orientation." In Social Robotics, 318–28. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05204-1_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

KING, ANDREW J. "Development of Multisensory Spatial Integration." In Crossmodal Space and Crossmodal Attention, 1–24. Oxford University Press, 2004. http://dx.doi.org/10.1093/acprof:oso/9780198524861.003.0001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

STEIN, BARRY E., TERRENCE R. STANFORD, MARK T. WALLACE, J. WILLIAM VAUGHAN, and WAN JIANG. "Crossmodal Spatial Interactions in Subcortical and Cortical Circuits." In Crossmodal Space and Crossmodal Attention, 25–50. Oxford University Press, 2004. http://dx.doi.org/10.1093/acprof:oso/9780198524861.003.0002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

GRAZIANO, MICHAEL S. A., CHARLES G. GROSS, CHARLOTTE S. R. TAYLOR, and TIRIN MOORE. "A System of Multimodal Areas in the Primate Brain." In Crossmodal Space and Crossmodal Attention, 51–67. Oxford University Press, 2004. http://dx.doi.org/10.1093/acprof:oso/9780198524861.003.0003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

LADAVAS, ELISABETTA, and ALESSANDRO FARNÈ. "Neuropsychological Evidence for Multimodal Representations of Space near Specific Body Parts." In Crossmodal Space and Crossmodal Attention, 68–98. Oxford University Press, 2004. http://dx.doi.org/10.1093/acprof:oso/9780198524861.003.0004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

COHEN, YALE E., and RICHARD A. ANDERSEN. "Multimodal Spatial Representations in the Primate Parietal Lobe." In Crossmodal Space and Crossmodal Attention, 99–121. Oxford University Press, 2004. http://dx.doi.org/10.1093/acprof:oso/9780198524861.003.0005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

POUGET, ALEXANDRE, SOPHIE DENEVE, and JEAN-RENÉ DUHAMEL. "A Computational Neural Theory of Multisensory Spatial Representations." In Crossmodal Space and Crossmodal Attention, 122–40. Oxford University Press, 2004. http://dx.doi.org/10.1093/acprof:oso/9780198524861.003.0006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

BERTELSON, PAUL, and BÉATRICE DE GELDER. "The Psychology of Multimodal Perception." In Crossmodal Space and Crossmodal Attention, 141–77. Oxford University Press, 2004. http://dx.doi.org/10.1093/acprof:oso/9780198524861.003.0007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

DRIVER, JON, and CHARLES SPENCE. "Crossmodal Spatial Attention: Evidence from Human Performance." In Crossmodal Space and Crossmodal Attention, 178–220. Oxford University Press, 2004. http://dx.doi.org/10.1093/acprof:oso/9780198524861.003.0008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

EIMER, MARTIN. "Electrophysiology of Human Crossmodal Spatial Attention." In Crossmodal Space and Crossmodal Attention, 221–45. Oxford University Press, 2004. http://dx.doi.org/10.1093/acprof:oso/9780198524861.003.0009.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Crossmodal attention"

1

Spence, Charles. "Crossmodal attention and multisensory integration." In Proceedings of the 5th International Conference on Multimodal Interfaces (ICMI '03). New York, New York, USA: ACM Press, 2003. http://dx.doi.org/10.1145/958432.958435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yao, Zhuxi, Liang Zhang, and Kan Zhang. "The crossmodal spatial attention affects face processing." In 2014 10th International Conference on Natural Computation (ICNC). IEEE, 2014. http://dx.doi.org/10.1109/icnc.2014.6975971.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Beltran-Gonzalez, C., and G. Sandini. "Visual attention priming based on crossmodal expectations." In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2005. http://dx.doi.org/10.1109/iros.2005.1545156.

Full text
APA, Harvard, Vancouver, ISO, and other styles