Journal articles on the topic 'Multimodal stimuli'

Consult the top 50 journal articles for your research on the topic 'Multimodal stimuli.'

1

Sinke, Christopher, Janina Neufeld, Daniel Wiswede, Hinderk M. Emrich, Stefan Bleich, and Gregor R. Szycik. "Multisensory processing in synesthesia — differences in the EEG signal during uni- and multimodal processing." Seeing and Perceiving 25 (2012): 53. http://dx.doi.org/10.1163/187847612x646749.

Abstract:
Synesthesia is a condition in which stimulation in one processing stream (e.g., letters or music) leads to perception in an unstimulated processing stream (e.g., colors). Behavioral differences in multisensory processing have been shown for multimodal illusions, but the differences in neural processing are still unclear. In the present study, we examined uni- and multimodal processing in 14 people with synesthesia and 13 controls using EEG recordings and a simple detection task. Stimuli were either presented acoustically, visually or multimodally (simultaneous visual and auditory stimulation). …
2

Binns, K. E., and T. E. Salt. "Importance of NMDA receptors for multimodal integration in the deep layers of the cat superior colliculus." Journal of Neurophysiology 75, no. 2 (1996): 920–30. http://dx.doi.org/10.1152/jn.1996.75.2.920.

Abstract:
1. Many sensory events contain multimodal information, yet most sensory nuclei are devoted to the analysis of single-modality information. In the deep superior colliculus (DSC), visual, auditory, and somatosensory information converges on individual multimodal neurons. The responses of multimodal neurons are determined by the temporal and spatial correspondence properties of the converging inputs, such that stimuli arising from the same event elicit a facilitated multimodal response. 2. N-methyl-D-aspartate (NMDA) receptors may underlie the detection of spatial and temporal coincidence and could …
3

Amerineni, Rajesh, Resh S. Gupta, and Lalit Gupta. "Multimodal Object Classification Models Inspired by Multisensory Integration in the Brain." Brain Sciences 9, no. 1 (2019): 3. http://dx.doi.org/10.3390/brainsci9010003.

Abstract:
Two multimodal classification models aimed at enhancing object classification through the integration of semantically congruent unimodal stimuli are introduced. The feature-integrating model, inspired by multisensory integration in the subcortical superior colliculus, combines unimodal features which are subsequently classified by a multimodal classifier. The decision-integrating model, inspired by integration in primary cortical areas, classifies unimodal stimuli independently using unimodal classifiers and classifies the combined decisions using a multimodal classifier. …
4

Redhead, Edward S. "Multimodal Discrimination Learning in Humans: Evidence for Configural Theory." Quarterly Journal of Experimental Psychology 60, no. 11 (2007): 1477–95. http://dx.doi.org/10.1080/17470210601154560.

Abstract:
Human contingency learning was used to compare the predictions of configural and elemental theories. In three experiments, participants were required to learn which indicators were associated with an increase in core temperature of a fictitious nuclear plant. Experiments 1 and 2 investigated the rate at which a triple-element stimulus (ABC) could be discriminated from either single-element stimuli (A, B, and C) or double-element stimuli (AB, BC, and AC). Experiment 1 used visual stimuli, whilst Experiment 2 used visual, auditory, and tactile stimuli. In both experiments the participants took …
5

Portnova, Galina V., and Daria A. Stebakova. "The multimodal emotion perception in codependent individuals." Neuroscience Research Notes 6, no. 1 (2023): 210. http://dx.doi.org/10.31117/neuroscirn.v6i1.210.

Abstract:
The emotional disturbances of individuals with codependency are often ignored. This study aimed to investigate the emotional perception of codependent individuals in four modalities: visual, auditory, tactile and olfactory. An EEG study was performed in which pleasant and unpleasant stimuli, selected by a panel of experts for each modality, were presented. Participants (fifteen codependent individuals and fifteen healthy volunteers) were instructed to assess the emotional impact and pleasantness of the stimuli. The method of EEG spaces was used to visualize how close perceived stimuli were according to EEG data …
6

Lecker, Caitlin A., Michael H. Parsons, Daniel R. Lecker, Ronald J. Sarno, and Faith E. Parsons. "The temporal multimodal influence of optical and auditory cues on the repellent behavior of ring-billed gulls (Larus delawarensis)." Wildlife Research 42, no. 3 (2015): 232. http://dx.doi.org/10.1071/wr15001.

Abstract:
Context: A generation of new animal repellents is based on the premise that threat stimuli are best interpreted through multiple sensory pathways. Ring-billed gulls (RBG; Larus delawarensis) offer a unique opportunity to assess the efficacy of multimodal repellents over time. This pest species is repelled by both auditory and optical cues and persists in stable populations, often remaining in the same colony for life. This distinctive attribute makes it possible to assess colonies independently over time and space. Aims: We assessed the unimodal (single-cue treatment) and multimodal (paired-cue) …
7

Teder-Sälejärvi, W. A., F. Di Russo, J. J. McDonald, and S. A. Hillyard. "Effects of Spatial Congruity on Audio-Visual Multimodal Integration." Journal of Cognitive Neuroscience 17, no. 9 (2005): 1396–409. http://dx.doi.org/10.1162/0898929054985383.

Abstract:
Spatial constraints on multisensory integration of auditory (A) and visual (V) stimuli were investigated in humans using behavioral and electrophysiological measures. The aim was to find out whether cross-modal interactions between A and V stimuli depend on their spatial congruity, as has been found for multisensory neurons in animal studies (Stein & Meredith, 1993). Randomized sequences of unimodal (A or V) and simultaneous bimodal (AV) stimuli were presented to right- or left-field locations while subjects made speeded responses to infrequent targets of greater intensity that occurred in …
8

Drewes, Asbjørn Mohr, Klaus-Peter Schipper, Georg Dimcevski, et al. "Multimodal assessment of pain in the esophagus: a new experimental model." American Journal of Physiology-Gastrointestinal and Liver Physiology 283, no. 1 (2002): G95–G103. http://dx.doi.org/10.1152/ajpgi.00496.2001.

Abstract:
A new multimodal pain assessment model was developed integrating electrical, mechanical, cold, and warmth stimuli into the same device. The device, with a bag and electrodes for electrical stimulation, was positioned in the lower part of the esophagus in 11 healthy subjects. Mechanical stimuli were delivered with an impedance planimetric system. Thermal stimuli were performed by circulating water of different temperatures (5–50°C) inside the bag. All subjects reported both nonpainful and painful local and referred sensations to all stimuli. Temporal summation to repeated electrical stimuli could …
9

Burr, David, Ottavia Silva, Guido Marco Cicchini, Martin S. Banks, and Maria Concetta Morrone. "Temporal mechanisms of multimodal binding." Proceedings of the Royal Society B: Biological Sciences 276, no. 1663 (2009): 1761–69. http://dx.doi.org/10.1098/rspb.2008.1899.

Abstract:
The simultaneity of signals from different senses—such as vision and audition—is a useful cue for determining whether those signals arose from one environmental source or from more than one. To understand better the sensory mechanisms for assessing simultaneity, we measured the discrimination thresholds for time intervals marked by auditory, visual or auditory–visual stimuli, as a function of the base interval. For all conditions, both unimodal and cross-modal, the thresholds followed a characteristic 'dipper function' in which the lowest thresholds occurred when discriminating against a non-zero …
10

Neurauter, M. Lucas. "Multimodal Warnings: Curve-Warning Design." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 22 (2005): 1945–49. http://dx.doi.org/10.1177/154193120504902213.

Abstract:
This proof-of-concept study evaluated appropriate modalities and respective stimuli for a curve-warning application. Objective and subjective measurements were collected in a simulator environment to compare conditions comprised of multiple stimuli from auditory (icon, tone, or speech), visual (Heads Down Display [HDD] or Heads Up Display [HUD]), and haptic (throttle push-back) modalities. The speech stimulus was shown to be the most appropriate auditory stimulus both objectively and subjectively. The HDD and HUD were found to be comparable in terms of objective performance, although the HDD …
11

Glover, Isabel S., and Stuart N. Baker. "Multimodal stimuli modulate rapid visual responses during reaching." Journal of Neurophysiology 122, no. 5 (2019): 1894–908. http://dx.doi.org/10.1152/jn.00158.2019.

Abstract:
The reticulospinal tract plays an important role in primate upper limb function, but methods for assessing its activity are limited. One promising approach is to measure rapid visual responses (RVRs) in arm muscle activity during a visually cued reaching task; these may arise from a tecto-reticulospinal pathway. We investigated whether changes in reticulospinal excitability can be assessed noninvasively using RVRs, by pairing the visual stimuli of the reaching task with electrical stimulation of the median nerve, galvanic vestibular stimulation, or loud sounds, all of which are known to activate …
12

Harrar, Vanessa, and Laurence R. Harris. "Multimodal Ternus: Visual, Tactile, and Visuo-Tactile Grouping in Apparent Motion." Perception 36, no. 10 (2007): 1455–64. http://dx.doi.org/10.1068/p5844.

Abstract:
Gestalt rules that describe how visual stimuli are grouped also apply to sounds, but it is unknown if the Gestalt rules also apply to tactile or uniquely multimodal stimuli. To investigate these rules, we used lights, touches, and a combination of lights and touches, arranged in a classic Ternus configuration. Three stimuli (A, B, C) were arranged in a row across three fingers. A and B were presented for 50 ms and, after a delay, B and C were presented for 50 ms. Subjects were asked whether they perceived AB moving to BC (group motion) or A moving to C (element motion). For all three types of …
13

Kumakura, Erika, and Kazuhiko Yokosawa. "Crossmodal correspondence by perceptual learning among multimodal stimuli." Proceedings of the Annual Convention of the Japanese Psychological Association 78 (September 10, 2014): 3AM-1-086. http://dx.doi.org/10.4992/pacjpa.78.0_3am-1-086.

14

Czigler, István, and László Balázs. "Event-related potentials and audiovisual stimuli: multimodal interactions." Neuroreport 12, no. 2 (2001): 223–26. http://dx.doi.org/10.1097/00001756-200102120-00009.

15

Улмасова, Хилолахон. "Simultaneous interpreting as a multimodal task." Арабский язык в эпоху глобализации: инновационные подходы и методы обучения 1, no. 1 (2023): 544–48. http://dx.doi.org/10.47689/atgd:iyom-vol1-iss1-pp544-548-id28645.

Abstract:
The paper examines the process of simultaneous interpreting and its difficulties as a multimodal task: the importance of visual information in simultaneous interpreting, simultaneous interpreting as a complex task, and the processing of visual, auditory and audiovisual stimuli in the brain.
16

Kobus, David A., John D. Moses, and Faye Alvarado Bloom. "Effect of Multimodal Stimulus Presentation on Recall." Perceptual and Motor Skills 78, no. 1 (1994): 320–22. http://dx.doi.org/10.2466/pms.1994.78.1.320.

Abstract:
This study was conducted to investigate how the mode of stimulus presentation affects recall in the classroom environment. 289 undergraduates were randomly assigned to one of 7 experimental groups. All subjects were presented the same stimuli in one of 7 possible modes: (1) Printed Word, (2) Spoken Word, (3) Picture, (4) Printed Word + Spoken Word, (5) Picture + Spoken Word, (6) Picture + Printed Word, and (7) Printed Word, Picture + Spoken Word. 30 words, 6 from each of 5 categories, were presented to each group. A new stimulus was presented every 5 sec. Subjects were to recall (in writing) …
17

Katus, Tobias, Anna Grubert, and Martin Eimer. "Intermodal Attention Shifts in Multimodal Working Memory." Journal of Cognitive Neuroscience 29, no. 4 (2017): 628–36. http://dx.doi.org/10.1162/jocn_a_01072.

Abstract:
Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top-down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously …
18

Caron, Daniel P., Martha Rimniceanu, Anthony E. Scibelli, and Barry A. Trimmer. "Nociceptive neurons respond to multimodal stimuli in Manduca sexta." Journal of Experimental Biology 223, no. 3 (2020): jeb218859. http://dx.doi.org/10.1242/jeb.218859.

19

Sun, Ying, Liansheng Yao, and Qiufang Fu. "Crossmodal Correspondence Mediates Crossmodal Transfer from Visual to Auditory Stimuli in Category Learning." Journal of Intelligence 12, no. 9 (2024): 80. http://dx.doi.org/10.3390/jintelligence12090080.

Abstract:
This article investigated whether crossmodal correspondence, as a sensory translation phenomenon, can mediate crossmodal transfer from visual to auditory stimuli in category learning, and whether multimodal category learning can influence the crossmodal correspondence between auditory and visual stimuli. Experiment 1 showed that the category knowledge acquired from elevation stimuli affected the categorization of pitch stimuli when there were robust crossmodal correspondence effects between elevation and size, indicating that crossmodal transfer occurred between elevation and pitch stimuli. …
20

Smith, Kimberly G., and Daniel Fogerty. "Integration of Partial Information Within and Across Modalities: Contributions to Spoken and Written Sentence Recognition." Journal of Speech, Language, and Hearing Research 58, no. 6 (2015): 1805–17. http://dx.doi.org/10.1044/2015_jslhr-h-14-0272.

Abstract:
Purpose: This study evaluated the extent to which partial spoken or written information facilitates sentence recognition under degraded unimodal and multimodal conditions. Method: Twenty young adults with typical hearing completed sentence recognition tasks in unimodal and multimodal conditions across 3 proportions of preservation. In the unimodal condition, performance was examined when only interrupted text or interrupted speech stimuli were available. In the multimodal condition, performance was examined when both interrupted text and interrupted speech stimuli were concurrently presented. …
21

Gao, Chenguang, Hirotaka Uchitomi, and Yoshihiro Miyake. "Influence of Multimodal Emotional Stimulations on Brain Activity: An Electroencephalographic Study." Sensors 23, no. 10 (2023): 4801. http://dx.doi.org/10.3390/s23104801.

Abstract:
This study aimed to reveal the influence of emotional valence and sensory modality on neural activity in response to multimodal emotional stimuli using scalp EEG. In this study, 20 healthy participants completed the emotional multimodal stimulation experiment for three stimulus modalities (audio, visual, and audio-visual), all of which were drawn from the same video source with two emotional components (pleasure or unpleasure), and EEG data were collected using six experimental conditions and one resting state. We analyzed power spectral density (PSD) and event-related potential (ERP) components …
22

You, Xiaolu, Jianxin He, Nan Nan, et al. "Stretchable capacitive fabric electronic skin woven by electrospun nanofiber coated yarns for detecting tactile and multimodal mechanical stimuli." Journal of Materials Chemistry C 6, no. 47 (2018): 12981–91. http://dx.doi.org/10.1039/c8tc03631d.

23

Cosca, Charlotte M., Justin A. Haggard, Halli M. Kato, Eleni M. Sklavenitis, and Daniel T. Blumstein. "Do environmental stimuli modify sensitive plant (Mimosa pudica L.) risk assessment?" PLOS ONE 18, no. 12 (2023): e0294971. http://dx.doi.org/10.1371/journal.pone.0294971.

Abstract:
Although plants and animals both assess their environment and respond to stimuli, this reaction is considered a behavior in animals and a response in plants. Responses in plants are seen within various timescales, from the nanosecond a stimulus is presented to a lifelong progression. Within this study, we bridge the gap between animal behavioral studies and plant response. Sensitive plants (Mimosa pudica L.) are an ideal subject for this due to the rapid closure of their primary leaflets when touched. We designed a multimodal, or stress combination, experiment to test two hypotheses with sensitive …
24

Wayand, Joseph F., Daniel T. Levin, and D. Alexander Varakin. "Inattentional Blindness for a Noxious Multimodal Stimulus." American Journal of Psychology 118, no. 3 (2005): 339–52. http://dx.doi.org/10.2307/30039070.

Abstract:
Previous research has shown that people can miss salient stimuli outside the focus of their attention. This phenomenon, called inattentional blindness, typically is observed when people are given a task requiring them to focus their attention on one aspect of a complex visual scene. While participants are doing this task, an unexpected stimulus appears, and participants' awareness of it is tested shortly thereafter. In the present experiments, noxious bimodal stimuli were used as a test case to measure the strength of inattentional blindness. We tested whether participants would notice …
25

Bergerbest, Dafna, Dara G. Ghahremani, and John D. E. Gabrieli. "Neural Correlates of Auditory Repetition Priming: Reduced fMRI Activation in the Auditory Cortex." Journal of Cognitive Neuroscience 16, no. 6 (2004): 966–77. http://dx.doi.org/10.1162/0898929041502760.

Abstract:
Repetition priming refers to enhanced or biased performance with repeatedly presented stimuli. Modality-specific perceptual repetition priming has been demonstrated behaviorally for both visually and auditorily presented stimuli. In functional neuroimaging studies, repetition of visual stimuli has resulted in reduced activation in the visual cortex, as well as in multimodal frontal and temporal regions. The reductions in sensory cortices are thought to reflect plasticity in modality-specific neocortex. Unexpectedly, repetition of auditory stimuli has resulted in reduced activation in multimodal …
26

Honegger, Fabian, Yuan Feng, and Matthias Rauterberg. "Multimodality for Passive Experience: Effects of Visual, Auditory, Vibration and Draught Stimuli on Sense of Presence." JUCS - Journal of Universal Computer Science 27, no. 6 (2021): 582–608. https://doi.org/10.3897/jucs.68384.

Abstract:
Adequate use of multimodal stimuli plays a crucial role in helping to form the sense of presence within a virtual environment. While most presence research attempts to engage more sensory modalities to induce a higher sense of presence, this paper investigates the relevance of each sensory modality, and of different combinations, to the subjective sense of presence using a specifically designed scenario of a passive experience. We chose a neutral test scenario of "waiting at a train station while a train is passing by" to avoid the potential influence of story narrative on mental …
27

Kraut, Michael A., Sarah Kremen, Lauren R. Moo, Jessica B. Segal, Vincent Calhoun, and John Hart. "Object Activation in Semantic Memory from Visual Multimodal Feature Input." Journal of Cognitive Neuroscience 14, no. 1 (2002): 37–47. http://dx.doi.org/10.1162/089892902317205302.

Abstract:
The human brain's representation of objects has been proposed to exist as a network of coactivated neural regions present in multiple cognitive systems. However, it is not known if there is a region specific to the process of activating an integrated object representation in semantic memory from multimodal feature stimuli (e.g., picture–word). A previous study using word–word feature pairs as stimulus input showed that the left thalamus is integrally involved in object activation (Kraut, Kremen, Segal, et al., this issue). In the present study, participants were presented picture–word pairs that …
28

Sambo, Chiara F., and Bettina Forster. "Sustained Spatial Attention in Touch: Modality-Specific and Multimodal Mechanisms." The Scientific World Journal 11 (2011): 199–213. http://dx.doi.org/10.1100/tsw.2011.34.

Abstract:
Sustained attention to a body location results in enhanced processing of tactile stimuli presented at that location compared to another, unattended location. In this paper, we review studies investigating the neural correlates of sustained spatial attention in touch. These studies consistently show that activity within modality-specific somatosensory areas (SI and SII) is modulated by sustained tactile-spatial attention. Recent evidence suggests that these somatosensory areas may be recruited as part of a larger cortical network, also including higher-level multimodal regions involved in spatial …
29

He, Zhipeng, Zina Li, Fuzhou Yang, et al. "Advances in Multimodal Emotion Recognition Based on Brain–Computer Interfaces." Brain Sciences 10, no. 10 (2020): 687. http://dx.doi.org/10.3390/brainsci10100687.

Abstract:
With the continuous development of portable noninvasive human sensor technologies such as brain–computer interfaces (BCI), multimodal emotion recognition has attracted increasing attention in the area of affective computing. This paper primarily discusses the progress of research into multimodal emotion recognition based on BCI and reviews three types of multimodal affective BCI (aBCI): aBCI based on a combination of behavior and brain signals, aBCI based on various hybrid neurophysiology modalities, and aBCI based on heterogeneous sensory stimuli. For each type of aBCI, we further review several …
30

Capato, Tamine T. C., Nienke M. de Vries, Joanna IntHout, et al. "Multimodal Balance Training Supported by Rhythmic Auditory Stimuli in Parkinson Disease: Effects in Freezers and Nonfreezers." Physical Therapy 100, no. 11 (2020): 2023–34. http://dx.doi.org/10.1093/ptj/pzaa146.

Abstract:
Objective: To fulfill the potential of nonpharmacological interventions for people with Parkinson disease (PD), individually tailored treatment is needed. Multimodal balance training supported by rhythmic auditory stimuli (RAS) can improve balance and gait in people with PD. The purpose of this study was to determine whether both freezers and nonfreezers benefit. Methods: A secondary analysis was conducted on a large randomized controlled trial that included 154 patients with PD (Hoehn & Yahr Stages 1–3 while ON-medication) who were assigned randomly to 3 groups: (1) multimodal balance …
31

Mouraux, A., and G. D. Iannetti. "Nociceptive Laser-Evoked Brain Potentials Do Not Reflect Nociceptive-Specific Neural Activity." Journal of Neurophysiology 101, no. 6 (2009): 3258–69. http://dx.doi.org/10.1152/jn.91181.2008.

Abstract:
Brief radiant laser pulses can be used to activate cutaneous Aδ and C nociceptors selectively and elicit a number of transient brain responses [laser-evoked potentials (LEPs)] in the ongoing EEG. LEPs have been used extensively in the past 30 years to gain knowledge about the cortical mechanisms underlying nociception and pain in humans, by assuming that they reflect at least neural activities uniquely or preferentially involved in processing nociceptive input. Here, by applying a novel blind source separation algorithm (probabilistic independent component analysis) to 124-channel event-related …
32

Jantvik, Tamas, Lennart Gustafsson, and Andrew P. Papliński. "A Self-Organized Artificial Neural Network Architecture for Sensory Integration with Applications to Letter-Phoneme Integration." Neural Computation 23, no. 8 (2011): 2101–39. http://dx.doi.org/10.1162/neco_a_00149.

Abstract:
The multimodal self-organizing network (MMSON), an artificial neural network architecture carrying out sensory integration, is presented here. The architecture is designed using neurophysiological findings and imaging studies that pertain to sensory integration, and consists of interconnected lattices of artificial neurons. In this artificial neural architecture, the degree of recognition of stimuli, that is, the perceived reliability of stimuli in the various subnetworks, is included in the computation. The MMSON's behavior is compared to aspects of brain function that deal with sensory integration …
34

Worley, Alan, Kirubin Pillay, Maria M. Cobo, et al. "The PiNe box: Development and validation of an electronic device to time-lock multimodal responses to sensory stimuli in hospitalised infants." PLOS ONE 18, no. 7 (2023): e0288488. http://dx.doi.org/10.1371/journal.pone.0288488.

Abstract:
Recording multimodal responses to sensory stimuli in infants provides an integrative approach to investigate the developing nervous system. Accurate time-locking across modalities is essential to ensure that responses are interpreted correctly, and could also improve clinical care, for example, by facilitating automatic and objective multimodal pain assessment. Here we develop and assess a system to time-lock stimuli (including clinically-required heel lances and experimental visual, auditory and tactile stimuli) to electrophysiological research recordings and data recorded directly from a hospital …
35

Kiose, Mariya, Vadim Potekhin, and Oleg Zubkov. "Experimental Methods of Exploring Multimodal Discourse: Crossmodal Alignment." Vestnik Volgogradskogo gosudarstvennogo universiteta. Serija 2. Jazykoznanije 23, no. 5 (2024): 149–60. https://doi.org/10.15688/jvolsu2.2024.5.12.

Abstract:
The study advances the Crossmodal Alignment Framework to explore multimodal discourse in its three formats, semiotic, communicative and perceptive, via multimodal experiment. It considers the alignment patterns obtained from two semiotic modes (text and image), transferred in two communicative modes (speech and gesture), sensed by two perception modes (visual and audial). The common research framework determines the patterns as modulated by discourse tasks. The study features the results of multimodal experiments with the participants engaged in three discourse tasks: 1) receptive, which presume …
36

Guo, Zhuen, Mingqing Yang, Li Lin, et al. "E-MFNN: an emotion-multimodal fusion neural network framework for emotion recognition." PeerJ Computer Science 10 (April 19, 2024): e1977. http://dx.doi.org/10.7717/peerj-cs.1977.

Abstract:
Emotional recognition is a pivotal research domain in computer and cognitive science. Recent advancements have led to various emotion recognition methods, leveraging data from diverse sources like speech, facial expressions, electroencephalogram (EEG), electrocardiogram, and eye tracking (ET). This article introduces a novel emotion recognition framework, primarily targeting the analysis of users' psychological reactions and stimuli. It is important to note that the stimuli eliciting emotional responses are as critical as the responses themselves. Hence, our approach synergizes stimulus data with …
37

Portnova, Galina, Aleksandra Maslennikova, Natalya Zakharova, and Olga Martynova. "The Deficit of Multimodal Perception of Congruent and Non-Congruent Fearful Expressions in Patients with Schizophrenia: The ERP Study." Brain Sciences 11, no. 1 (2021): 96. http://dx.doi.org/10.3390/brainsci11010096.

Abstract:
Emotional dysfunction, including flat affect and emotional perception deficits, is a specific symptom of schizophrenia disorder. We used a modified multimodal odd-ball paradigm with fearful facial expressions accompanied by congruent and non-congruent emotional vocalizations (sounds of women screaming and laughing) to investigate the impairment of emotional perception and reactions to other people's emotions in schizophrenia. We compared subjective ratings of emotional state and event-related potentials (ERPs) in response to congruent and non-congruent stimuli in patients with schizophrenia and …
38

Regenbogen, Christina, Thilo Kellermann, Janina Seubert, et al. "Neural responses to dynamic multimodal stimuli and pathology-specific impairments of social cognition in schizophrenia and depression." British Journal of Psychiatry 206, no. 3 (2015): 198–205. http://dx.doi.org/10.1192/bjp.bp.113.143040.

Full text
Abstract:
Background: Individuals with schizophrenia and people with depression both show abnormal behavioural and neural responses when perceiving and responding to emotional stimuli, but pathology-specific differences and commonalities remain mostly unclear. Aims: To directly compare empathic responses to dynamic multimodal emotional stimuli in a group with schizophrenia and a group with depression, and to investigate their neural correlates using functional magnetic resonance imaging (fMRI). Method: The schizophrenia group (n = 20), the depression group (n = 24) and a control group (n = 24) were presented wi
APA, Harvard, Vancouver, ISO, and other styles
39

Paschew, Georgi, Rene Körbitz, and Andreas Richter. "Multimodal, High-Resolution Imaging System Based on Stimuli-Responsive Polymers." Advances in Science and Technology 82 (September 2012): 44–49. http://dx.doi.org/10.4028/www.scientific.net/ast.82.44.

Full text
Abstract:
Providing realistic impressions of a virtual ambience for interaction with human auditory, visual, and tactile perception is one of the core challenges of modern imaging systems. However, particularly tactile displays with high spatial resolution implemented as a large-scale integrated microelectromechanical system are not yet realized. Here, we report on a multimodal display with thousands of actuator pixels, which generates both visual and tactile impressions of a virtual surface. The fully polymeric, monolithically integrated device consists of an actuator array made from poly(N-isoprop
APA, Harvard, Vancouver, ISO, and other styles
40

Sun, Yanjia, Hasan Ayaz, and Ali N. Akansu. "Multimodal Affective State Assessment Using fNIRS + EEG and Spontaneous Facial Expression." Brain Sciences 10, no. 2 (2020): 85. http://dx.doi.org/10.3390/brainsci10020085.

Full text
Abstract:
Human facial expressions are regarded as a vital indicator of one’s emotion and intention, and even reveal the state of health and wellbeing. Emotional states have been associated with information processing within and between subcortical and cortical areas of the brain, including the amygdala and prefrontal cortex. In this study, we evaluated the relationship between spontaneous human facial affective expressions and multi-modal brain activity measured via non-invasive and wearable sensors: functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) signals. The affective s
APA, Harvard, Vancouver, ISO, and other styles
41

Moodley, Thashini, and Moganavelli Singh. "Current Stimuli-Responsive Mesoporous Silica Nanoparticles for Cancer Therapy." Pharmaceutics 13, no. 1 (2021): 71. http://dx.doi.org/10.3390/pharmaceutics13010071.

Full text
Abstract:
With increasing incidence and mortality rates, cancer remains one of the most devastating global non-communicable diseases. Restricted dosages and decreased bioavailability often result in lower therapeutic outcomes, triggering the development of resistance to conventionally used drug/gene therapeutics. The development of novel therapeutic strategies using multimodal nanotechnology to enhance specificity, increase bioavailability and biostability of therapeutics with favorable outcomes is critical. Gated vectors that respond to endogenous or exogenous stimuli, and promote targeted tumor deli
APA, Harvard, Vancouver, ISO, and other styles
43

McClure, John P., and Pierre-Olivier Polack. "Pure tones modulate the representation of orientation and direction in the primary visual cortex." Journal of Neurophysiology 121, no. 6 (2019): 2202–14. http://dx.doi.org/10.1152/jn.00069.2019.

Full text
Abstract:
Multimodal sensory integration facilitates the generation of a unified and coherent perception of the environment. It is now well established that unimodal sensory perceptions, such as vision, are improved in multisensory contexts. Whereas multimodal integration is primarily performed by dedicated multisensory brain regions such as the association cortices or the superior colliculus, recent studies have shown that multisensory interactions also occur in primary sensory cortices. In particular, sounds were shown to modulate the responses of neurons located in layers 2/3 (L2/3) of the mouse prim
APA, Harvard, Vancouver, ISO, and other styles
44

Yang, Min, Hao Zhao, Ziqi Zhang, et al. "CO/light dual-activatable Ru(ii)-conjugated oligomer agent for lysosome-targeted multimodal cancer therapeutics." Chemical Science 12, no. 34 (2021): 11515–24. http://dx.doi.org/10.1039/d1sc01317c.

Full text
Abstract:
Anticancer therapeutics combining lysosome disruption, PDT, and chemotherapy based on the Ru-OTE complex were achieved, providing a new strategy for developing multimodal, stimuli-activatable, subcellular organelle-targeted cancer therapeutics.
APA, Harvard, Vancouver, ISO, and other styles
45

Fraix, Aurore, and Salvatore Sortino. "Combination of PDT photosensitizers with NO photodonors." Photochemical &amp; Photobiological Sciences 17, no. 11 (2018): 1709–27. http://dx.doi.org/10.1039/c8pp00272j.

Full text
Abstract:
The appropriate assembly of PDT photosensitizers and NO photodonors may open up intriguing avenues towards new and still underexplored multimodal therapies not based on “conventional” drugs but entirely controlled by light stimuli.
APA, Harvard, Vancouver, ISO, and other styles
46

Schneider, Till R., Andreas K. Engel, and Stefan Debener. "Multisensory Identification of Natural Objects in a Two-Way Crossmodal Priming Paradigm." Experimental Psychology 55, no. 2 (2008): 121–32. http://dx.doi.org/10.1027/1618-3169.55.2.121.

Full text
Abstract:
The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data was obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by N = 56 participants (Study 1). This multimodal stimulus set was employed in two subsequent crossmodal priming experiments t
APA, Harvard, Vancouver, ISO, and other styles
47

Shetty, Yatiraj, Shubham Mehta, Diep Tran, Bhavica Soni, and Troy McDaniel. "Emotional Response to Vibrothermal Stimuli." Applied Sciences 11, no. 19 (2021): 8905. http://dx.doi.org/10.3390/app11198905.

Full text
Abstract:
Emotional response to haptic stimuli is a widely researched topic, but the combination of vibrotactile and thermal stimuli requires more attention. The purpose of this study is to investigate emotional response to vibrothermal stimulation by combining spatiotemporal vibrotactile stimulus with dynamic thermal stimulus (hot or cold). The vibrotactile and thermal stimuli were produced using the Haptic Chair and the Embr wave thermal bracelet, respectively. The results show that spatiotemporal vibrotactile patterns and their duration, and dynamic thermal stimulation, have an independent effect on
APA, Harvard, Vancouver, ISO, and other styles
48

Gil-Guevara, Oswaldo, and Andre J. Riveros. "Stimulus intensity and temporal configuration interact during bimodal learning and memory in honey bees." PLOS ONE 19, no. 10 (2024): e0309129. http://dx.doi.org/10.1371/journal.pone.0309129.

Full text
Abstract:
Multimodal integration is a core neural process with a keen relevance during ecological tasks requiring learning and memory, such as foraging. The benefits of learning multimodal signals imply solving whether the components come from a single event. This challenge presumably depends on the timing and intensity of the stimuli. Here, we used simultaneous and alternate presentations of olfactory and visual stimuli, at low and high intensities, to understand how temporal and intensity variations affect the learning of a bimodal stimulus and its components. We relied on the conditioning of the prob
APA, Harvard, Vancouver, ISO, and other styles
49

Ali, Amaal Abdulraqeb, Waad H. Abuwatfa, Mohammad H. Al-Sayah, and Ghaleb A. Husseini. "Gold-Nanoparticle Hybrid Nanostructures for Multimodal Cancer Therapy." Nanomaterials 12, no. 20 (2022): 3706. http://dx.doi.org/10.3390/nano12203706.

Full text
Abstract:
With the urgent need for bio-nanomaterials to improve the currently available cancer treatments, gold nanoparticle (GNP) hybrid nanostructures are rapidly rising as promising multimodal candidates for cancer therapy. Gold nanoparticles (GNPs) have been hybridized with several nanocarriers, including liposomes and polymers, to achieve chemotherapy, photothermal therapy, radiotherapy, and imaging using a single composite. The GNP nanohybrids used for targeted chemotherapy can be designed to respond to external stimuli such as heat or internal stimuli such as intratumoral pH. Despite their promis
APA, Harvard, Vancouver, ISO, and other styles
50

Macuch Silva, Vinicius, Judith Holler, Asli Ozyurek, and Seán G. Roberts. "Multimodality and the origin of a novel communication system in face-to-face interaction." Royal Society Open Science 7, no. 1 (2020): 182056. http://dx.doi.org/10.1098/rsos.182056.

Full text
Abstract:
Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalization and therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how vocal and visual modalities (i.e. gesture) are employed to communicate about familiar concepts when participants cannot use th
APA, Harvard, Vancouver, ISO, and other styles