Journal articles on the topic "Cross-Modal effect"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Check the top 50 scholarly journal articles on the topic "Cross-Modal effect".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the annotation of the work online, provided the relevant parameters are available in the metadata.

Browse journal articles from many different disciplines and compile your bibliography correctly.

1

Kitamura, Emi, Katsuya Miyashita, Kenji Ozawa, Masaki Omata and Atsumi Imamiya. "Cross-Modality Between Haptic and Auditory Roughness with a Force Feedback Device". Journal of Robotics and Mechatronics 18, no. 4 (August 20, 2006): 450–57. http://dx.doi.org/10.20965/jrm.2006.p0450.

Abstract:
Haptic roughness is basic to accurately identifying the texture of an object. When we manipulate everyday objects, their surfaces emit sound. Cross-modal effects between haptic and auditory roughness must thus be considered in realizing a multimodal human-computer interface. We conducted two experiments to accumulate basic data for the cross-modality using a force feedback device. In one experiment, we studied the cross-modal effect of auditory roughness on haptic roughness. We studied the effect of haptic roughness on auditory roughness in the other experiment. Results showed that cross-modal effects were mutually enhancing when their single-modal roughness was relatively high.
2

Maki, Takuma, and Hideyoshi Yanagisawa. "A Methodology for Multisensory Product Experience Design Using Cross-modal Effect: A Case of SLR Camera". Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 3801–10. http://dx.doi.org/10.1017/dsi.2019.387.

Abstract:
Throughout the course of product experience, a user employs multiple senses, including vision, hearing, and touch. Previous cross-modal studies have shown that multiple senses interact with each other and change perceptions. In this paper, we propose a methodology for designing multisensory product experiences by applying cross-modal effect to simultaneous stimuli. In this methodology, we first obtain a model of the comprehensive cognitive structure of user's multisensory experience by applying Kansei modeling methodology and extract opportunities of cross-modal effect from the structure. Second, we conduct experiments on these cross-modal effects and formulate them by obtaining a regression curve through analysis. Finally, we find solutions to improve the product sensory experience from the regression model of the target cross-modal effects. We demonstrated the validity of the methodology with SLR cameras as a case study, which is a typical product with multisensory perceptions.
3

Xia, Jing, Wei Zhang, Yizhou Jiang, You Li and Qi Chen. "Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects". Cortex 106 (September 2018): 47–64. http://dx.doi.org/10.1016/j.cortex.2018.05.003.

4

Kang, Cheng, Nan Ye, Fangwen Zhang, Yanwen Wu, Guichun Jin, Jihong Xie and Lin Du. "The cross-modal affective priming effect: Effects of the valence and arousal of primes". Social Behavior and Personality: an international journal 49, no. 12 (December 1, 2021): 1–11. http://dx.doi.org/10.2224/sbp.10202.

Abstract:
Although studies have investigated the influence of the emotionality of primes on the cross-modal affective priming effect, it is unclear whether this effect is due to the contribution of the arousal or the valence of primes. We explored how the valence and arousal of primes influenced the cross-modal affective priming effect. In Experiment 1 we manipulated the valence of primes (positive and negative) that were matched by arousal. In Experiments 2 and 3 we manipulated the arousal of primes under the conditions of positive and negative valence, respectively. Affective words were used as auditory primes and affective faces were used as visual targets in a priming task. The results suggest that the valence of primes modulated the cross-modal affective priming effect but that the arousal of primes did not influence the priming effect. Only when the priming stimuli were positive did the cross-modal affective priming effect occur, but negative primes did not produce a priming effect. In addition, for positive but not negative primes, the arousal of primes facilitated the processing of subsequent targets. Our findings have great significance for understanding the interaction of different modal affective information.
5

Li, A., and E. H. Dowell. "Asymptotic modal analysis of dynamical systems: the effect of modal cross-correlation". Journal of Sound and Vibration 276, no. 1-2 (September 2004): 65–80. http://dx.doi.org/10.1016/j.jsv.2003.07.031.

6

Hanauer, Julie B., and Patricia J. Brooks. "Developmental change in the cross-modal Stroop effect". Perception & Psychophysics 65, no. 3 (April 2003): 359–66. http://dx.doi.org/10.3758/bf03194567.

7

Tagliabue, Mariaelena, Marco Zorzi and Carlo Umiltà. "Cross-modal re-mapping influences the Simon effect". Memory & Cognition 30, no. 1 (January 2002): 18–23. http://dx.doi.org/10.3758/bf03195261.

8

Patchay, Sandhiran, Umberto Castiello and Patrick Haggard. "A cross-modal interference effect in grasping objects". Psychonomic Bulletin & Review 10, no. 4 (December 2003): 924–31. http://dx.doi.org/10.3758/bf03196553.

9

Kida, Tetsuo, Koji Inui, Emi Tanaka and Ryusuke Kakigi. "Dynamics of Within-, Inter-, and Cross-Modal Attentional Modulation". Journal of Neurophysiology 105, no. 2 (February 2011): 674–86. http://dx.doi.org/10.1152/jn.00807.2009.

Abstract:
Numerous studies have demonstrated effects of spatial attention within single sensory modalities (within-modal spatial attention) and the effect of directing attention to one sense compared with the other senses (intermodal attention) on cortical neuronal activity. Furthermore, recent studies have been revealing that the effects of spatial attention directed to a certain location in a certain sense spread to the other senses at the same location in space (cross-modal spatial attention). The present study used magnetoencephalography to examine the temporal dynamics of the effects of within-modal and cross-modal spatial and intermodal attention on cortical processes responsive to visual stimuli. Visual or tactile stimuli were randomly presented on the left or right side at a random interstimulus interval and subjects directed attention to the left or right when vision or touch was a task-relevant modality. Sensor-space analysis showed that a response around the occipitotemporal region at around 150 ms after visual stimulation was significantly enhanced by within-modal, cross-modal spatial, and intermodal attention. A later response over the right frontal region at around 200 ms was enhanced by within-modal spatial and intermodal attention, but not by cross-modal spatial attention. These effects were estimated to originate from the occipitotemporal and lateral frontal areas, respectively. Thus the results suggest different spatiotemporal dynamics of neural representations of cross-modal attention and intermodal or within-modal attention.
10

Gu, Jiyou, and Huiqin Dong. "The effect of gender stereotypes on cross-modal spatial attention". Social Behavior and Personality: an international journal 49, no. 9 (September 1, 2021): 1–6. http://dx.doi.org/10.2224/sbp.10753.

Abstract:
Using a spatial-cueing paradigm in which trait words were set as visual cues and gender words were set as auditory targets, we examined whether cross-modal spatial attention was influenced by gender stereotypes. Results of an experiment conducted with 24 participants indicate that they tended to focus on targets in the valid-cue condition (i.e., the cues located at the same position as targets), regardless of the modality of cues and targets, which is consistent with the cross-modal attention effect found in previous studies. Participants tended to focus on targets that were stereotype-consistent with cues only when the cues were valid, which shows that stereotype-consistent information facilitated visual–auditory cross-modal spatial attention. These results suggest that cognitive schema, such as gender stereotypes, have an effect on cross-modal spatial attention.
11

Vallet, Guillaume, Lionel Brunel and Rémy Versace. "The Perceptual Nature of the Cross-Modal Priming Effect". Experimental Psychology 57, no. 5 (December 1, 2010): 376–82. http://dx.doi.org/10.1027/1618-3169/a000045.

Abstract:
The aim of this study was to demonstrate that the cross-modal priming effect is perceptual and therefore consistent with the idea that knowledge is modality dependent. We used a two-way cross-modal priming paradigm in two experiments. These experiments were constructed on the basis of a two-phase priming paradigm. In the study phase of Experiment 1, participants had to categorize auditory primes as “animal” or “artifact”. In the test phase, they had to perform the same categorization task with visual targets which corresponded either to the auditory primes presented in the study phase (old items) or to new stimuli (new items). To demonstrate the perceptual nature of the cross-modal priming effect, half of the auditory primes were presented with a visual mask (old-masked items). In the second experiment, the visual stimuli were used as primes and the auditory stimuli as targets, and half of the visual primes were presented with an auditory mask (a white noise). We hypothesized that if the cross-modal priming effect results from an activation of modality-specific representations, then the mask should interfere with the priming effect. In both experiments, the results corroborated our predictions. In addition, we observed a cross-modal priming effect from pictures to sounds in a long-term paradigm for the first time.
12

Sato, Naoto, Mana Miyamoto, Risa Santa, Ayaka Sasaki and Kenichi Shibuya. "Cross-modal and subliminal effects of smell and color". PeerJ 11 (February 17, 2023): e14874. http://dx.doi.org/10.7717/peerj.14874.

Abstract:
In the present study, we examined whether the cross-modal effect can be obtained between odors and colors, which has been confirmed under olfactory recognizable conditions and also occurs under unrecognizable conditions. We used two flavors of red fruits such as strawberries and tomatoes for this purpose. We also aimed to compare whether similar cross-modal effects could be achieved by setting the flavors at recognizable (liminal) and unrecognizable (subliminal) concentrations in the experiment. One flavor at a normal concentration (0.1%, Liminal condition) and one at a concentration below the subliminal threshold (0.015%, Subliminal condition), were presented, and the color that resembled the smell most closely from among the 10 colors, was selected by participants. Except for the subliminal tomato condition, each odor was significantly associated with at least one color (p < 0.01). Participants selected pink and red for liminal strawberry (0.1%) (p < 0.05), pink for subliminal strawberry (0.015%) (p < 0.05), and orange for liminal tomato (0.1%) (p < 0.05), but there was no color selected for subliminal tomato (0.015%) (p < 0.05). The results of this study suggest that the flavor of tomato produced a cross-modal effect in liminal conditions, but not in subliminal conditions. On the other hand, the results of the present study suggest that the flavor of strawberries produces a cross-modal effect even under subliminal conditions. This study showed that cross-modal effects might exist, even at unrecognizable levels of flavor.
13

Hanada, G. M., J. Ahveninen, F. J. Calabro, A. Yengo-Kahn and L. M. Vaina. "Cross-Modal Cue Effects in Motion Processing". Multisensory Research 32, no. 1 (2019): 45–65. http://dx.doi.org/10.1163/22134808-20181313.

Abstract:
The everyday environment brings to our sensory systems competing inputs from different modalities. The ability to filter these multisensory inputs in order to identify and efficiently utilize useful spatial cues is necessary to detect and process the relevant information. In the present study, we investigate how feature-based attention affects the detection of motion across sensory modalities. We were interested to determine how subjects use intramodal, cross-modal auditory, and combined audiovisual motion cues to attend to specific visual motion signals. The results showed that in most cases, both the visual and the auditory cues enhance feature-based orienting to a transparent visual motion pattern presented among distractor motion patterns. Whereas previous studies have shown cross-modal effects of spatial attention, our results demonstrate a spread of cross-modal feature-based attention cues, which have been matched for the detection threshold of the visual target. These effects were very robust in comparisons of the effects of valid vs. invalid cues, as well as in comparisons between cued and uncued valid trials. The effect of intramodal visual, cross-modal auditory, and bimodal cues also increased as a function of motion-cue salience. Our results suggest that orienting to visual motion patterns among distracters can be facilitated not only by intramodal priors, but also by feature-based cross-modal information from the auditory system.
14

Sun, Xunwei, and Qiufang Fu. "The Visual Advantage Effect in Comparing Uni-Modal and Cross-Modal Probabilistic Category Learning". Journal of Intelligence 11, no. 12 (November 27, 2023): 218. http://dx.doi.org/10.3390/jintelligence11120218.

Abstract:
People rely on multiple learning systems to complete weather prediction (WP) tasks with visual cues. However, how people perform in audio and audiovisual modalities remains elusive. The present research investigated how the cue modality influences performance in probabilistic category learning and conscious awareness about the category knowledge acquired. A modified weather prediction task was adopted, in which the cues included two dimensions from visual, auditory, or audiovisual modalities. The results of all three experiments revealed better performances in the visual modality relative to the audio and audiovisual modalities. Moreover, participants primarily acquired unconscious knowledge in the audio and audiovisual modalities, while conscious knowledge was acquired in the visual modality. Interestingly, factors such as the amount of training, the complexity of visual stimuli, and the number of objects to which the two cues belonged influenced the amount of conscious knowledge acquired but did not change the visual advantage effect. These findings suggest that individuals can learn probabilistic cues and category associations across different modalities, but a robust visual advantage persists. Specifically, visual associations can be learned more effectively, and are more likely to become conscious. The possible causes and implications of these effects are discussed.
15

Scurry, Alexandra N., Dustin Dutcher, John S. Werner and Fang Jiang. "Age-Related Effects on Cross-Modal Duration Perception". Multisensory Research 32, no. 8 (December 11, 2019): 693–714. http://dx.doi.org/10.1163/22134808-20191461.

Abstract:
Reliable duration perception of external events is necessary to coordinate perception with action, precisely discriminate speech, and for other daily functions. Visual duration perception can be heavily influenced by concurrent auditory signals; however, age-related effects on this process have received minimal attention. In the present study, we examined the effect of aging on duration perception by quantifying (1) duration discrimination thresholds, (2) auditory temporal dominance, and (3) visual duration expansion/compression percepts induced by an accompanying auditory stimulus of longer/shorter duration. Duration discrimination thresholds were significantly greater for visual than auditory tasks in both age groups, however there was no effect of age. While the auditory modality retained dominance in duration perception with age, older adults still performed worse than young adults when comparing durations of two target stimuli (e.g., visual) in the presence of distractors from the other modality (e.g., auditory). Finally, both age groups perceived similar visual duration compression, whereas older adults exhibited visual duration expansion over a wider range of auditory durations compared to their younger counterparts. Results are discussed in terms of multisensory integration and possible decision strategies that change with age.
16

Fucci, Donald, Daniel Harris, Linda Petrosino and Elizabeth Randolph-Tyler. "Auditory Psychophysical Scaling Exposure Effects: Magnitude Estimation and Cross-Modal Matching". Perceptual and Motor Skills 66, no. 2 (April 1988): 643–48. http://dx.doi.org/10.2466/pms.1988.66.2.643.

Abstract:
The purpose of the present study was to investigate possible effects of exposure upon suprathreshold psychological responses when auditory magnitude estimation and cross-modal matching with audition as the standard are conducted within the same experiment. Four groups of 10 subjects each whose over-all age range was 18 to 23 yr. were employed. During the cross-modal matching task the Groups 1 and 2 subjects adjusted a vibrotactile stimulus presented to the dorsal surface of the tongue and the Groups 3 and 4 subjects adjusted a vibrotactile stimulus presented to the thenar eminence of the right hand to match binaurally presented auditory stimuli. The magnitude-estimation task was conducted before the cross-modal matching task for Groups 1 and 3 and the cross-modal matching task was conducted before the magnitude-estimation task for Groups 2 and 4. The psychophysical methods of magnitude estimation and cross-modal matching showed no effect of one upon the other when used in the same experiment.
17

Farnè, Alessandro, and Elisabetta Làdavas. "Auditory Peripersonal Space in Humans". Journal of Cognitive Neuroscience 14, no. 7 (October 1, 2002): 1030–43. http://dx.doi.org/10.1162/089892902320474481.

Abstract:
In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.
18

Kaiser, Saskia, Axel Buchner, Laura Mieth and Raoul Bell. "Negative target stimuli do not influence cross-modal auditory distraction". PLOS ONE 17, no. 10 (October 7, 2022): e0274803. http://dx.doi.org/10.1371/journal.pone.0274803.

Abstract:
The present study served to test whether emotion modulates auditory distraction in a serial-order reconstruction task. If auditory distraction results from an attentional trade-off between the targets and distractors, auditory distraction should decrease when attention is focused on targets with high negative arousal. Two experiments (with a total N of 284 participants) were conducted to test whether auditory distraction is influenced by target emotion. In Experiment 1 it was examined whether two benchmark effects of auditory distraction—the auditory-deviant effect and the changing-state effect—differ as a function of whether negative high-arousal targets or neutral low-arousal targets are used. Experiment 2 complements Experiment 1 by testing whether target emotion modulates the disruptive effects of reversed sentential speech and steady-state distractor sequences relative to a quiet control condition. Even though the serial order of negative high-arousal targets was better remembered than that of neutral low-arousal targets, demonstrating an emotional facilitation effect on serial-order reconstruction, auditory distraction was not modulated by target emotion. The results provide support of the automatic-capture account according to which auditory distraction, regardless of the specific type of auditory distractor sequence that has to be ignored, is a fundamentally stimulus-driven effect that is rooted in the automatic processing of the to-be-ignored auditory stream and remains unaffected by emotional-motivational factors.
19

Fucci, Donald, Daniel Harris, Linda Petrosino and Elizabeth Randolph-Tyler. "Exposure Effects on the Psychophysical Scaling Methods of Magnitude Estimation and Cross-Modal Matching for Vibrotactile Stimulation of the Tongue and Hand". Perceptual and Motor Skills 66, no. 2 (April 1988): 479–85. http://dx.doi.org/10.2466/pms.1988.66.2.479.

Abstract:
The purpose of the present study was to investigate possible effects of exposure upon psychophysical scaling responses when vibrotactile magnitude estimation and cross-modal matching are conducted within the same experiment. Four groups of 10 subjects each, with an over-all age range of 18–23 yr., were employed. Groups 1 and 2 performed magnitude estimation for lingual vibrotaction and cross-modal matching with the lingual vibrotactile stimulus as the standard. Group 1 received the magnitude-estimation task first and Group 2 received the cross-modal-matching task first. Groups 3 and 4 performed magnitude estimation for vibrotaction applied to the thenar eminence of the hand and cross-modal matching with the vibrotactile stimulus applied to the thenar eminence of the hand as the standard. Group 3 received the magnitude-estimation task first and Group 4 received the cross-modal-matching task first. The psychophysical scaling methods of magnitude estimation and cross-modal matching showed very little exposure effect of one upon the other when used in the same experiment. Also, magnitude scaling responses tended to increase more rapidly with increases in vibrotactile stimulus intensity when the test site was the thenar eminence of the hand as opposed to the dorsum of the tongue.
20

Shcherbakova, O. "The Effect of Induced Emotional States on The Magnitude of Cross-Modal Correspondence Effect". Psikhologicheskii zhurnal 44, no. 1 (2023): 30. http://dx.doi.org/10.31857/s020595920023642-2.

Abstract:
Cross-modal correspondence effect (i.e., facilitated processing of congruent stimuli from different modalities) occurs not only when simple multi-modal sensory stimuli are processed together, but also during their simultaneous processing with words with emotional and spatial connotations. We tested a hypothesis that the magnitude of cross-modal correspondence effect, arising from concurrent processing of basic sensory and verbal stimuli, is differentially modulated by individual’s emotional state. Thirty-six volunteers (26 females, 18–34 years old) watched videos that evoked positive, negative, or neutral emotional states. This was followed by the main task in which they were presented with sounds of different pitch (low: 1000 Hz; high: 2000 Hz) simultaneously with words that differed in their emotional valence and were associated with different parts of space (low/high). The participant’s task was to identify the pitch (low/high) of the non-verbal sound stimuli. Two-way mixed ANOVA and subsequent pairwise comparisons (Student's t-test for dependent samples) were used to compare both mean reaction times and estimated parameters of the ex-Gaussian distribution. The results showed that the audiovisual correspondence effect became manifested in faster responses to congruent stimulus combinations compared with non-congruent ones (t(35) = -3.20, p = .005, dz = -0.53, 95% CI [-0.89, -0.18]). However, we did not find a large size effect of the induced emotional state on the magnitude of this correspondence effect (F(4, 68) = 0.49, p = 0.744, η² = .001). This result may be explained either by robustness of cross-modal correspondence effect and its resilience to emotional influence or by specific limitations of present study design.
21

Lo Verde, Luca, Maria Concetta Morrone and Claudia Lunghi. "Early Cross-modal Plasticity in Adults". Journal of Cognitive Neuroscience 29, no. 3 (March 2017): 520–29. http://dx.doi.org/10.1162/jocn_a_01067.

Abstract:
It is known that, after a prolonged period of visual deprivation, the adult visual cortex can be recruited for nonvisual processing, reflecting cross-modal plasticity. Here, we investigated whether cross-modal plasticity can occur at short timescales in the typical adult brain by comparing the interaction between vision and touch during binocular rivalry before and after a brief period of monocular deprivation, which strongly alters ocular balance favoring the deprived eye. While viewing dichoptically two gratings of orthogonal orientation, participants were asked to actively explore a haptic grating congruent in orientation to one of the two rivalrous stimuli. We repeated this procedure before and after 150 min of monocular deprivation. We first confirmed that haptic stimulation interacted with vision during rivalry promoting dominance of the congruent visuo-haptic stimulus and that monocular deprivation increased the deprived eye and decreased the nondeprived eye dominance. Interestingly, after deprivation, we found that the effect of touch did not change for the nondeprived eye, whereas it disappeared for the deprived eye, which was potentiated after deprivation. The absence of visuo-haptic interaction for the deprived eye lasted for over 1 hr and was not attributable to a masking induced by the stronger response of the deprived eye as confirmed by a control experiment. Taken together, our results demonstrate that the adult human visual cortex retains a high degree of cross-modal plasticity, which can occur even at very short timescales.
22

Hairston, W. D., M. T. Wallace, J. W. Vaughan, B. E. Stein, J. L. Norris and J. A. Schirillo. "Visual Localization Ability Influences Cross-Modal Bias". Journal of Cognitive Neuroscience 15, no. 1 (January 1, 2003): 20–29. http://dx.doi.org/10.1162/089892903321107792.

Abstract:
The ability of a visual signal to influence the localization of an auditory target (i.e., “cross-modal bias”) was examined as a function of the spatial disparity between the two stimuli and their absolute locations in space. Three experimental issues were examined: (a) the effect of a spatially disparate visual stimulus on auditory localization judgments; (b) how the ability to localize visual, auditory, and spatially aligned multi-sensory (visual-auditory) targets is related to cross-modal bias, and (c) the relationship between the magnitude of cross-modal bias and the perception that the two stimuli are spatially “unified” (i.e., originate from the same location). Whereas variability in localization of auditory targets was large and fairly uniform for all tested locations, variability in localizing visual or spatially aligned multisensory targets was much smaller, and increased with increasing distance from the midline. This trend proved to be strongly correlated with biasing effectiveness, for although visual-auditory bias was unexpectedly large in all conditions tested, it decreased progressively (as localization variability increased) with increasing distance from the mid-line. Thus, central visual stimuli had a substantially greater biasing effect on auditory target localization than did more peripheral visual stimuli. It was also apparent that cross-modal bias decreased as the degree of visual-auditory disparity increased. Consequently, the greatest visual-auditory biases were obtained with small disparities at central locations. In all cases, the magnitude of these biases covaried with judgments of spatial unity. The results suggest that functional properties of the visual system play the predominant role in determining these visual-auditory interactions and that cross-modal biases can be substantially greater than previously noted.
23

Tomono, Keisuke, and Akira Tomono. "Cross-Modal Effect of Presenting Food Images on Taste Appetite". Sensors 20, no. 22 (November 19, 2020): 6615. http://dx.doi.org/10.3390/s20226615.

Abstract:
We researched a method to objectively evaluate the presence of food images, for the purpose of applying it to digital signage. In this paper, we defined the presence of food images as a sensation that makes us recognize that food is there, and investigated the relationship between that recognition and the salivary secretion reaction. If saliva secretion can be detected by a non-invasive method, it may be possible to objectively estimate the presence of the viewer from the outside. Two kinds of experiments were conducted. STUDY 1 included presentations of popular cooking images, which portrayed a sense of deliciousness, and evaluated changes in the volume of saliva secretions and cerebral blood flow near the temples. STUDY 2 included comparisons of changes between presenting images only and images with corresponded smells. The images included scenes that introduced foods (i.e., almond pudding cake/bergamot orange) that were relatively simple, so that they did not induce the subjects themselves. As a result, we clarified the cross-modal effects that were closely related to sense of presence and salivation. Moreover, we clarified presentation of images with smells to improve one’s sense of presence, even though the images were relatively simple.
24

Tobayama, Risa, Haruna Kusano and Kazuhiko Yokosawa. "The effect of cross-modal correspondence on memory and preference." Proceedings of the Annual Convention of the Japanese Psychological Association 84 (September 8, 2020): PI-087–PI-087. http://dx.doi.org/10.4992/pacjpa.84.0_pi-087.

25

思竹, 韩. "The Cross-Modal Affective Priming Effect of Music Priming Stimulus". Advances in Psychology 04, no. 01 (2014): 70–75. http://dx.doi.org/10.12677/ap.2014.41013.

26

Zhao, Song, Yajie Wang, Hongyuan Xu, Chengzhi Feng and Wenfeng Feng. "Early cross-modal interactions underlie the audiovisual bounce-inducing effect". NeuroImage 174 (July 2018): 208–18. http://dx.doi.org/10.1016/j.neuroimage.2018.03.036.

27

ZU, Guangyao, Shuqi LI, Tianyang ZHANG, Aijun WANG and Ming ZHANG. "Effect of inhibition of return on audiovisual cross-modal correspondence". Acta Psychologica Sinica 55, no. 8 (2023): 1220. http://dx.doi.org/10.3724/sp.j.1041.2023.01220.

28

Gold, Rinat, Dina Klein and Osnat Segal. "The Bouba-Kiki Effect in Children With Childhood Apraxia of Speech". Journal of Speech, Language, and Hearing Research 65, no. 1 (January 12, 2022): 43–52. http://dx.doi.org/10.1044/2021_jslhr-21-00070.

Abstract:
Purpose: The bouba-kiki (BK) effect refers to associations between visual shapes and auditory pseudonames. Thus, when tested, people tend to associate the pseudowords bouba and kiki with round or spiky shapes, respectively. This association requires cross-modal sensory integration. The ability to integrate information from different sensory modalities is crucial for speech development. A clinical population that may be impaired in cross-modal sensory integration is children with childhood apraxia of speech (CAS). The purpose of this study was to examine the involvement of cross-modal sensory integration in children with CAS. Method: The BK effect was assessed in participants with CAS (n = 18) and two control groups: One control group was composed of children with developmental language disorder (DLD), also termed specific language impairment (n = 15), and a second group included typically developing (TD) children (n = 22). The children were presented with 14 pairs of novel visual displays and nonwords. All the children were asked to state which shape and nonword correspond to one another. In addition, background cognitive (Leiter-3) and language measures (Hebrew PLS-4) were determined for all children. Results: Children in the CAS group were less successful in associating between visual shapes and corresponding auditory pseudonames (e.g., associating the spoken word “bouba” with a round shape; the spoken word “kiki” with a spiky shape). Thus, children with CAS demonstrated a statistically significant reduced BK effect compared with participants with TD and participants with DLD. No significant difference was found between the TD group and the DLD group. Conclusions: The reduced BK effect in children with CAS supports the notion that cross-modal sensory integration may be altered in these children. Cross-modal sensory integration is the basis for speech production. Thus, difficulties in sensory integration may contribute to speech difficulties in CAS.
29

Chen, Lihan, Qingcui Wang and Ming Bao. "Spatial References and Audio-Tactile Interaction in Cross-Modal Dynamic Capture". Multisensory Research 27, no. 1 (2014): 55–70. http://dx.doi.org/10.1163/22134808-00002441.

Abstract:
In audiotactile dynamic capture, judgment of the direction of an apparent motion stream (such as auditory motion) was impeded (hence ‘captured’) by the presentation of a concurrent, but directionally opposite apparent motion stream (such as tactile motion) from a distractor modality, leading to a cross-modal dynamic capture (CDC) effect. That is to say, the percentage of correct reporting of the direction of the target motion was reduced. Previous studies have revealed the effect of stimulus onset asynchronies (SOAs) and the potential spatial remapping (by adopting a cross-hands posture) in CDC. However, further exploration of the dynamic capture process under different postures was not available due to the fact that only two levels of time asynchronies were employed (either synchronous or with an SOA of 500 ms). This study introduced a broad range of SOAs (−400 ms to 400 ms, tactile stream preceded auditory stream or vice versa) to explore the time course of audio-tactile interaction in CDC with two spatial references — arms-uncrossed or arms-crossed postures. Participants judged the direction of auditory apparent motion with tactile distractors. The results showed that in the arms-uncrossed condition, the CDC effect was prominent when the auditory–tactile events were in the temporal integration window (0–60 ms). However, with a preceding tactile cueing effect of SOA equal to and above 150 ms, the CDC effect was reduced, and no CDC effect was observed with the arms-crossed posture. These results suggest that the CDC effect is modulated by both cross-modal interaction and the spatial reference (especially for the distractors). The magnitude of the CDC effects in audiotactile interaction may be accounted for by reliability of tactile spatial-temporal information.
30

Cui, Jiahong, Daisuke Sawamura, Satoshi Sakuraba, Ryuji Saito, Yoshinobu Tanabe, Hiroshi Miura, Masaaki Sugi et al. "Effect of Audiovisual Cross-Modal Conflict during Working Memory Tasks: A Near-Infrared Spectroscopy Study". Brain Sciences 12, no. 3 (March 3, 2022): 349. http://dx.doi.org/10.3390/brainsci12030349.

Abstract:
Cognitive conflict effects are well characterized within unimodality. However, little is known about cross-modal conflicts and their neural bases. This study characterizes the two types of visual and auditory cross-modal conflicts through working memory tasks and brain activities. The participants consisted of 31 healthy, right-handed, young male adults. The Paced Auditory Serial Addition Test (PASAT) and the Paced Visual Serial Addition Test (PVSAT) were performed under distractor and no distractor conditions. Distractor conditions comprised two conditions in which either the PASAT or PVSAT was the target task, and the other was used as a distractor stimulus. Additionally, oxygenated hemoglobin (Oxy-Hb) concentration changes in the frontoparietal regions were measured during tasks. The results showed significantly lower PASAT performance under distractor conditions than under no distractor conditions, but not in the PVSAT. Oxy-Hb changes in the bilateral ventrolateral prefrontal cortex (VLPFC) and inferior parietal cortex (IPC) significantly increased in the PASAT with distractor compared with no distractor conditions, but not in the PVSAT. Furthermore, there were significant positive correlations between Δtask performance accuracy and ΔOxy-Hb in the bilateral IPC only in the PASAT. Visual cross-modal conflict significantly impairs auditory task performance, and bilateral VLPFC and IPC are key regions in inhibiting visual cross-modal distractors.
31

Zhang, Mengqi, Fulvio Martinelli, Jian Wu, Peter J. Schmid and Maurizio Quadrio. "Modal and non-modal stability analysis of electrohydrodynamic flow with and without cross-flow". Journal of Fluid Mechanics 770 (April 1, 2015): 319–49. http://dx.doi.org/10.1017/jfm.2015.134.

Abstract:
We report the results of a complete modal and non-modal linear stability analysis of the electrohydrodynamic flow for the problem of electroconvection in the strong-injection region. Convective cells are formed by the Coulomb force in an insulating liquid residing between two plane electrodes subject to unipolar injection. Besides pure electroconvection, we also consider the case where a cross-flow is present, generated by a streamwise pressure gradient, in the form of a laminar Poiseuille flow. The effect of charge diffusion, often neglected in previous linear stability analyses, is included in the present study and a transient growth analysis, rarely considered in electrohydrodynamics, is carried out. In the case without cross-flow, a non-zero charge diffusion leads to a lower linear stability threshold and thus to a more unstable flow. The transient growth, though enhanced by increasing charge diffusion, remains small and hence cannot fully account for the discrepancy of the linear stability threshold between theoretical and experimental results. When a cross-flow is present, increasing the strength of the electric field in the high-Re Poiseuille flow yields a more unstable flow in both modal and non-modal stability analyses. Even though the energy analysis and the input–output analysis both indicate that the energy growth directly related to the electric field is small, the electric effect enhances the lift-up mechanism. The symmetry of channel flow with respect to the centreline is broken due to the additional electric field acting in the wall-normal direction. As a result, the centres of the streamwise rolls are shifted towards the injector electrode, and the optimal spanwise wavenumber achieving maximum transient energy growth increases with the strength of the electric field.
32

He, Ping, Huaying Qi, Shiyi Wang and Jiayue Cang. "Cross-Modal Sentiment Analysis of Text and Video Based on Bi-GRU Cyclic Network and Correlation Enhancement". Applied Sciences 13, no. 13 (June 25, 2023): 7489. http://dx.doi.org/10.3390/app13137489.

Abstract:
Cross-modal sentiment analysis is an emerging research area in natural language processing. The core task of cross-modal fusion lies in cross-modal relationship extraction and joint feature learning. The existing research methods of cross-modal sentiment analysis focus on static text, video, audio, and other modality data but ignore the fact that different modality data are often unaligned in practical applications. There is a long-term time dependence among unaligned data sequences, and it is difficult to explore the interaction between different modalities. The paper proposes a sentiment analysis model (UA-BFET) based on feature enhancement technology in unaligned data scenarios, which can perform sentiment analysis on unaligned text and video modality data in social media. Firstly, the model adds a cyclic memory enhancement network across time steps. Then, the obtained cross-modal fusion features with interaction are applied to the unimodal feature extraction process of the next time step in the Bi-directional Gated Recurrent Unit (Bi-GRU) so that the progressively enhanced unimodal features and cross-modal fusion features continuously complement each other. Secondly, the extracted unimodal text and video features taken jointly from the enhanced cross-modal fusion features are subjected to canonical correlation analysis (CCA) and input into the fully connected layer and Softmax function for sentiment analysis. Through experiments executed on unaligned public datasets MOSI and MOSEI, the UA-BFET model has achieved or even exceeded the sentiment analysis effect of text, video, and audio modality fusion and has outstanding advantages in solving cross-modal sentiment analysis in unaligned data scenarios.
33

Bertelsen, Anne Sjoerup, Line Ahm Mielby, Niki Alexi, Derek Victor Byrne and Ulla Kidmose. "Individual Differences in Sweetness Ratings and Cross-Modal Aroma-Taste Interactions". Foods 9, no. 2 (February 1, 2020): 146. http://dx.doi.org/10.3390/foods9020146.

Abstract:
Aroma-taste interactions, which are believed to occur due to previous coexposure (concurrent presence of aroma and taste), have been suggested as a strategy to aid sugar reduction in food and beverages. However, coexposures might be influenced by individual differences. We therefore hypothesized that aroma-taste interactions vary across individuals. The present study investigated how individual differences (gender, age, and sweet liker status) influenced the effect of aroma on sweetness intensity among young adults. An initial screening of five aromas, all congruent with sweet taste, for their sweetness enhancing effect was carried out using descriptive analysis. Among the aromas tested, vanilla was found most promising for its sweet enhancing effects and was therefore tested across three sucrose concentrations by 129 young adults. Among the subjects tested, females were found to be more susceptible to the sweetness enhancing effect of vanilla aroma than males. For males, the addition of vanilla aroma increased the sweet taste ratings significantly for the 22–25-year-olds, but not the 19–21-year-olds. Consumers were clustered according to their sweet liker status based on their liking for the samples. Although sweet taste ratings were found to vary with the sweet liker status, aroma enhanced the sweetness ratings similarly across clusters. These results call for more targeted product development in order to aid sugar reduction.
34

Bahrami Balani, Alex. "Cross-modal information transfer and the effect of concurrent task-load." Journal of Experimental Psychology: Learning, Memory, and Cognition 46, no. 1 (January 2020): 104–16. http://dx.doi.org/10.1037/xlm0000715.

35

van der Zande, Patrick, Alexandra Jesse and Anne Cutler. "Hearing words helps seeing words: A cross-modal word repetition effect". Speech Communication 59 (April 2014): 31–43. http://dx.doi.org/10.1016/j.specom.2014.01.001.

36

TAYAMA, Tadayuki, and Qiongyao SHAO. "Cross-modal study about the effect of expectation on time perception". Proceedings of the Annual Convention of the Japanese Psychological Association 77 (September 19, 2013): 3PM-044–3PM-044. http://dx.doi.org/10.4992/pacjpa.77.0_3pm-044.

37

Kwok, Sze Chai, Carlo Fantoni, Laura Tamburini, Lei Wang and Walter Gerbino. "A biphasic effect of cross-modal priming on visual shape recognition". Acta Psychologica 183 (February 2018): 43–50. http://dx.doi.org/10.1016/j.actpsy.2017.12.013.

38

HAYASHI, Jumpei, Hiromasa SASAKI and Takeo KATO. "Cross-modal Effect between Taste and Shape Controlled by Curvature Entropy". Proceedings of Design & Systems Conference 2022.32 (2022): 2413. http://dx.doi.org/10.1299/jsmedsd.2022.32.2413.

39

Bertelsen, Anne Sjoerup, Line Ahm Mielby, Derek Victor Byrne and Ulla Kidmose. "Ternary Cross-Modal Interactions between Sweetness, Aroma, and Viscosity in Different Beverage Matrices". Foods 9, no. 4 (March 30, 2020): 395. http://dx.doi.org/10.3390/foods9040395.

Abstract:
Sugar reduction in food and beverage products involves several challenges. Non-nutritive sweeteners may give unwanted off-flavors, while sugar-reduced products often lack mouthfeel. To overcome this, the addition of aroma to increase sweetness through cross-modal interactions, and the addition of hydrocolloids such as pectin to increase viscosity, have been suggested as strategies to aid sugar reduction. However, viscosity has been shown to decrease both taste and aroma intensities. An increase in viscosity may thereby affect the use of aromas as sweetness enhancers. Additionally, the effects of aromas and hydrocolloids on sweetness intensity and mouthfeel depend on the food matrix involved. The present study investigated cross-modal aroma–sweetness–viscosity interactions in two beverage matrices: water and apple nectar. The perceptual effects of vanilla aroma (0–1 mL/kg), sucrose (2.5%–7.5% w/w) and pectin (0%–0.3% w/w) were studied in both matrices. For each matrix, cross-modal interactions were analyzed with descriptive analysis using a trained sensory panel. The effect of vanilla aroma on sweetness intensity was found to be higher in apple nectar compared to in water. Furthermore, pectin affected neither taste, aroma, nor the cross-modal effects of aroma on taste in either of the matrices. These results indicate that pectin, in the studied range of concentrations, may be used to improve mouthfeel in sugar-reduced beverages, without compromising taste or aroma perception.
40

Naidu Balireddy, Somi, Pitchaimani Jeyaraj, Lenin Babu Mailan Chinnapandi and Ch V. S. N. Reddi. "Effect of lamination schemes on natural frequency and modal damping of fiber reinforced laminated beam using Ritz method". International Journal for Simulation and Multidisciplinary Design Optimization 12 (2021): 15. http://dx.doi.org/10.1051/smdo/2021016.

Abstract:
The current study focussed on analysing natural frequency and damping of laminated composite beams (LCBs) by varying fiber angle, aspect ratio, material property and boundary conditions. Ritz method with displacement field based on the shear and normal deformable theory is used and the modal damping is calculated using modal strain energy method. Effects of symmetric angle-ply and cross-ply, antisymmetric cross-ply, balanced and quasi-isotropic lay-up schemes on modal damping are presented for the first time. Results revealed that influence of lay-up scheme on natural frequencies is significant for the thin beams while the modal damping of the thin beams is not sensitive to lay-up scheme. However, the lay-up scheme influences the damping significantly for the thick beams. Similarly, high strength fiber reinforced LCBs have higher natural frequency while low strength fiber reinforced LCBs have higher damping due to the better fiber-matrix interaction.
41

Mishra, Jyoti, Antigona Martínez and Steven A. Hillyard. "Effect of Attention on Early Cortical Processes Associated with the Sound-induced Extra Flash Illusion". Journal of Cognitive Neuroscience 22, no. 8 (August 2010): 1714–29. http://dx.doi.org/10.1162/jocn.2009.21295.

Abstract:
When a single flash of light is presented interposed between two brief auditory stimuli separated by 60–100 msec, subjects typically report perceiving two flashes [Shams, L., Kamitani, Y., & Shimojo, S. Visual illusion induced by sound. Brain Research, Cognitive Brain Research, 14, 147–152, 2002; Shams, L., Kamitani, Y., & Shimojo, S. Illusions. What you see is what you hear. Nature, 408, 788, 2000]. Using ERP recordings, we previously found that perception of the illusory extra flash was accompanied by a rapid dynamic interplay between auditory and visual cortical areas that was triggered by the second sound [Mishra, J., Martínez, A., Sejnowski, T. J., & Hillyard, S. A. Early cross-modal interactions in auditory and visual cortex underlie a sound-induced visual illusion. Journal of Neuroscience, 27, 4120–4131, 2007]. In the current study, we investigated the effect of attention on the ERP components associated with the illusory extra flash in 15 individuals who perceived this cross-modal illusion frequently. All early ERP components in the cross-modal difference wave associated with the extra flash illusion were significantly enhanced by selective spatial attention. The earliest attention-related modulation was an amplitude increase of the positive-going PD110/PD120 component, which was previously shown to be correlated with an individual's propensity to perceive the illusory second flash [Mishra, J., Martínez, A., Sejnowski, T. J., & Hillyard, S. A. Early cross-modal interactions in auditory and visual cortex underlie a sound-induced visual illusion. Journal of Neuroscience, 27, 4120–4131, 2007]. The polarity of the early PD110/PD120 component did not differ as a function of the visual field (upper vs. lower) of stimulus presentation. This, along with the source localization of the component, suggested that its principal generator lies in extrastriate visual cortex. These results indicate that neural processes previously shown to be associated with the extra flash illusion can be modulated by attention, and thus are not the result of a wholly automatic cross-modal integration process.
42

Robinson, Christopher W., and Vladimir M. Sloutsky. "When Audition Dominates Vision". Experimental Psychology 60, no. 2 (November 1, 2013): 113–21. http://dx.doi.org/10.1027/1618-3169/a000177.

Abstract:
Presenting information to multiple sensory modalities sometimes facilitates and sometimes interferes with processing of this information. Research examining interference effects shows that auditory input often interferes with processing of visual input in young children (i.e., auditory dominance effect), whereas visual input often interferes with auditory processing in adults (i.e., visual dominance effect). The current study used a cross-modal statistical learning task to examine modality dominance in adults. Participants ably learned auditory and visual statistics when auditory and visual sequences were presented unimodally and when auditory and visual sequences were correlated during training. However, increasing task demands resulted in an important asymmetry: Increased task demands attenuated visual statistical learning, while having no effect on auditory statistical learning. These findings are consistent with auditory dominance effects reported in young children and have important implications for our understanding of how sensory modalities interact while learning the structure of cross-modal information.
43

Liu, Yongfei, Bo Wan, Xiaodan Zhu and Xuming He. "Learning Cross-Modal Context Graph for Visual Grounding". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11645–52. http://dx.doi.org/10.1609/aaai.v34i07.6833.

Abstract:
Visual grounding is a ubiquitous building block in many vision-language tasks and yet remains challenging due to large variations in visual and linguistic features of grounding entities, strong context effect and the resulting semantic ambiguities. Prior works typically focus on learning representations of individual phrases with limited context information. To address their limitations, this paper proposes a language-guided graph representation to capture the global context of grounding entities and their relations, and develop a cross-modal graph matching strategy for the multiple-phrase visual grounding task. In particular, we introduce a modular graph neural network to compute context-aware representations of phrases and object proposals respectively via message propagation, followed by a graph-based matching module to generate globally consistent localization of grounding phrases. We train the entire graph neural network jointly in a two-stage strategy and evaluate it on the Flickr30K Entities benchmark. Extensive experiments show that our method outperforms the prior state of the arts by a sizable margin, evidencing the efficacy of our grounding framework. Code is available at https://github.com/youngfly11/LCMCG-PyTorch.
44

Lin, Yi, Hongwei Ding and Yang Zhang. "Prosody Dominates Over Semantics in Emotion Word Processing: Evidence From Cross-Channel and Cross-Modal Stroop Effects". Journal of Speech, Language, and Hearing Research 63, no. 3 (March 23, 2020): 896–912. http://dx.doi.org/10.1044/2020_jslhr-19-00258.

Abstract:
Purpose: Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies on linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in cross-channel auditory alone task (i.e., semantics–prosody Stroop task) and cross-modal audiovisual task (i.e., semantics–prosody–face Stroop task). Method: Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated auditory pairing of emotional prosody (happy or sad) and lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by introducing visual facial expression during auditory stimulus presentation. Participants were asked to judge emotional information for each test trial according to the instruction of selective attention. Results: Accuracy and reaction time data indicated that, despite an increase in cognitive demand and task complexity in Experiment 2, prosody was consistently more salient than semantic content for emotion word processing and did not take precedence over facial expression. While congruent stimuli enhanced performance in both experiments, the facilitatory effect was smaller in Experiment 2. Conclusion: Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and congruence facilitation effect in multisensory integration. Our study contributes tonal language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploring the brain mechanisms of cross-channel/modal emotion integration with potential clinical applications.
APA, Harvard, Vancouver, ISO, etc. styles
45

Du, Qinsheng, Yingxu Bian, Jianyu Wu, Shiyan Zhang, and Jian Zhao. "Cross-Modal Adaptive Interaction Network for RGB-D Saliency Detection". Applied Sciences 14, no. 17 (August 23, 2024): 7440. http://dx.doi.org/10.3390/app14177440.

Full text source
Abstract:
The salient object detection (SOD) task aims to automatically detect the areas of an image that are most prominent to the human eye. Because RGB images and depth images carry different information, effectively integrating cross-modal features remains a major challenge in RGB-D SOD. This paper therefore proposes a cross-modal adaptive interaction network (CMANet) for RGB-D salient object detection, consisting of a cross-modal feature integration module (CMF) and an adaptive feature fusion module (AFFM). These modules integrate and enhance multi-scale features from both modalities, improve the fusion of complementary RGB and depth information, and generate richer, more representative feature maps. Extensive experiments on four RGB-D datasets verify the effectiveness of CMANet: compared with 17 RGB-D SOD methods, our model accurately detects salient regions in images and achieves state-of-the-art performance across four evaluation metrics.
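As an illustration of the kind of adaptive fusion the abstract describes, the following is a generic gated RGB-depth fusion block in PyTorch; CMANet's actual CMF/AFFM modules may differ, and all names and channel sizes here are assumptions.

```python
# Illustrative sketch only: a gated cross-modal fusion block in the spirit
# of an adaptive feature fusion module (not CMANet's exact design).
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, channels=64):  # channel count is an assumption
        super().__init__()
        # predicts a per-channel gate deciding how much depth evidence to trust
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, rgb_feat, depth_feat):
        # rgb_feat, depth_feat: (B, C, H, W) features from the two streams
        g = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))  # (B, C, 1, 1)
        fused = g * depth_feat + (1 - g) * rgb_feat              # adaptive mix
        return self.merge(fused)
```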
APA, Harvard, Vancouver, ISO, etc. styles
46

Bonetti, Leonardo, and Marco Costa. "Pitch-verticality and pitch-size cross-modal interactions". Psychology of Music 46, no. 3 (June 2, 2017): 340–56. http://dx.doi.org/10.1177/0305735617710734.

Full text source
Abstract:
Two studies were conducted on cross-modal matching between pitch and sound-source localization on the vertical axis, and between pitch and size. In the first study, 100 Hz, 200 Hz, 600 Hz, and 800 Hz tones were emitted by a loudspeaker positioned 60 cm above or below the participant's ear level. Using a speeded classification task, 30 participants had to indicate the sound source in 160 trials; both reaction times and errors were analyzed. In the congruent condition of high-pitched tones emitted from the upper loudspeaker, reaction times were significantly faster and errors significantly fewer: pitch was mapped onto the vertical axis for sound localization. A main effect of sound-source direction was also found, with tones from the upper loudspeaker recognized faster and more accurately, and males were faster than females at identifying sound-source direction. In the second experiment, 20 participants had to match 21 tones varying in pitch with 9 circles differing in visual angle across 42 trials. The results showed a clear inverse linear association between log-spaced tone pitch and circle diameter.
APA, Harvard, Vancouver, ISO, etc. styles
47

Janyan, Armina, Yury Shtyrov, Ekaterina Andriushchenko, Ekaterina Blinova, and Olga Shcherbakova. "Look and ye shall hear: Selective auditory attention modulates the audiovisual correspondence effect". i-Perception 13, no. 3 (May 2022): 204166952210958. http://dx.doi.org/10.1177/20416695221095884.

Full text source
Abstract:
One of the unresolved questions in multisensory research is whether consistent associations between sensory features from different modalities (e.g., high visual locations associated with high sound pitch) are automatic. We addressed this issue by examining a possible role of selective attention in the audiovisual correspondence effect. We orthogonally manipulated loudness and pitch, directing participants' attention to the auditory modality only and using pitch- and loudness-identification tasks. Visual stimuli in high, low, or central spatial locations appeared simultaneously with the sounds. If the correspondence effect is automatic, it should not be affected by task changes. The results, however, demonstrated a cross-modal pitch-verticality correspondence effect only when participants' attention was directed to the pitch identification task, not the loudness task; moreover, the effect was present only in the upper location. The findings underscore the involvement of selective attention in cross-modal associations and support a top-down account of audiovisual correspondence effects.
APA, Harvard, Vancouver, ISO, etc. styles
48

Feenders, Gesa, and Georg M. Klump. "Audio-Visual Interference During Motion Discrimination in Starlings". Multisensory Research 36, no. 2 (January 17, 2023): 181–212. http://dx.doi.org/10.1163/22134808-bja10092.

Full text source
Abstract:
Motion discrimination is essential for animals to avoid collisions, escape from predators, catch prey, and communicate. Although most terrestrial vertebrates can benefit from combining concurrent sound and vision stimuli to obtain the most salient percept of a moving object, there is little research on the mechanisms involved in such cross-modal motion discrimination. We used European starlings, a model species with well-studied visual and auditory systems. In a behavioural motion discrimination task with visual and acoustic stimuli, we investigated the effects of cross-modal interference and attentional processes. Our results showed an impairment of motion discrimination when the visual and acoustic stimuli moved in opposite directions compared with congruent motion. Presenting an acoustic stimulus of very short duration, which lacked directional motion information, revealed an additional alerting effect of the acoustic stimulus. Finally, we show that a temporally leading acoustic stimulus did not improve response behaviour compared with synchronous presentation of the stimuli, as would have been expected if alerting effects were dominant. This further supports the importance of congruency and synchronicity in the current test paradigm, with a minor role for attentional processes elicited by the acoustic stimulus. Together, our data clearly show cross-modal interference effects in an audio-visual motion discrimination paradigm when real-life stimuli are carefully selected under parameter conditions that meet the known criteria for cross-modal binding.
APA, Harvard, Vancouver, ISO, etc. styles
49

Li, Wenxiao, Hongyan Mei, Yutian Li, Jiayao Yu, Xing Zhang, Xiaorong Xue, and Jiahao Wang. "A Cross-Modal Hash Retrieval Method with Fused Triples". Applied Sciences 13, no. 18 (September 21, 2023): 10524. http://dx.doi.org/10.3390/app131810524.

Full text source
Abstract:
Owing to its fast retrieval speed and low storage cost, cross-modal hashing has become the primary method for cross-modal retrieval, and the emergence of deep cross-modal hashing methods has improved retrieval substantially. However, existing cross-modal hash retrieval methods still fail to utilize the dataset's supervisory information effectively and lack the ability to express similarity: label information is under-exploited, and the latent semantic relationship between the two modalities is not fully explored, which impairs judgments of cross-modal semantic similarity. To address these problems, this paper proposes Tri-CMH, a cross-modal hash retrieval method with fused triples, an end-to-end framework consisting of a feature extraction part and a hash learning part. First, the multi-modal data are preprocessed into triples, and a data supervision matrix is constructed so that samples with semantically similar labels are aggregated while samples with semantically opposite labels are separated, avoiding the under-utilization of supervisory information and efficiently exploiting global supervisory information. Meanwhile, the loss function of the hash learning part combines a Hamming-distance loss, an intra-modality loss, a cross-modality loss, and a quantization loss to explicitly constrain semantically similar and dissimilar hash codes and to improve the model's ability to judge cross-modal semantic similarity. The method is trained and tested on the IAPR-TC12, MIRFLICKR-25K, and NUS-WIDE datasets with mAP and PR curves as evaluation criteria, and the experimental results show its effectiveness and practicality.
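The combined objective described above (triplet structure plus Hamming-style and quantization terms) can be sketched as follows. This is a generic illustration rather than Tri-CMH's exact loss; the margin, weighting, and function name are assumptions.

```python
# Sketch of a triplet-based hashing objective with a quantization term
# (generic illustration, NOT Tri-CMH's published formulation).
import torch
import torch.nn.functional as F

def triplet_hash_loss(anchor, positive, negative, margin=4.0, quant_w=0.1):
    # anchor/positive/negative: (batch, bits) relaxed codes in [-1, 1]
    # For codes in {-1, +1}, squared Euclidean distance = 4 * Hamming distance,
    # so this serves as a differentiable Hamming-distance proxy.
    d_pos = ((anchor - positive) ** 2).sum(-1) / 4
    d_neg = ((anchor - negative) ** 2).sum(-1) / 4
    triplet = F.relu(d_pos - d_neg + margin).mean()  # pull positives, push negatives
    # quantization loss: push relaxed codes toward binary {-1, +1}
    quant = ((anchor.abs() - 1) ** 2).mean()
    return triplet + quant_w * quant
```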
APA, Harvard, Vancouver, ISO, etc. styles
50

Zheng, Fuzhong, Weipeng Li, Xu Wang, Luyao Wang, Xiong Zhang, and Haisu Zhang. "A Cross-Attention Mechanism Based on Regional-Level Semantic Features of Images for Cross-Modal Text-Image Retrieval in Remote Sensing". Applied Sciences 12, no. 23 (November 29, 2022): 12221. http://dx.doi.org/10.3390/app122312221.

Full text source
Abstract:
With the rapid development of remote sensing (RS) observation technology in recent years, cross-modal retrieval of RS images based on high-level semantic association has drawn attention. However, few existing studies on cross-modal retrieval of RS images have addressed the mutual interference between image semantic features caused by “multi-scene semantics”. We therefore propose CABIR, a novel cross-attention (CA) model based on regional-level semantic features of RS images for cross-modal text-image retrieval. The CA mechanism implements cross-modal information interaction, using textual semantics to guide the network in allocating weights to image regions and filtering redundant features, thereby reducing the effect of irrelevant scene semantics on retrieval. We also propose BERT plus Bi-GRU, a new approach to generating sentence-level textual features, and design a temperature-control function to keep the CA network's training stable. Our experiments suggest that CABIR not only outperforms other state-of-the-art cross-modal image retrieval methods but also shows high generalization ability and stability, with average recall rates of up to 18.12%, 48.30%, and 55.53% on the RSICD, UCM, and Sydney datasets, respectively. The proposed model offers a possible solution to the mutual interference of RS images with “multi-scene semantics” caused by complex terrain objects.
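A minimal sketch of text-guided cross-attention over image regions with a temperature term, loosely in the spirit of the mechanism described above, is shown below; the fixed tau value, the pooling choice, and the function name are assumptions, not CABIR's published design.

```python
# Sketch of text-guided attention over region features with a temperature
# term (illustrative only; not CABIR's exact cross-attention module).
import torch
import torch.nn.functional as F

def text_guided_attention(text_q, region_feats, tau=0.1):
    # text_q: (dim,) sentence embedding; region_feats: (num_regions, dim)
    scores = region_feats @ text_q                 # relevance of each region to the text
    weights = F.softmax(scores / tau, dim=0)       # low tau sharpens focus, down-
                                                   # weighting irrelevant scene regions
    return (weights.unsqueeze(-1) * region_feats).sum(0)  # filtered image embedding
```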
APA, Harvard, Vancouver, ISO, etc. styles