
Journal articles on the topic 'Multisensory fusion'


Consult the top 50 journal articles for your research on the topic 'Multisensory fusion.'


Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

de Winkel, Ksander N., Mikhail Katliar, and Heinrich H. Bülthoff. "Forced Fusion in Multisensory Heading Estimation." PLOS ONE 10, no. 5 (May 4, 2015): e0127104. http://dx.doi.org/10.1371/journal.pone.0127104.

2

Prsa, Mario, Steven Gale, and Olaf Blanke. "Self-motion leads to mandatory cue fusion across sensory modalities." Journal of Neurophysiology 108, no. 8 (October 15, 2012): 2282–91. http://dx.doi.org/10.1152/jn.00439.2012.

Abstract:
When perceiving properties of the world, we effortlessly combine multiple sensory cues into optimal estimates. Estimates derived from the individual cues are generally retained once the multisensory estimate is produced and discarded only if the cues stem from the same sensory modality (i.e., mandatory fusion). Does multisensory integration differ in that respect when the object of perception is one's own body, rather than an external variable? We quantified how humans combine visual and vestibular information for perceiving own-body rotations and specifically tested whether such idiothetic cues are subjected to mandatory fusion. Participants made extensive size comparisons between successive whole body rotations using only visual, only vestibular, and both senses together. Probabilistic descriptions of the subjects' perceptual estimates were compared with a Bayes-optimal integration model. Similarity between model predictions and experimental data echoed a statistically optimal mechanism of multisensory integration. Most importantly, size discrimination data for rotations composed of both stimuli was best accounted for by a model in which only the bimodal estimator is accessible for perceptual judgments as opposed to an independent or additive use of all three estimators (visual, vestibular, and bimodal). Indeed, subjects' thresholds for detecting two multisensory rotations as different from one another were, in pertinent cases, larger than those measured using either single-cue estimate alone. Rotations different in terms of the individual visual and vestibular inputs but quasi-identical in terms of the integrated bimodal estimate became perceptual metamers. This reveals an exceptional case of mandatory fusion of cues stemming from two different sensory modalities.
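
For readers who want to experiment with the statistically optimal integration described above, a minimal sketch of reliability-weighted fusion of two independent Gaussian cues follows; the means and variances are illustrative, not values from the study:

```python
# Bayes-optimal fusion of two independent Gaussian cues (e.g. a visual and
# a vestibular rotation estimate). Under mandatory fusion, only the fused
# estimate below would remain accessible to perceptual judgments.

def fuse_gaussian_cues(mu_a, var_a, mu_b, var_b):
    """Return mean and variance of the optimal bimodal estimate."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # reliability weight of cue A
    mu = w_a * mu_a + (1 - w_a) * mu_b           # reliability-weighted average
    var = 1 / (1 / var_a + 1 / var_b)            # fused variance shrinks
    return mu, var

mu, var = fuse_gaussian_cues(mu_a=32.0, var_a=4.0, mu_b=28.0, var_b=1.0)
print(mu, var)  # 28.8 0.8 -- pulled toward the more reliable cue
```
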
3

Fang, Chaoming, Bowei He, Yixuan Wang, Jin Cao, and Shuo Gao. "EMG-Centered Multisensory Based Technologies for Pattern Recognition in Rehabilitation: State of the Art and Challenges." Biosensors 10, no. 8 (July 26, 2020): 85. http://dx.doi.org/10.3390/bios10080085.

Abstract:
In the field of rehabilitation, the electromyography (EMG) signal plays an important role in interpreting patients’ intentions and physical conditions. Nevertheless, utilizing merely the EMG signal suffers from difficulty in recognizing slight body movements, and the detection accuracy is strongly influenced by environmental factors. To address these issues, multisensory integration-based EMG pattern recognition (PR) techniques have been developed in recent years, and fruitful results have been demonstrated in diverse rehabilitation scenarios, such as high-accuracy locomotion detection and prosthesis control. Owing to the importance and rapid development of EMG-centered multisensory fusion technologies in rehabilitation, this paper reviews both the theories and the applications of this emerging field. The principle of EMG signal generation and the current pattern recognition process are explained in detail, including signal preprocessing, feature extraction, and classification algorithms. Mechanisms of collaboration between two important multisensory fusion strategies (kinetic and kinematic) and EMG information are thoroughly explained; corresponding applications are studied, and the pros and cons are discussed. Finally, the main challenges in EMG-centered multisensory pattern recognition are discussed, and future research directions in this area are outlined.
4

Song, Il Young, Vladimir Shin, Seokhyoung Lee, and Won Choi. "Multisensor Estimation Fusion of Nonlinear Cost Functions in Mixed Continuous-Discrete Stochastic Systems." Mathematical Problems in Engineering 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/218381.

Abstract:
We propose centralized and distributed fusion algorithms for estimation of a nonlinear cost function (NCF) in multisensory mixed continuous-discrete stochastic systems. The NCF represents a nonlinear multivariate functional of the state variables. For polynomial NCFs, we propose a closed-form estimation procedure based on recursive formulas for the high-order moments of a multivariate normal distribution. In the general case, the unscented transformation is used to calculate nonlinear estimates of a cost function. To fuse local state estimates, the mixed differential-difference equations for the error cross-covariance between local estimates are derived. The subsequent application of the proposed fusion estimators in a multisensory environment demonstrates their effectiveness.
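
As a rough illustration of the general (non-polynomial) case mentioned in the abstract, the sketch below estimates the expected value of a nonlinear cost of a Gaussian state with a plain unscented transformation. The scaling parameter and the example cost function are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

# Unscented-transform estimate of E[c(x)] for x ~ N(mean, cov) and a
# nonlinear cost c: propagate 2n+1 sigma points through c and average.

def unscented_cost_estimate(mean, cov, cost, kappa=1.0):
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # sigma-point spread
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    weights = [kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * (2 * n)
    return sum(w * cost(s) for w, s in zip(weights, sigma))

mean = np.array([1.0, 2.0])
cov = np.array([[0.5, 0.1], [0.1, 0.3]])
print(unscented_cost_estimate(mean, cov, cost=lambda x: x[0] ** 2 * np.sin(x[1])))
```
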
5

Kiemel, Tim, Kelvin S. Oie, and John J. Jeka. "Multisensory fusion and the stochastic structure of postural sway." Biological Cybernetics 87, no. 4 (October 1, 2002): 262–77. http://dx.doi.org/10.1007/s00422-002-0333-2.

6

Chen, Cheng, and Hong Hua Wang. "Research on Signal Detection Method of High Precision Based on Bayesian Fusion of Multisensory System." Advanced Materials Research 945-949 (June 2014): 1962–67. http://dx.doi.org/10.4028/www.scientific.net/amr.945-949.1962.

Abstract:
Faced with the low detection rate and low credibility of a single sensor in the presence of noise, this paper proposes Bayesian fusion based on a multisensory system and discusses its detection rate and credibility. Simulation results show that Bayesian fusion is a feasible, high-precision detection method that improves both detection rate and credibility.
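
A minimal sketch of the kind of Bayesian detection fusion the abstract describes, assuming independent sensors with known detection and false-alarm probabilities (all numbers illustrative):

```python
# Fuse binary detect/no-detect reports from independent sensors into a
# posterior probability that a signal is present.

def fuse_detections(prior, reports):
    """reports: list of (detected, p_detection, p_false_alarm) triples."""
    like_present, like_absent = prior, 1.0 - prior
    for detected, pd, pfa in reports:
        like_present *= pd if detected else (1.0 - pd)
        like_absent *= pfa if detected else (1.0 - pfa)
    return like_present / (like_present + like_absent)

posterior = fuse_detections(
    prior=0.1,
    reports=[(True, 0.9, 0.05), (True, 0.8, 0.10), (False, 0.7, 0.02)],
)
print(posterior)  # fused belief, far more credible than any single report
```
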
7

Wang, Jinjiang, Junyao Xie, Rui Zhao, Laibin Zhang, and Lixiang Duan. "Multisensory fusion based virtual tool wear sensing for ubiquitous manufacturing." Robotics and Computer-Integrated Manufacturing 45 (June 2017): 47–58. http://dx.doi.org/10.1016/j.rcim.2016.05.010.

8

Stevenson, Ryan A., and Mark T. Wallace. "The Multisensory Temporal Binding Window: Perceptual Fusion, Training, and Autism." i-Perception 2, no. 8 (October 2011): 760. http://dx.doi.org/10.1068/ic760.

9

Makarau, Aliaksei, Gintautas Palubinskas, and Peter Reinartz. "Alphabet-Based Multisensory Data Fusion and Classification Using Factor Graphs." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 6, no. 2 (April 2013): 969–90. http://dx.doi.org/10.1109/jstars.2012.2219507.

10

Ernst, M. O. "From independence to fusion: A comprehensive model for multisensory integration." Journal of Vision 5, no. 8 (March 17, 2010): 650. http://dx.doi.org/10.1167/5.8.650.

11

Alyannezhadi, Mohammad M., Ali A. Pouyan, and Vahid Abolghasemi. "An efficient algorithm for multisensory data fusion under uncertainty condition." Journal of Electrical Systems and Information Technology 4, no. 1 (May 2017): 269–78. http://dx.doi.org/10.1016/j.jesit.2016.08.002.

12

Machida, Kazuo, Yoshitsugu Toda, and Mitsushige Oda. "Precise In-Orbit Servicing by Multisensory Hand-Connected with Long Arm." Journal of Robotics and Mechatronics 12, no. 4 (August 20, 2000): 371–77. http://dx.doi.org/10.20965/jrm.2000.p0371.

Abstract:
This paper presents a space experiment on in-orbit servicing from a chaser satellite to a target satellite, using a multisensory hand connected to a long arm. The experiment was carried out to acquire the technology that enables a long robot arm to perform high-precision tasks through the smart hand. A three-finger multisensory hand, ARH (Advanced Robotic Hand), is connected to the 2.4 m long manipulator arm, ERA (ETS Robot Arm), in orbit, and sample retrieval from "Orihime" to "Hikoboshi" is achieved in this configuration. The work environment is measured by sensor fusion of the range sensors, hand-eye camera, and contact sensors, and the world model is precisely calibrated before the task. The sample retrieval is performed successfully thanks to position/force hybrid control of the arm and fine compensation by the hand mechanism under multisensory monitoring.
13

Minor, Christian P., Daniel A. Steinhurst, Kevin J. Johnson, Susan L. Rose-Pehrsson, Jeffrey C. Owrutsky, Stephen C. Wales, and Daniel T. Gottuk. "Multisensory Detection System for Damage Control and Situational Awareness." International Journal of High Speed Electronics and Systems 18, no. 3 (September 2008): 575–92. http://dx.doi.org/10.1142/s0129156408005588.

Abstract:
A data fusion-based, multisensory detection system, called “Volume Sensor”, was developed under the Advanced Damage Countermeasures (ADC) portion of the US Navy's Future Naval Capabilities (FNC) program to meet reduced manning goals. A diverse group of sensing modalities was chosen to provide an automated damage control monitoring capability that could be constructed at relatively low cost and easily integrated into existing ship infrastructure. Volume Sensor employs an efficient, scalable, and adaptable design framework that can serve as a template for heterogeneous sensor network integration for situational awareness. In the development of Volume Sensor, a number of challenges were addressed and met with solutions that are applicable to heterogeneous sensor networks of any type. These solutions include: 1) a uniform but general format for encapsulating sensor data, 2) a communications protocol for the transfer of sensor data and for command and control of networked sensor systems, 3) event-specific data fusion algorithms, and 4) a modular and scalable system architecture. In full-scale testing in a shipboard environment, two prototype Volume Sensor systems demonstrated the capability to provide highly accurate and timely situational awareness regarding damage control events while imparting a negligible footprint on the ship's 100 Mbps Ethernet network and maintaining smooth and reliable real-time operation.
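
The paper's uniform encapsulation format is not reproduced in this listing; a hypothetical envelope in the same spirit might look like the sketch below, with every field name invented for illustration:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical uniform envelope for heterogeneous sensor readings, in the
# spirit of Volume Sensor's "uniform but general format". Field names and
# structure are invented, not taken from the paper.

@dataclass
class SensorMessage:
    sensor_id: str    # unique network-wide identifier
    modality: str     # e.g. "acoustic", "ir", "smoke"
    timestamp: float  # seconds since epoch
    payload: dict     # modality-specific readings
    quality: float    # self-reported confidence in [0, 1]

    def to_wire(self) -> bytes:
        return json.dumps(asdict(self)).encode("utf-8")

msg = SensorMessage("deck2-ir-07", "ir", time.time(), {"flux": 0.83}, 0.95)
print(msg.to_wire())
```
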
14

Axenie, Cristian, Christoph Richter, and Jörg Conradt. "A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics." Sensors 16, no. 10 (October 20, 2016): 1751. http://dx.doi.org/10.3390/s16101751.

15

Fazeli, N., M. Oller, J. Wu, Z. Wu, J. B. Tenenbaum, and A. Rodriguez. "See, feel, act: Hierarchical learning for complex manipulation skills with multisensory fusion." Science Robotics 4, no. 26 (January 30, 2019): eaav3123. http://dx.doi.org/10.1126/scirobotics.aav3123.

Abstract:
Humans are able to seamlessly integrate tactile and visual stimuli with their intuitions to explore and execute complex manipulation skills. They not only see but also feel their actions. Most current robotic learning methodologies exploit recent progress in computer vision and deep learning to acquire data-hungry pixel-to-action policies. These methodologies do not exploit intuitive latent structure in physics or tactile signatures. Tactile reasoning is omnipresent in the animal kingdom, yet it is underdeveloped in robotic manipulation. Tactile stimuli are only acquired through invasive interaction, and interpretation of the data stream together with visual stimuli is challenging. Here, we propose a methodology to emulate hierarchical reasoning and multisensory fusion in a robot that learns to play Jenga, a complex game that requires physical interaction to be played effectively. The game mechanics were formulated as a generative process using a temporal hierarchical Bayesian model, with representations for both behavioral archetypes and noisy block states. This model captured descriptive latent structures, and the robot learned probabilistic models of these relationships in force and visual domains through a short exploration phase. Once learned, the robot used this representation to infer block behavior patterns and states as it played the game. Using its inferred beliefs, the robot adjusted its behavior with respect to both its current actions and its game strategy, similar to the way humans play the game. We evaluated the performance of the approach against three standard baselines and show its fidelity on a real-world implementation of the game.
16

Song, Il Young, Vladimir Shin, Seokhyoung Lee, and Won Choi. "Estimation fusion of nonlinear cost functions with application to multisensory Kalman filtering." Journal of the Franklin Institute 351, no. 10 (October 2014): 4672–87. http://dx.doi.org/10.1016/j.jfranklin.2014.07.011.

17

Shao, Haidong, Jing Lin, Liangwei Zhang, Diego Galar, and Uday Kumar. "A novel approach of multisensory fusion to collaborative fault diagnosis in maintenance." Information Fusion 74 (October 2021): 65–76. http://dx.doi.org/10.1016/j.inffus.2021.03.008.

18

Rövid, András, Viktor Remeli, Norbert Paufler, Henrietta Lengyel, Máté Zöldy, and Zsolt Szalay. "Towards Reliable Multisensory Perception and Its Automotive Applications." Periodica Polytechnica Transportation Engineering 48, no. 4 (July 7, 2020): 334–40. http://dx.doi.org/10.3311/pptr.15921.

Abstract:
Autonomous driving poses numerous challenging problems, one of which is perceiving and understanding the environment. Since self-driving is safety-critical and many actions taken during driving rely on the outcome of various perception algorithms (for instance, all traffic participants and infrastructural objects in the vehicle's surroundings must be reliably recognized and localized), perception may be considered one of the most critical subsystems in an autonomous vehicle. The perception task can be further decomposed into various sub-problems, such as object detection, lane detection, traffic sign detection, and environment modeling. In this paper the focus is on fusion models in general (giving support for multisensory data processing) and on related automotive applications such as object detection, traffic sign recognition, end-to-end driving models, and an example of decision making in multi-criterial traffic situations that are complex both for human drivers and for self-driving vehicles.
19

Spence, Charles. "Multisensory Flavour Perception: Blending, Mixing, Fusion, and Pairing within and between the Senses." Foods 9, no. 4 (April 1, 2020): 407. http://dx.doi.org/10.3390/foods9040407.

Abstract:
This review summarizes the various outcomes that may occur when two or more elements are paired in the context of flavour perception. In the first part, I review the literature concerning what happens when flavours, ingredients, and/or culinary techniques are deliberately combined in a dish, drink, or food product. Sometimes the result is fusion but, if one is not careful, the result can equally well be confusion instead. In fact, blending, mixing, fusion, and flavour pairing all provide relevant examples of how the elements in a carefully-crafted multi-element tasting experience may be combined. While the aim is sometimes to obscure the relative contributions of the various elements to the mix (as in the case of blending), at other times, consumers/tasters are explicitly encouraged to contemplate/perceive the nature of the relationship between the contributing elements instead (e.g., as in the case of flavour pairing). There has been a noticeable surge in both popular and commercial interest in fusion foods and flavour pairing in recent years, and several of the ‘rules’ that have been put forward to help explain the successful combination of the elements in such food and/or beverage experiences are discussed. In the second part of the review, I examine the pairing of flavour stimuli with music/soundscapes in the emerging field of ‘sonic seasoning’. I suggest that the various perceptual pairing principles/outcomes identified when flavours are paired deliberately can also be meaningfully extended to provide a coherent framework for categorizing the ways in which what we hear can influence our flavour experiences, in terms of both the sensory-discriminative and the hedonic response.
20

Hartnagel, David, Alain Bichot, and Corinne Roumes. "Eye Position Affects Audio — Visual Fusion in Darkness." Perception 36, no. 10 (October 2007): 1487–96. http://dx.doi.org/10.1068/p5847.

Abstract:
We investigated the frame of reference involved in audio-visual (AV) fusion over space. This multisensory phenomenon refers to the perception of unity resulting from visual and auditory stimuli despite their potential spatial disparity. The extent of this illusion depends on the eccentricity in azimuth of the bimodal stimulus (Godfroy et al, 2003, Perception 32: 1233–1245). In a previous study, conducted in a luminous environment, Roumes et al (2004, Perception 33, Supplement, 142) showed that variation of AV fusion is gaze-dependent. Here we examine the contribution of ego- or allocentric visual cues by conducting the experiment in total darkness. Auditory and visual stimuli were displayed in synchrony with various spatial disparities. Subjects had to judge their unity (‘fusion’ or ‘no fusion’). Results showed that AV fusion in darkness remains gaze-dependent despite the lack of any allocentric cues and confirmed the hypothesis that the reference frame of the bimodal space is neither head-centred nor eye-centred.
21

Li, J., B. Yang, Y. Cong, S. Li, and Y. Yue. "Integration of a Low-Cost Multisensory UAV System for Forest Application." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1027–31. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1027-2019.

Abstract:
To integrate multi-spectral imagery and laser scanning data for forest management, a low-cost multisensory UAV system, named Kylin Cloud, is introduced in this paper. Kylin Cloud is composed of several low-cost sensors (i.e., a GNSS receiver, IMU, global shutter camera, multispectral camera, and laser scanner), providing the fusion of imagery and laser scanning data for reliable forest inventory. Experiments were undertaken in a forest park in Wuhan. Results showed that the registration error between the multispectral Digital Orthophoto Map (DOM) and the laser scanning data is about one pixel, demonstrating the high potential of the proposed low-cost system.
22

Dong, Hongzhao, Yuting Zhao, and Minghe Li. "An Approach of Multiplexing for Bus Lanes Based on VII and Multisensory Information Fusion." Sensor Letters 9, no. 5 (October 1, 2011): 1968–73. http://dx.doi.org/10.1166/sl.2011.1541.

23

Song, Ha Ryong, Il Young Song, and Vladimir Shin. "Multisensory Prediction Fusion of Nonlinear Functions of the State Vector in Discrete-Time Systems." International Journal of Distributed Sensor Networks 11, no. 11 (January 2015): 249857. http://dx.doi.org/10.1155/2015/249857.

24

Lee, Seokhyoung, Moongu Jeon, and Vladimir Shin. "Distributed Estimation Fusion With Application to a Multisensory Vehicle Suspension System With Time Delays." IEEE Transactions on Industrial Electronics 59, no. 11 (November 2012): 4475–82. http://dx.doi.org/10.1109/tie.2011.2182010.

25

Li, Yun, Shu Sun, and Gang Hao. "A Weighted Measurement Fusion Particle Filter for Nonlinear Multisensory Systems Based on Gauss–Hermite Approximation." Sensors 17, no. 10 (September 28, 2017): 2222. http://dx.doi.org/10.3390/s17102222.

26

Oie, Kelvin S., Tim Kiemel, and John J. Jeka. "Multisensory fusion: simultaneous re-weighting of vision and touch for the control of human posture." Cognitive Brain Research 14, no. 1 (June 2002): 164–76. http://dx.doi.org/10.1016/s0926-6410(02)00071-x.

27

Wang, Jinjiang, Junyao Xie, Rui Zhao, Kezhi Mao, and Laibin Zhang. "A New Probabilistic Kernel Factor Analysis for Multisensory Data Fusion: Application to Tool Condition Monitoring." IEEE Transactions on Instrumentation and Measurement 65, no. 11 (November 2016): 2527–37. http://dx.doi.org/10.1109/tim.2016.2584238.

28

Yang, Xiuzhu, Xinyue Zhang, Yi Ding, and Lin Zhang. "Indoor Activity and Vital Sign Monitoring for Moving People with Multiple Radar Data Fusion." Remote Sensing 13, no. 18 (September 21, 2021): 3791. http://dx.doi.org/10.3390/rs13183791.

Abstract:
The monitoring of human activity and vital signs plays a significant role in remote health-care. Radar provides a non-contact monitoring approach without privacy and illumination concerns. However, multiple people in a narrow indoor environment bring dense multipaths for activity monitoring, and the received vital sign signals are heavily distorted with body movements. This paper proposes a framework based on Frequency Modulated Continuous Wave (FMCW) and Impulse Radio Ultra-Wideband (IR-UWB) radars to address these challenges, designing intelligent spatial-temporal information fusion for activity and vital sign monitoring. First, a local binary pattern (LBP) and energy features are extracted from FMCW radar, combined with the wavelet packet transform (WPT) features on IR-UWB radar for activity monitoring. Then the additional information guided fusing network (A-FuseNet) is proposed with a modified generative and adversarial structure for vital sign monitoring. A Cascaded Convolutional Neural Network (CCNN) module and a Long Short Term Memory (LSTM) module are designed as the fusion sub-network for vital sign information extraction and multisensory data fusion, while a discrimination sub-network is constructed to optimize the fused heartbeat signal. In addition, the activity and movement characteristics are introduced as additional information to guide the fusion and optimization. A multi-radar dataset with an FMCW and two IR-UWB radars in a cotton tent, a small room and a wide lobby is constructed, and the accuracies of activity and vital sign monitoring achieve 99.9% and 92.3% respectively. Experimental results demonstrate the superiority and robustness of the proposed framework.
29

Riley, Hubert Bryan, David Solomon Raj Kondru, and Mehmet Celenk. "IR Sensing Embedded System Development for Prototype Mobile Platform and Multisensory Data Fusion for Autonomous Convoy." Advances in Science, Technology and Engineering Systems Journal 3, no. 4 (August 2018): 372–77. http://dx.doi.org/10.25046/aj030438.

30

Oie, Kelvin S., Tim Kiemel, and John J. Jeka. "Human multisensory fusion of vision and touch: detecting non-linearity with small changes in the sensory environment." Neuroscience Letters 315, no. 3 (November 2001): 113–16. http://dx.doi.org/10.1016/s0304-3940(01)02348-5.

31

Ma, Meng, Chuang Sun, Xuefeng Chen, Xingwu Zhang, and Ruqiang Yan. "A Deep Coupled Network for Health State Assessment of Cutting Tools Based on Fusion of Multisensory Signals." IEEE Transactions on Industrial Informatics 15, no. 12 (December 2019): 6415–24. http://dx.doi.org/10.1109/tii.2019.2912428.

32

Nagla, KS, Moin Uddin, and Dilbag Singh. "Dedicated Filter for Robust Occupancy Grid Mapping." IAES International Journal of Robotics and Automation (IJRA) 4, no. 1 (March 1, 2014): 82. http://dx.doi.org/10.11591/ijra.v4i1.pp82-92.

Abstract:
Sensor-based perception of the environment is an emerging area of mobile robot research in which sensors play a pivotal role. For autonomous mobile robots, the fundamental requirement is the conversion of range information into a high-level internal representation. Internal representation in the form of an occupancy grid is commonly used in autonomous mobile robots due to its various advantages. Several kinds of sensors, such as vision sensors, laser range finders, and ultrasonic and infrared sensors, play roles in mapping. However, sensor information failure, sensor inaccuracies, noise, and slow response are the major causes of error in mapping. To improve the reliability of mobile robot mapping, multisensory data fusion is considered an optimal solution. This paper presents a novel sensor fusion framework in which a dedicated filter (DF) is proposed to increase the robustness of the occupancy grid for indoor environments. The technique has been experimentally verified for different indoor test environments, and the proposed configuration shows improvement in the occupancy grid with the implementation of dedicated filters.
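
The dedicated filter itself is specific to the paper, but the occupancy grid representation it protects is standard. A minimal log-odds cell update, with illustrative sensor-model constants, looks like this:

```python
import numpy as np

# Log-odds occupancy grid update: each reading adds evidence for or
# against occupancy; repeated consistent hits drive the probability
# toward 1. The 0.7/0.3 inverse sensor model is illustrative.

L_OCC = np.log(0.7 / 0.3)
L_FREE = np.log(0.3 / 0.7)

grid = np.zeros((50, 50))  # log-odds; 0 means p(occupied) = 0.5

def update_cell(grid, i, j, hit):
    grid[i, j] += L_OCC if hit else L_FREE

for _ in range(3):  # three consistent "hit" readings for one cell
    update_cell(grid, 10, 12, hit=True)

p = 1.0 / (1.0 + np.exp(-grid[10, 12]))  # back to probability
print(p)  # ~0.93 after three hits
```
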
33

Lindborg, Alma, and Tobias S. Andersen. "Bayesian binding and fusion models explain illusion and enhancement effects in audiovisual speech perception." PLOS ONE 16, no. 2 (February 19, 2021): e0246986. http://dx.doi.org/10.1371/journal.pone.0246986.

Abstract:
Speech is perceived with both the ears and the eyes. Adding congruent visual speech improves the perception of a faint auditory speech stimulus, whereas adding incongruent visual speech can alter the perception of the utterance. The latter phenomenon is exemplified by the McGurk illusion, where an auditory stimulus such as “ba” dubbed onto a visual stimulus such as “ga” produces the illusion of hearing “da”. Bayesian models of multisensory perception suggest that both the enhancement and the illusion case can be described as a two-step process of binding (informed by prior knowledge) and fusion (informed by the information reliability of each sensory cue). However, there is to date no study which has accounted for how they each contribute to audiovisual speech perception. In this study, we expose subjects to both congruent and incongruent audiovisual speech, manipulating the binding and the fusion stages simultaneously. This is done by varying both temporal offset (binding) and auditory and visual signal-to-noise ratio (fusion). We fit two Bayesian models to the behavioural data and show that they can both account for the enhancement effect in congruent audiovisual speech, as well as the McGurk illusion. This modelling approach allows us to disentangle the effects of binding and fusion on behavioural responses. Moreover, we find that these models have greater predictive power than a forced fusion model. This study provides a systematic and quantitative approach to measuring audiovisual integration in the perception of the McGurk illusion as well as congruent audiovisual speech, which we hope will inform future work on audiovisual speech perception.
34

Huang, Zhiwen, Jianmin Zhu, Jingtao Lei, Xiaoru Li, and Fengqing Tian. "Tool Wear Predicting Based on Multisensory Raw Signals Fusion by Reshaped Time Series Convolutional Neural Network in Manufacturing." IEEE Access 7 (2019): 178640–51. http://dx.doi.org/10.1109/access.2019.2958330.

35

Liu, Qing (Charlie), and Hsu-Pin (Ben) Wang. "A case study on multisensor data fusion for imbalance diagnosis of rotating machinery." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 15, no. 3 (June 2001): 203–10. http://dx.doi.org/10.1017/s0890060401153011.

Abstract:
Techniques for machine condition monitoring and diagnostics are gaining acceptance in various industrial sectors. They have proved to be effective in predictive or proactive maintenance and quality control. Along with the fast development of computer and sensing technologies, sensors are being increasingly used to monitor machine status. In recent years, the fusion of multisensor data has been applied to diagnose machine faults. In this study, multisensors are used to collect signals of rotating imbalance vibration of a test rig. The characteristic features of each vibration signal are extracted with an auto-regressive (AR) model. Data fusion is then implemented with a Cascade-Correlation (CC) neural network. The results clearly show that multisensor data-fusion-based diagnostics outperforms the single sensor diagnostics with statistical significance.
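
A minimal sketch of the feature-extraction step described above: fitting an AR model to a vibration signal by least squares and using the coefficients as a feature vector. The model order and the synthetic signal are illustrative:

```python
import numpy as np

# Auto-regressive (AR) feature extraction: x[t] is regressed on its
# previous `order` samples, and the fitted coefficients become features.

def ar_features(signal, order=4):
    x = np.asarray(signal, dtype=float)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # feature vector handed to the fusion/classifier stage

rng = np.random.default_rng(0)
t = np.arange(2048) / 1000.0
vibration = np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(t.size)
print(ar_features(vibration))
```
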
36

Petković, Miro, Igor Vujović, and Ivica Kuzmanić. "An Overview on Horizon Detection Methods in Maritime Video Surveillance." Transactions on Maritime Science 9, no. 1 (April 20, 2020): 106–12. http://dx.doi.org/10.7225/toms.v09.n01.010.

Abstract:
Interest in video surveillance in the maritime industry has been increasing over the past decade. The maritime transportation system is a vital part of the world's economy, and the extent of global ship traffic is increasing. This trend encourages the development of intelligent surveillance systems in the maritime zone. Their development includes sensor and data fusion, which incorporates multispectral and multisensory data to replace the traditional radar-only approach. Video cameras are widely used since they capture images of greater resolution than most sensor systems; combined with video analytics, they provide highly capable sensors, complex pattern-recognition analytics, and multiple variables for the decision-making process. In this paper, an overview of one small part of the system, horizon detection, is presented.
37

Wang, Aijun, Heng Zhou, Wei Yu, Fan Zhang, Hanbin Sang, Xiaoyu Tang, Tianyang Zhang, and Ming Zhang. "Repetition Suppression in Visual and Auditory Modalities Affects the Sound-Induced Flash Illusion." Perception 50, no. 6 (May 26, 2021): 489–507. http://dx.doi.org/10.1177/03010066211018614.

Abstract:
Sound-induced flash illusion (SiFI) refers to the illusion that the number of visual flashes is equal to the number of auditory sounds when the visual flashes are accompanied by an unequal number of auditory sounds presented within 100 ms. The effect of repetition suppression (RS), an adaptive effect caused by stimulus repetition, upon the SiFI has not been investigated. Based on the classic SiFI paradigm, the present study investigated whether RS would affect the SiFI differently by adding preceding stimuli in visual and auditory modalities prior to the appearance of audiovisual stimuli. The results showed the auditory RS effect on the SiFI varied with the number of preceding auditory stimuli. The hit rate was higher with two preceding auditory stimuli than one preceding auditory stimulus in fission illusion, but it did not affect the size of the fusion illusion. However, the visual RS had no effect on the size of the fission and fusion illusions. The present study suggested that RS could affect the SiFI, indicating that the RS effect in different modalities would differentially affect the magnitude of the SiFI. In the process of multisensory integration, the visual and auditory modalities had asymmetrical RS effects.
38

Liu, Bao Jun. "Study on Multisensor Data Fusion of Ultrasonic Sensor." Advanced Materials Research 722 (July 2013): 44–48. http://dx.doi.org/10.4028/www.scientific.net/amr.722.44.

Abstract:
Aiming at the fusion of repeated distance measurements from an autonomous car's multiple sensors, a novel fusion method is proposed based on approach degree and weights. The method calculates the mean and variance of the measured sensor data; using the maximum and minimum approach degree of the resulting fuzzy set, the approach degree of the measured data from the various sensors is processed quantitatively. Outlier data are eliminated by the Grubbs method, and the weights of the measured data are assigned reasonably in the fusion process, so that the final expression of the data fusion is obtained and multisensor data fusion is realized. Test results demonstrate that this method achieves higher fusion precision and is well suited to microcontroller and embedded systems applications.
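
A rough sketch of the pipeline the abstract outlines: Grubbs-based outlier rejection followed by a consensus-weighted average. The membership function standing in for the paper's approach degree is an assumption, since the abstract does not specify it:

```python
import numpy as np
from scipy import stats

# Reject outlier readings with a two-sided Grubbs test, then fuse the
# survivors with weights that favour values close to the consensus.

def grubbs_filter(x, alpha=0.05):
    x = np.asarray(x, dtype=float)
    while len(x) > 2:
        n = len(x)
        idx = np.argmax(np.abs(x - x.mean()))
        g = abs(x[idx] - x.mean()) / x.std(ddof=1)
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t ** 2 / (n - 2 + t ** 2))
        if g <= g_crit:
            break
        x = np.delete(x, idx)  # drop the most extreme reading
    return x

def fuse(readings):
    x = grubbs_filter(readings)
    w = 1.0 / (1.0 + np.abs(x - np.median(x)))  # stand-in "approach degree"
    return np.sum(w * x) / np.sum(w)

print(fuse([2.51, 2.49, 2.52, 2.50, 3.40]))  # the 3.40 reading is rejected
```
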
39

Schlegel, Peter, Lars C. Gussen, Daniel Frank, and Robert H. Schmitt. "Modeling perceived quality of haptic impressions based on various sensor data sources." Sensor Review 38, no. 3 (June 18, 2018): 289–97. http://dx.doi.org/10.1108/sr-07-2017-0123.

Abstract:
Purpose: This paper aims to provide an approach for modeling haptic impressions of surfaces over a wide range of applications by using multiple sensor sources. Design/methodology/approach: A multisensory measurement experiment was conducted using various leather and artificial leather surfaces. After processing of the measurement data and feature extraction, different learning algorithms were applied to the measurement data and to a corresponding set of data from a sensory study. The study contained evaluations of the same surfaces regarding descriptors of haptic quality (e.g., roughness) by human subjects and was conducted in a former research project. Findings: The research revealed that it is possible to model and project haptic impressions by using multiple sensor sources in combination with data fusion. The presented method possesses the potential for industrial application. Originality/value: This paper provides a new approach to predicting haptic impressions of surfaces by using multiple sensor sources.
40

Matchin, William, Kier Groulx, and Gregory Hickok. "Audiovisual Speech Integration Does Not Rely on the Motor System: Evidence from Articulatory Suppression, the McGurk Effect, and fMRI." Journal of Cognitive Neuroscience 26, no. 3 (March 2014): 606–20. http://dx.doi.org/10.1162/jocn_a_00515.

Abstract:
Visual speech influences the perception of heard speech. A classic example of this is the McGurk effect, whereby an auditory /pa/ overlaid onto a visual /ka/ induces the fusion percept of /ta/. Recent behavioral and neuroimaging research has highlighted the importance of both articulatory representations and motor speech regions of the brain, particularly Broca's area, in audiovisual (AV) speech integration. Alternatively, AV speech integration may be accomplished by the sensory system through multisensory integration in the posterior STS. We assessed the claims regarding the involvement of the motor system in AV integration in two experiments: (i) examining the effect of articulatory suppression on the McGurk effect and (ii) determining if motor speech regions show an AV integration profile. The hypothesis regarding experiment (i) is that if the motor system plays a role in McGurk fusion, distracting the motor system through articulatory suppression should result in a reduction of McGurk fusion. The results of experiment (i) showed that articulatory suppression results in no such reduction, suggesting that the motor system is not responsible for the McGurk effect. The hypothesis of experiment (ii) was that if the brain activation to AV speech in motor regions (such as Broca's area) reflects AV integration, the profile of activity should reflect AV integration: AV > AO (auditory only) and AV > VO (visual only). The results of experiment (ii) demonstrate that motor speech regions do not show this integration profile, whereas the posterior STS does. Instead, activity in motor regions is task dependent. The combined results suggest that AV speech integration does not rely on the motor system.
41

Pi, Chen-Huan, Yi-Wei Dai, Kai-Chun Hu, and Stone Cheng. "General Purpose Low-Level Reinforcement Learning Control for Multi-Axis Rotor Aerial Vehicles." Sensors 21, no. 13 (July 2, 2021): 4560. http://dx.doi.org/10.3390/s21134560.

Abstract:
This paper proposes a multipurpose, reinforcement-learning-based, low-level control structure for multirotor unmanned aerial vehicles, constructed using neural networks with model-free training. Other low-level reinforcement learning controllers developed in previous studies have only been applicable to a model-specific and physical-parameter-specific multirotor, and time-consuming training is required when switching to a different vehicle. We use a 6-degree-of-freedom dynamic model combining acceleration-based control from the policy neural network to overcome these problems. The UAV automatically learns the maneuver through an end-to-end neural network from fused states to acceleration commands. State estimation is performed using data from on-board sensors and motion capture: the motion capture system provides spatial position information, and a multisensory fusion framework fuses the measurements from the onboard inertial measurement units to compensate for the time delay and low update frequency of the capture system. Without requiring expert demonstration, the trained control policy, implemented using an improved algorithm, can be applied to various multirotors with the output directly mapped to actuators. The algorithm's ability to control multirotors in hovering and tracking tasks is evaluated. Through simulation and actual experiments, we demonstrate flight control of a quadrotor and a hexrotor using the trained policy. With the same policy, we verify that we can stabilize the quadrotor and hexrotor in the air under random initial states.
42

Di, Peng, Xuan Wang, Tong Chen, and Bin Hu. "Multisensor Data Fusion in Testability Evaluation of Equipment." Mathematical Problems in Engineering 2020 (November 30, 2020): 1–16. http://dx.doi.org/10.1155/2020/7821070.

Abstract:
The multisensor data fusion method has been extensively utilized in many practical applications involving testability evaluation. Owing to the flexibility and effectiveness of Dempster–Shafer evidence theory in modeling and processing uncertain information, the theory has been widely used in many fields of multisensor data fusion. However, it may lead to wrong results when fusing conflicting multisensor data. To deal with this problem, a testability evaluation method for equipment based on multisensor data fusion is proposed. First, a novel multisensor data fusion method, based on an improvement of Dempster–Shafer evidence theory via the Lance distance and the belief entropy, is proposed. Next, based on the analysis of testability multisensor data, such as testability virtual test data, testability test data of replaceable units, and testability growth test data, the corresponding prior distribution conversion schemes of testability multisensor data are formulated according to their different characteristics. Finally, the testability evaluation method of equipment based on the multisensor data fusion method is proposed. Experimental results illustrate that the proposed method is feasible and effective in handling conflicting evidence; moreover, its fusion accuracy is higher and its evaluation results are more reliable than those of other testability evaluation methods, with the basic probability assignment of the true target reaching 94.71%.
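
For orientation, plain Dempster's rule of combination, which the paper refines with Lance-distance and belief-entropy weighting to handle conflicting evidence, can be sketched as follows (masses illustrative):

```python
from itertools import product

# Dempster's rule: multiply masses over intersecting focal elements and
# renormalise by 1 - K, where K is the mass assigned to total conflict.

def dempster_combine(m1, m2):
    fused, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    k = 1.0 - conflict  # normalisation constant
    return {s: w / k for s, w in fused.items()}

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.1, A | B: 0.3}
m2 = {A: 0.5, B: 0.3, A | B: 0.2}
print(dempster_combine(m1, m2))  # mass concentrates on hypothesis A
```
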
43

Zhou, Yuqing, and Wei Xue. "A Multisensor Fusion Method for Tool Condition Monitoring in Milling." Sensors 18, no. 11 (November 10, 2018): 3866. http://dx.doi.org/10.3390/s18113866.

Abstract:
Tool fault diagnosis in numerical control (NC) machines plays a significant role in ensuring manufacturing quality. Tool condition monitoring (TCM) based on multiple sensors can provide more information related to tool condition, but it can also increase the risk that effective information is overwhelmed by redundant information. Thus, how to obtain the most effective feature information from multisensor signals is currently a hot topic. However, most current feature selection methods only take into account the correlation between the feature parameters and the tool state and do not analyze the influence of the feature parameters on prediction accuracy. In this paper, a multisensor global feature extraction method for TCM in the milling process is researched. Several statistical parameters in the time, frequency, and time-frequency (wavelet packet transform) domains of multiple sensors are selected as an alternative parameter set. The monitoring model is executed by a kernel-based extreme learning machine (KELM), and a modified genetic algorithm (GA) is applied to search for the optimal parameter combinations in a two-objective optimization model to achieve the highest prediction precision. The experimental results show that the proposed method outperforms Pearson's correlation coefficient (PCC)-based, minimal redundancy and maximal relevance (mRMR)-based, and principal component analysis (PCA)-based feature selection methods.
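
The monitoring model named in the abstract, a kernel-based extreme learning machine, admits a compact sketch. Hyperparameters and data below are illustrative, and the GA-driven feature selection the paper adds is omitted:

```python
import numpy as np

# Minimal KELM regression: with RBF kernel K and regularisation C,
# training solves (K + I/C) beta = T; prediction is k(x, X) @ beta.

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    def fit(self, X, T, C=100.0, gamma=1.0):
        self.X, self.gamma = X, gamma
        K = rbf(X, X, gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / C, T)
        return self

    def predict(self, Xnew):
        return rbf(Xnew, self.X, self.gamma) @ self.beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (40, 3))  # multisensor feature vectors
T = np.sin(X.sum(axis=1))        # stand-in for a tool-condition target
model = KELM().fit(X, T)
print(model.predict(X[:3]), T[:3])
```
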
44

Wang, Xin, Qi Dan Zhu, and Ye Bin Wu. "A Measurement Fusion Fault-Tolerating PID Control for Time-Delay System with Colored Noise Disturbance." Key Engineering Materials 419-420 (October 2009): 589–92. http://dx.doi.org/10.4028/www.scientific.net/kem.419-420.589.

Abstract:
A design method for filtering, fault-tolerant, fusing PID control is put forward for multisensor time-delay systems with colored noise disturbance. First, the method detects faults and isolates the corresponding data using the weighted square sum of residuals (WSSR) computed from the multisensor measurements; the data that pass the test are then fused at the measurement level, and the fused data are optimally filtered based on modern time series analysis. Finally, the global optimal estimate of the measured data is obtained and fed back to the input end to improve PID control accuracy. A 3-sensor servomotor control example shows the effectiveness of the method.
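
A minimal sketch of the WSSR screening step, comparing the weighted square sum of residuals against a chi-square threshold before data are admitted to the fusion stage; covariance, residual, and threshold values are illustrative:

```python
import numpy as np

# WSSR fault test: for innovation r with covariance S, the statistic
# r' S^{-1} r approximately follows a chi-square law under no-fault.

def wssr_is_faulty(residual, S, threshold):
    r = np.atleast_1d(residual)
    stat = float(r @ np.linalg.solve(S, r))
    return stat > threshold, stat

S = np.array([[0.04, 0.0], [0.0, 0.04]])  # innovation covariance
faulty, stat = wssr_is_faulty(np.array([0.05, -0.03]), S, threshold=5.99)
print(faulty, stat)  # 5.99 ~ chi-square 95% quantile with 2 dof
```
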
45

Wang, Aijun, Hanbin Sang, Jiaying He, Clara Sava-Segal, Xiaoyu Tang, and Ming Zhang. "Effects of Cognitive Expectation on Sound-Induced Flash Illusion." Perception 48, no. 12 (December 2019): 1214–34. http://dx.doi.org/10.1177/0301006619885796.

Abstract:
Sound-induced flash illusion (SIFI) is an auditory-dominated multisensory integration phenomenon in which flashes presented in conjunction with an unequal number of auditory sounds are illusorily perceived as equal in number to the auditory sounds. Previous studies on the factors that impact SIFI have mainly focused on top-down and bottom-up factors. This study aimed to explore the effects of top-down cognitive expectations on the SIFI by manipulating the proportion of trial types. The results showed that the accuracy of judgment was improved and reaction times were shortened when the instructions were consistent with the actual proportion of trial type. When the instructions were not consistent with the actual proportion of trial types, the instructions could still regulate the accuracy and reaction times in judging the fission illusion (i.e., a brief flash accompanied by two auditory stimuli tends to be perceived as two flashes) regardless of the actual proportion of trial types. The results indicated that top-down cognitive expectations could significantly reduce the fission illusion and accelerate the judgment, but the effect was not significant in the fusion illusion (i.e., two brief flashes accompanied by single auditory stimuli tend to be perceived as a single flash) due to the instability of the illusion.
46

Taramelli, Andrea, Sergio Cappucci, Emiliana Valentini, Lorenzo Rossi, and Iolanda Lisi. "Nearshore Sandbar Classification of Sabaudia (Italy) with LiDAR Data: The FHyL Approach." Remote Sensing 12, no. 7 (March 25, 2020): 1053. http://dx.doi.org/10.3390/rs12071053.

Abstract:
An application of the FHyL (field spectral libraries, airborne hyperspectral images and topographic LiDAR) method is presented. It is aimed at mapping and classifying bedforms in submerged beach systems and has been applied to the Sabaudia coast (Tyrrhenian Sea, Central Italy). The FHyL method allows the integration of geomorphological observations into detailed maps through multisensory data fusion of hyperspectral, LiDAR, and in-situ radiometric data. The analysis of the sandy beach classification identifies the variable bedforms by using the LiDAR bathymetric Digital Surface Model (DSM) and the Bathymetric Position Index (BPI) along the coastal stretch. The nearshore sandbar classification and the analysis of bedform parameters (e.g., depth, slope, and convexity/concavity properties) provide excellent results in very shallow water zones. Thanks to the well-established LiDAR and spectroscopic techniques developed under the FHyL approach, remote sensing has the potential to deliver significant quantitative products in coastal areas. The developed method has become the standard for the systematic definition of the operational coastal airborne dataset that must be provided by coastal operational services as input to national downstream services. The methodology is also driving the harmonization of coastal morphological dataset definitions at the national scale, and the results have been used by the authorities to adopt a novel beach management technique.
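
The Bathymetric Position Index used above has a compact definition: each cell's depth minus the mean depth of its neighbourhood, so crests come out positive and troughs negative. A sketch with a square window standing in for the usual annulus (window size and surface are synthetic):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# BPI = depth - local mean depth; the sign and magnitude separate
# sandbar crests from troughs in a bathymetric DSM.

def bpi(dsm, size=9):
    return dsm - uniform_filter(dsm.astype(float), size=size)

y, x = np.mgrid[0:100, 0:100]
seabed = -5.0 + 0.5 * np.sin(x / 6.0)  # synthetic bars on a flat bed
index = bpi(seabed, size=15)
print(index.min(), index.max())  # negative troughs, positive crests
```
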
47

Varshney, P. K. "Multisensor data fusion." Electronics & Communication Engineering Journal 9, no. 6 (December 1, 1997): 245–53. http://dx.doi.org/10.1049/ecej:19970602.

48

Stateczny, Andrzej, and Witold Kazimierski. "Multisensor Tracking of Marine Targets - Decentralized Fusion of Kalman and Neural Filters." International Journal of Electronics and Telecommunications 57, no. 1 (March 1, 2011): 65–70. http://dx.doi.org/10.2478/v10177-011-0009-8.

Abstract:
This paper presents an algorithm of multisensor decentralized data fusion for radar tracking of maritime targets. The fusion is performed in the Kalman filter state space and is done by finding a weighted average of the single state estimates provided by each of the sensors. The sensors use numerical or neural filters for tracking. The article presents two tracking methods, the Kalman Filter and the General Regression Neural Network, together with the fusion algorithm. The structural and measurement models of a moving target are determined. Two approaches to data fusion are stated, centralized and decentralized, and the latter is thoroughly examined. Further, the main problems of the fusion process in complex radar systems are discussed, including coordinate transformation, track association, and measurement synchronization. The results of a numerical experiment simulating the tracking and fusion process are highlighted, and the article ends with a summary of the issues pointed out during the research.
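
A minimal sketch of the decentralized weighted-average step, assuming uncorrelated local estimation errors for brevity (the full algorithm must also handle cross-covariances, track association, and time alignment, as the abstract notes):

```python
import numpy as np

# Covariance-weighted fusion of two local track estimates (e.g. one from
# a Kalman filter, one from a neural filter). Numbers are illustrative.

def fuse_estimates(x1, P1, x2, P2):
    info = np.linalg.inv(P1) + np.linalg.inv(P2)
    P = np.linalg.inv(info)
    x = P @ (np.linalg.solve(P1, x1) + np.linalg.solve(P2, x2))
    return x, P

x1, P1 = np.array([100.0, 5.0]), np.diag([4.0, 0.5])  # Kalman track
x2, P2 = np.array([104.0, 4.6]), np.diag([9.0, 0.8])  # neural track
x, P = fuse_estimates(x1, P1, x2, P2)
print(x)  # fused state, pulled toward the tighter estimate
```
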
49

Chang, Zhi Yong, Dong Hui Chen, Zhi Hong Zhang, Yue Ying Tong, Jin Tong, and Lu Dai. "Design of a Bionic Olfactory, Tactile Integrated System and its Application in Chicken Meat Quality Inspection." Applied Mechanics and Materials 461 (November 2013): 814–21. http://dx.doi.org/10.4028/www.scientific.net/amm.461.814.

Abstract:
This study addresses the practical problem of meat freshness evaluation. Since meat putrefaction is a complex process influenced by many factors, a comprehensive investigation of various indicators is necessary to determine the freshness of meat. This research integrated information from a multisensory system to reduce the uncertainty of the evaluation. According to the odor mechanism model of rotten chicken, six types of sensors were chosen and combined as an array for olfactory experiments. A WDW-20 electronic universal testing machine (UTM) was adopted as the tactile sensing device; as a bionic tactile test part, the UTM head obtains pressure characteristic curves of the meat. According to the odor model and elastic mechanics parameters of the chicken, the mechanical parameters were analyzed under cold-storage conditions, as well as the time-varying results of the fingerprint odor signal and the volatile salt-base nitrogen signal. The meat odor, elastic mechanics, and freshness parameters were then established and integrated into a fusion system, combined with the data obtained through the experimental tests. Eventually, a mathematical model relating meat odor, elastic mechanics parameters, and meat freshness was established. This study provides a theoretical reference for the evaluation of meat freshness and delivers a new thought and method for the design of multiphase bionic intelligent electrical measuring equipment.
50

Richardson, John M., and Kenneth A. Marsh. "Fusion of Multisensor Data." International Journal of Robotics Research 7, no. 6 (December 1988): 78–96. http://dx.doi.org/10.1177/027836498800700607.
