Academic literature on the topic 'Virtual multisensor'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Virtual multisensor.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Virtual multisensor"

1

Emura, Satoru, and Susumu Tachi. "Multisensor Integrated Prediction for Virtual Reality." Presence: Teleoperators and Virtual Environments 7, no. 4 (1998): 410–22. http://dx.doi.org/10.1162/105474698565811.

Abstract:
Unconstrained measurement of human head motion is essential for head-mounted displays (HMDs) to be truly interactive. Polhemus sensors developed for that purpose suffer from critical latency and low sampling rates. In addition, a delay for rendering virtual scenes is inevitable. This paper proposes methods that compensate for the latency and raise the effective sampling rate by integrating Polhemus and gyro sensors. The adoption of a quaternion representation avoids singularities and the complicated boundary processing of rotational motion. The performance of the proposed methods under various rendering delays was evaluated in terms of RMS error and our new correlational technique, which enables us to check the latency and fidelity of a magnetic tracker and to assess the environment in which the magnetic tracker is used. The real-time implementation of our simpler method on personal computers is also reported in detail.
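The abstract gives no implementation details; as a rough sketch of the kind of quaternion-based gyro integration and latency extrapolation it describes (function names, the list-of-samples interface, and the constant-rate extrapolation step are illustrative assumptions, not taken from the paper), one might write:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """Advance orientation quaternion q by angular rate omega (rad/s) over dt seconds."""
    dq = 0.5 * quat_mult(q, np.array([0.0, *omega]))
    q_new = q + dq * dt
    return q_new / np.linalg.norm(q_new)  # renormalize to limit drift

def predict_orientation(q_tracker, gyro_samples, dt, latency):
    """Start from the latest (delayed) magnetic-tracker quaternion, replay the
    high-rate gyro samples received since that update, then extrapolate forward
    by the rendering latency assuming a constant angular rate."""
    q = np.asarray(q_tracker, dtype=float)
    for omega in gyro_samples:          # list of 3-element angular-rate vectors
        q = integrate_gyro(q, omega, dt)
    if gyro_samples:
        q = integrate_gyro(q, gyro_samples[-1], latency)
    return q
```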
2

Wenhao, Dong. "Multisensor Information Fusion-Assisted Intelligent Art Design under Wireless Virtual Reality Environment." Journal of Sensors 2021 (December 31, 2021): 1–10. http://dx.doi.org/10.1155/2021/6119127.

Abstract:
Under the background of intelligent technologies, art designers need to use information technology to assist the design of artistic factors and fully realize the integration of art design and information technology. Multisensor information fusion technology enables a more intuitive, visual, and comprehensive grasp of the objectives to be designed, maximizes the positive effects of art design, achieves its overall optimization, and can also help art designers move beyond traditional monolithic and obsolete design concepts. Based on multisensor information fusion technology in a wireless virtual reality environment and on the principles of signal acquisition and preprocessing, feature extraction, and fusion calculation, we analyze the information processing flow of multisensor information fusion, construct and evaluate a model for intelligent art design, and propose an intelligent art design model based on multisensor information fusion technology. We discuss the realization of the multisensor information fusion algorithm in intelligent art design and finally carry out a simulation experiment and analyze its results, taking the environmental design of a parent-child restaurant as an example. The study results show that using multisensor information fusion in the environmental design of a parent-child restaurant outperforms using a single sensor; at the same time, force sensors yield a better environmental design effect than vibration sensors. Multisensor information fusion technology can automatically analyze observation information from several sources obtained in time sequence under certain criteria and comprehensively process the information to complete the decision-making and estimation tasks required for intelligent art design.
3

Xie, Jiahao, Daozhi Wei, Shucai Huang, and Xiangwei Bu. "A Sensor Deployment Approach Using Improved Virtual Force Algorithm Based on Area Intensity for Multisensor Networks." Mathematical Problems in Engineering 2019 (February 27, 2019): 1–9. http://dx.doi.org/10.1155/2019/8015309.

Abstract:
Sensor deployment is one of the major concerns in multisensor networks. This paper proposes a sensor deployment approach using an improved virtual force algorithm based on area intensity for multisensor networks, in order to realize optimal multisensor deployment and obtain a better coverage effect. Because the sensor detection model operates in real time, the algorithm uses the intensity of the sensor area to select the optimal deployment distance. To verify the effectiveness of this algorithm in improving coverage quality, VFA and PSOA are selected for comparative analysis. The simulation results show that the algorithm achieves global coverage optimization better and improves the performance of the virtual force algorithm. It avoids the unstable coverage caused by heavy computation, slow convergence, and a tendency to fall into local optima, which provides a new idea for multisensor deployment.
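The paper's area-intensity improvement is not reproduced here; purely as an illustration of the basic virtual force idea it builds on, the following minimal sketch (all names, force constants, and the desired-spacing parameter are assumptions) moves nodes that are too close apart and pulls distant nodes together:

```python
import numpy as np

def virtual_force_step(positions, spacing, step=0.5):
    """One iteration of a basic virtual force update over 2D node positions:
    nodes closer than the desired spacing repel each other, while nodes
    farther apart experience a weak attraction."""
    positions = np.asarray(positions, dtype=float)
    forces = np.zeros_like(positions)
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i == j:
                continue
            d_vec = positions[i] - positions[j]
            d = np.linalg.norm(d_vec)
            if d < 1e-9:
                continue
            if d < spacing:                      # too close: repulsive force
                forces[i] += (spacing - d) / d * d_vec
            else:                                # too far: weak attractive force
                forces[i] -= 0.1 * (d - spacing) / d * d_vec
    return positions + step * forces

# Example: iterate a few steps from a random initial deployment
nodes = np.random.rand(10, 2) * 100.0
for _ in range(50):
    nodes = virtual_force_step(nodes, spacing=30.0)
```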
4

Di, Peng, Xuan Wang, Tong Chen, and Bin Hu. "Multisensor Data Fusion in Testability Evaluation of Equipment." Mathematical Problems in Engineering 2020 (November 30, 2020): 1–16. http://dx.doi.org/10.1155/2020/7821070.

Abstract:
The multisensor data fusion method has been extensively utilized in many practical applications involving testability evaluation. Owing to its flexibility and effectiveness in modeling and processing uncertain information, Dempster–Shafer evidence theory has been widely used in various fields of multisensor data fusion. However, it may lead to wrong results when fusing conflicting multisensor data. To deal with this problem, a testability evaluation method for equipment based on multisensor data fusion is proposed. First, a novel multisensor data fusion method, based on an improvement of Dempster–Shafer evidence theory via the Lance distance and the belief entropy, is proposed. Next, based on the analysis of testability multisensor data, such as testability virtual test data, testability test data of replaceable units, and testability growth test data, the corresponding prior distribution conversion schemes for testability multisensor data are formulated according to their different characteristics. Finally, the testability evaluation method for equipment based on the multisensor data fusion method is proposed. The experimental results illustrate that the proposed method is feasible and effective in handling conflicting evidence; moreover, its fusion accuracy is higher and its evaluation result more reliable than those of other testability evaluation methods, with the basic probability assignment of the true target reaching 94.71%.
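The paper's modification via the Lance distance and belief entropy is not shown in the abstract; the sketch below illustrates only the classical Dempster combination rule that such methods start from, with hypothetical sensor masses as an example (the data structures and example values are assumptions, not from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster's rule of combination for two basic probability
    assignments, each a dict mapping a frozenset of hypotheses to its mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                     # mass assigned to conflicting evidence
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Example: two sensors reporting on targets {A, B}
m1 = {frozenset("A"): 0.7, frozenset("B"): 0.1, frozenset("AB"): 0.2}
m2 = {frozenset("A"): 0.6, frozenset("B"): 0.3, frozenset("AB"): 0.1}
print(dempster_combine(m1, m2))
```

Approaches like the one described above typically weight or pre-process the bodies of evidence (here, via the Lance distance and belief entropy) before applying the combination step, precisely to avoid the counterintuitive results that the plain rule produces under high conflict.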
5

Xu, Tao. "Performance of VR Technology in Environmental Art Design Based on Multisensor Information Fusion under Computer Vision." Mobile Information Systems 2022 (April 23, 2022): 1–10. http://dx.doi.org/10.1155/2022/3494535.

Abstract:
Multisensor information fusion technology is a symbol of scientific and technological progress. This paper aims to discuss the performance of virtual reality (VR) technology in environmental art design based on multisensor information fusion technology. The paper first reviews related work and then presents the relevant algorithms and models, such as the multisensor information fusion model based on VR instrument technology, and shows the principle of information fusion and the GPID bus structure. It describes the multisensor information fusion algorithm and analyzes DS evidence theory. In this evidence-based decision theory, the multisensor information fusion process is the calculation of the belief and/or plausibility functions, generally computing the posterior distribution information. In addition to presenting the algorithm, the paper also shows the data flow of the multisensor information fusion system through figures. It then explains the design and construction of a garden art environment based on an active panoramic stereo vision sensor, shows the relationship among the four coordinate systems, and demonstrates the interactive experience of indoor and outdoor environmental art design. Finally, estimation simulation experiments based on the EKF are conducted, and the results show that the data fused with the extended Kalman filter algorithm are closer to the actual target motion data, with an accuracy above 92%.
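The abstract reports EKF-based fusion but gives no equations; as a minimal sketch of a generic extended Kalman filter predict/update cycle (all symbols, function arguments, and the use of explicit Jacobians are assumptions for illustration, not details from the paper), one step might look like:

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic extended Kalman filter cycle: propagate the state with the
    nonlinear model f, then correct it with measurement z using the Jacobians
    F (of f) and H (of the measurement model h)."""
    # Predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```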
6

Gu, Yingjie, and Ye Zhou. "Application of Virtual Reality Based on Multisensor Data Fusion in Theater Space and Installation Art." Mobile Information Systems 2022 (August 28, 2022): 1–8. http://dx.doi.org/10.1155/2022/4101910.

Abstract:
The application of Virtual Reality (VR) in theater space and installation art is a general trend, and it can already be seen in large stage plays and installation art exhibitions. However, because current VR is not yet mature enough, it is difficult to fully satisfy the exhibition requirements of large theaters, so this paper aims to change this situation by using VR based on multisensor data fusion. A multisensor data fusion algorithm is designed that improves the data transmission efficiency and latency of the VR system, so that VR can offer a better viewing experience in theater space and installation art. Through a questionnaire survey and interviews, the actual impressions of VR audiences in theater space and installation art are investigated. The experimental analysis shows that the proposed algorithm has high reliability and can improve the experience of using VR. The interview and survey results show that the application of VR in theater space is mainly manifested in three aspects: multiangle and all-round viewing, multiroute viewing, and human-machine interaction in art galleries. The application of VR in installation art is mainly reflected in the perception of installation materials.
7

Shen, Dongli. "Application of GIS and Multisensor Technology in Green Urban Garden Landscape Design." Journal of Sensors 2023 (March 27, 2023): 1–7. http://dx.doi.org/10.1155/2023/9730980.

Abstract:
To solve the problem of the low definition of the original 3D virtual imaging system, the author proposes a method for applying GIS and multisensor technology in green urban garden landscape design. A hardware design framework is formulated: an image collector is selected for image acquisition according to the framework, the image is filtered and denoised by computer, the processed image is output through laser refraction, and a photoreceptor and a transparent transmission module are used for virtual imaging. A software design framework is then formulated: the collected image is denoised through convolutional neural network computation, the feature points of the original image are obtained using pixel grayscale calculation, and the virtual imaging output is configured in C, completing the software design. Combining the hardware and software designs completes the design of the 3D virtual imaging system for garden landscape design. A comparative experiment against the original system shows that the designed system offers a significant improvement in clarity: the original system's clarity is 82%–85%, while the image clarity of the proposed system is 85%–90%. In conclusion, the proposed method is more effective.
8

Lee, Wonjun, Hyung-Jun Lim, and Mun Sang Kim. "Development for Multisensor and Virtual Simulator–Based Automatic Broadcast Shooting System." International Journal of Digital Multimedia Broadcasting 2022 (July 16, 2022): 1–13. http://dx.doi.org/10.1155/2022/2724804.

Abstract:
To overcome the limitations of complexity and repeatability in existing broadcast filming systems, a new broadcast filming system was developed. In particular, for Korean music broadcasts, the shooting sequence is stage and lighting installation, rehearsal, lighting effect production, and the main shoot; however, this sequence is complex and involves multiple people. We developed an automatic shooting system that can produce the same effect with a minimum number of people, as the era of contactless ('untact') production has emerged because of COVID-19. The developed system comprises a simulator. After building a stage in the simulator, dancers' movements during rehearsal are acquired using UWB and two-dimensional (2D) LiDAR sensors. By inserting the acquired movement data into the developed stage, a camera effect is produced using a virtual camera installed in the simulator. The camera effect comprises pan, tilt, and zoom, and the camera director creates lighting effects while evaluating the movements of virtual dancers on the virtual stage. In this study, four cameras were used: three for pan, tilt, and zoom control, and a fourth as a fixed camera for a full shot. Video shooting is performed according to the pan, tilt, and zoom values of the three cameras and the switcher data. The video of dancers recorded during rehearsal and the lighting produced by the lighting director via the existing broadcast filming process are overlapped in the developed simulator to assess lighting effects. The lighting director then assesses the overlapped video and corrects parts that need to be corrected or emphasized. This method produced lighting effects better optimized for the music and choreography than existing lighting effect production methods. Finally, the performance and lighting effects of the developed simulator and system were confirmed by shooting a K-pop performance using the selected cameras' pan, tilt, and zoom control plan, the switcher sequence, and the lighting effects.
9

Oue, Mariko, Aleksandra Tatarevic, Pavlos Kollias, Dié Wang, Kwangmin Yu, and Andrew M. Vogelmann. "The Cloud-resolving model Radar SIMulator (CR-SIM) Version 3.3: description and applications of a virtual observatory." Geoscientific Model Development 13, no. 4 (2020): 1975–98. http://dx.doi.org/10.5194/gmd-13-1975-2020.

Abstract:
Ground-based observatories use multisensor observations to characterize cloud and precipitation properties. One of the challenges is how to design strategies to best use these observations to understand these properties and evaluate weather and climate models. This paper introduces the Cloud-resolving model Radar SIMulator (CR-SIM), which uses output from high-resolution cloud-resolving models (CRMs) to emulate multiwavelength, zenith-pointing, and scanning radar observables and multisensor (radar and lidar) products. CR-SIM allows for direct comparison between an atmospheric model simulation and remote-sensing products using a forward-modeling framework consistent with the microphysical assumptions used in the atmospheric model. CR-SIM has the flexibility to easily incorporate additional microphysical modules, such as microphysical schemes and scattering calculations, and expand the applications to simulate multisensor retrieval products. In this paper, we present several applications of CR-SIM for evaluating the representativeness of cloud microphysics and dynamics in a CRM, quantifying uncertainties in radar–lidar integrated cloud products and multi-Doppler wind retrievals, and optimizing radar sampling strategy using observing system simulation experiments. These applications demonstrate CR-SIM as a virtual observatory operator on high-resolution model output for a consistent comparison between model results and observations to aid interpretation of the differences and improve understanding of the representativeness errors due to the sampling limitations of the ground-based measurements. CR-SIM is licensed under the GNU GPL, and both the software and the user guide are publicly available to the scientific community.
10

Bidaut, Luc. "Multisensor Imaging and Virtual Simulation for Assessment, Diagnosis, Therapy Planning, and Navigation." Simulation & Gaming 32, no. 3 (2001): 370–90. http://dx.doi.org/10.1177/104687810103200307.

