Journal articles on the topic "Depth of field fusion"

Below are the 50 most relevant journal articles for studies on the subject "Depth of field fusion". Abstracts are reproduced where present in the publication metadata.

1

汪, 嘉欣. "A Fast Multi-Exposure Fusion Algorithm for Ultra Depth of Field Fusion." Modeling and Simulation 13, no. 03 (2024): 3797–806. http://dx.doi.org/10.12677/mos.2024.133346.

2

Chen, Zhaoyu, Qiankun Liu, Ke Xu, and Xiaoyang Liu. "Weighted Fusion Method of Marine Gravity Field Model Based on Water Depth Segmentation." Remote Sensing 16, no. 21 (2024): 4107. http://dx.doi.org/10.3390/rs16214107.

Abstract:
Among the marine gravity field models derived from satellite altimetry, the Scripps Institution of Oceanography (SIO) series and Denmark Technical University (DTU) series models are the most representative and are often used to integrate global gravity field models, which were inverted by the deflection of vertical method and sea surface height method, respectively. The fusion method based on the offshore distance used in the EGM2008 model is just model stitching, which cannot realize the true fusion of the two types of marine gravity field models. In the paper, a new fusion method based on wa
3

Wang, Shuzhen, Haili Zhao, and Wenbo Jing. "Fast all-focus image reconstruction method based on light field imaging." ITM Web of Conferences 45 (2022): 01030. http://dx.doi.org/10.1051/itmconf/20224501030.

Abstract:
To achieve high-quality imaging of all focal planes with large depth of field information, a fast all-focus image reconstruction technique based on light field imaging is proposed: light field imaging is used to collect field-of-view information, light field reconstruction yields a multi-focus image source set, and an improved NSML image fusion method fuses these images to quickly obtain an all-focus image with a large depth of field. Experiments have proved that this method greatly reduces the time consumed in the image fusion process by simplifying the calculation proce
4

Chen, Jiaxin, Shuo Zhang, and Youfang Lin. "Attention-based Multi-Level Fusion Network for Light Field Depth Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (2021): 1009–17. http://dx.doi.org/10.1609/aaai.v35i2.16185.

Abstract:
Depth estimation from Light Field (LF) images is a crucial basis for LF related applications. Since multiple views with abundant information are available, how to effectively fuse features of these views is a key point for accurate LF depth estimation. In this paper, we propose a novel attention-based multi-level fusion network. Combined with the four-branch structure, we design intra-branch fusion strategy and inter-branch fusion strategy to hierarchically fuse effective features from different views. By introducing the attention mechanism, features of views with less occlusions and richer te
5

Piao, Yongri, Miao Zhang, Xiaohui Wang, and Peihua Li. "Extended depth of field integral imaging using multi-focus fusion." Optics Communications 411 (March 2018): 8–14. http://dx.doi.org/10.1016/j.optcom.2017.10.081.

6

Fu, Bangshao, Xunbo Yu, Xin Gao, et al. "Depth-of-field enhancement in light field display based on fusion of voxel information on the depth plane." Optics and Lasers in Engineering 183 (December 2024): 108543. http://dx.doi.org/10.1016/j.optlaseng.2024.108543.

7

Oucherif, Sabrine Djedjiga, Mohamad Motasem Nawaf, Jean-Marc Boï, et al. "Enhancing Facial Expression Recognition through Light Field Cameras." Sensors 24, no. 17 (2024): 5724. http://dx.doi.org/10.3390/s24175724.

Abstract:
In this paper, we study facial expression recognition (FER) using three modalities obtained from a light field camera: sub-aperture (SA), depth map, and all-in-focus (AiF) images. Our objective is to construct a more comprehensive and effective FER system by investigating multimodal fusion strategies. For this purpose, we employ EfficientNetV2-S, pre-trained on AffectNet, as our primary convolutional neural network. This model, combined with a BiGRU, is used to process SA images. We evaluate various fusion techniques at both decision and feature levels to assess their effectiveness in enhancin
8

Pu, Can, Runzi Song, Radim Tylecek, Nanbo Li, and Robert Fisher. "SDF-MAN: Semi-Supervised Disparity Fusion with Multi-Scale Adversarial Networks." Remote Sensing 11, no. 5 (2019): 487. http://dx.doi.org/10.3390/rs11050487.

Abstract:
Refining raw disparity maps from different algorithms to exploit their complementary advantages is still challenging. Uncertainty estimation and complex disparity relationships among pixels limit the accuracy and robustness of existing methods and there is no standard method for fusion of different kinds of depth data. In this paper, we introduce a new method to fuse disparity maps from different sources, while incorporating supplementary information (intensity, gradient, etc.) into a refiner network to better refine raw disparity inputs. A discriminator network classifies disparities at diffe
9

Gao, Yuxuan, Haiwei Zhang, Zhihong Chen, Lifang Xue, Yinping Miao, and Jiamin Fu. "Enhanced light field depth estimation through occlusion refinement and feature fusion." Optics and Lasers in Engineering 184 (January 2025): 108655. http://dx.doi.org/10.1016/j.optlaseng.2024.108655.

10

De, Ishita, Bhabatosh Chanda, and Buddhajyoti Chattopadhyay. "Enhancing effective depth-of-field by image fusion using mathematical morphology." Image and Vision Computing 24, no. 12 (2006): 1278–87. http://dx.doi.org/10.1016/j.imavis.2006.04.005.

11

Bouzos, Odysseas, Ioannis Andreadis, and Nikolaos Mitianoudis. "Conditional Random Field-Guided Multi-Focus Image Fusion." Journal of Imaging 8, no. 9 (2022): 240. http://dx.doi.org/10.3390/jimaging8090240.

Abstract:
Multi-Focus image fusion is of great importance in order to cope with the limited Depth-of-Field of optical lenses. Since input images contain noise, multi-focus image fusion methods that support denoising are important. Transform-domain methods have been applied to image fusion, however, they are likely to produce artifacts. In order to cope with these issues, we introduce the Conditional Random Field (CRF) CRF-Guided fusion method. A novel Edge Aware Centering method is proposed and employed to extract the low and high frequencies of the input images. The Independent Component Analysis—ICA t
12

Jie, Yuchan, Xiaosong Li, Mingyi Wang, and Haishu Tan. "Multi-Focus Image Fusion for Full-Field Optical Angiography." Entropy 25, no. 6 (2023): 951. http://dx.doi.org/10.3390/e25060951.

Abstract:
Full-field optical angiography (FFOA) has considerable potential for clinical applications in the prevention and diagnosis of various diseases. However, owing to the limited depth of focus attainable using optical lenses, only information about blood flow in the plane within the depth of field can be acquired using existing FFOA imaging techniques, resulting in partially unclear images. To produce fully focused FFOA images, an FFOA image fusion method based on the nonsubsampled contourlet transform and contrast spatial frequency is proposed. Firstly, an imaging system is constructed, and the F
13

Pei, Xiangyu, Shujun Xing, Xunbo Yu, et al. "Three-dimensional light field fusion display system and coding scheme for extending depth of field." Optics and Lasers in Engineering 169 (October 2023): 107716. http://dx.doi.org/10.1016/j.optlaseng.2023.107716.

14

Wang, Hui-Feng, Gui-ping Wang, Xiao-Yan Wang, Chi Ruan, and Shi-qin Chen. "A kind of infrared expand depth of field vision sensor in low-visibility road condition for safety-driving." Sensor Review 36, no. 1 (2016): 7–13. http://dx.doi.org/10.1108/sr-04-2015-0055.

Abstract:
Purpose – This study aims to consider active vision in low-visibility environments to reveal the factors of optical properties which affect visibility and to explore a method of obtaining different depths of fields by multimode imaging. Bad weather affects the driver’s visual range tremendously and thus has a serious impact on transport safety. Design/methodology/approach – A new mechanism and a core algorithm for obtaining an excellent large field-depth image which can be used to aid safe driving is designed and implemented. In this mechanism, atmospheric extinction principle and field expansi
15

Tsai, Yu-Hsiang, Yung-Jhe Yan, Meng-Hsin Hsiao, Tzu-Yi Yu, and Mang Ou-Yang. "Real-Time Information Fusion System Implementation Based on ARM-Based FPGA." Applied Sciences 13, no. 14 (2023): 8497. http://dx.doi.org/10.3390/app13148497.

Abstract:
In this study, an information fusion system displayed fusion information on a transparent display by considering the relationships among the display, background exhibit, and user’s gaze direction. We used an ARM-based field-programmable gate array (FPGA) to perform virtual–real fusion of this system as well as evaluated the virtual–real fusion execution speed. The ARM-based FPGA used Intel® RealsenseTM D435i depth cameras to capture depth and color images of an observer and exhibit. The image data was received by the ARM side and fed to the FPGA side for real-time object detection. The FPGA ac
16

Lv, Chen, Min Xiao, Bingbing Zhang, Pengbo Chen, and Xiaomin Liu. "P‐2.29: Depth Estimation of Light Field Image Based on Compressed Sensing and Multi‐clue Fusion." SID Symposium Digest of Technical Papers 54, S1 (2023): 594–97. http://dx.doi.org/10.1002/sdtp.16363.

Abstract:
In the current research field of light field depth estimation, the occlusion of complex scenes and a large amount of computational data are the problems that every researcher must face. For complex occlusion scenes, this paper proposes a depth estimation method based on the fusion of adaptive defocus cues and constrained angular entropy cues, which is more robust to occlusion. At the same time, the compressed sensing theory is used to compress and reconstruct the light field image to solve the problem of a large amount of data in the process of light field image acquisition, transmission, and
17

Xiao, Min, Chen Lv, and Xiaomin Liu. "FPattNet: A Multi-Scale Feature Fusion Network with Occlusion Awareness for Depth Estimation of Light Field Images." Sensors 23, no. 17 (2023): 7480. http://dx.doi.org/10.3390/s23177480.

Abstract:
A light field camera can capture light information from various directions within a scene, allowing for the reconstruction of the scene. The light field image inherently contains the depth information of the scene, and depth estimations of light field images have become a popular research topic. This paper proposes a depth estimation network of light field images with occlusion awareness. Since light field images contain many views from different viewpoints, identifying the combinations that contribute the most to the depth estimation of the center view is critical to improving the depth estim
18

Xiao, Yuhao, Guijin Wang, Xiaowei Hu, Chenbo Shi, Long Meng, and Huazhong Yang. "Guided, Fusion-Based, Large Depth-of-field 3D Imaging Using a Focal Stack." Sensors 19, no. 22 (2019): 4845. http://dx.doi.org/10.3390/s19224845.

Abstract:
Three dimensional (3D) imaging technology has been widely used for many applications, such as human–computer interactions, making industrial measurements, and dealing with cultural relics. However, existing active methods often require both large apertures of projector and camera to maximize light throughput, resulting in a shallow working volume in which projector and camera are simultaneously in focus. In this paper, we propose a novel method to extend the working range of the structured light 3D imaging system based on the focal stack. Specifically in the case of large depth variation scene
19

Xiao, Bo, Stuart Perry, Xiujing Gao, and Hongwu Huang. "Efficiency–Accuracy Trade-Off in Light Field Estimation with Cost Volume Construction and Aggregation." Sensors 24, no. 11 (2024): 3583. http://dx.doi.org/10.3390/s24113583.

Abstract:
The rich spatial and angular information in light field images enables accurate depth estimation, which is a crucial aspect of environmental perception. However, the abundance of light field information also leads to high computational costs and memory pressure. Typically, selectively pruning some light field information can significantly improve computational efficiency but at the expense of reduced depth estimation accuracy in the pruned model, especially in low-texture regions and occluded areas where angular diversity is reduced. In this study, we propose a lightweight disparity estimation
20

Cheng, Hao, Kaijie Wu, Chaochen Gu, and Dingrui Ma. "Multi-Focus Images Fusion for Fluorescence Imaging Based on Local Maximum Luminosity and Intensity Variance." Sensors 24, no. 15 (2024): 4909. http://dx.doi.org/10.3390/s24154909.

Abstract:
Due to the limitations on the depth of field of high-resolution fluorescence microscope, it is difficult to obtain an image with all objects in focus. The existing image fusion methods suffer from blocking effects or out-of-focus fluorescence. The proposed multi-focus image fusion method based on local maximum luminosity, intensity variance and the information filling method can reconstruct the all-in-focus image. Moreover, the depth of tissue’s surface can be estimated to reconstruct the 3D surface model.
21

Zhang, Yuquan, Guangan Jiang, Mingrui Li, and Guosheng Feng. "UE-SLAM: Monocular Neural Radiance Field SLAM with Semantic Mapping Capabilities." Symmetry 17, no. 4 (2025): 508. https://doi.org/10.3390/sym17040508.

Abstract:
Neural Radiance Fields (NeRF) have transformed 3D reconstruction by enabling high-fidelity scene generation from sparse views. However, existing neural SLAM systems face challenges such as limited scene understanding and heavy reliance on depth sensors. We propose UE-SLAM, a real-time monocular SLAM system integrating semantic segmentation, depth fusion, and robust tracking modules. By leveraging the inherent symmetry between semantic segmentation and depth estimation, UE-SLAM utilizes DINOv2 for instance segmentation and combines monocular depth estimation, radiance field-rendered depth, and
22

Ren, Nai Fei, Lei Jia, and Dian Wang. "Numerical Simulation Analysis on the Temperature Field in Indirect Selective Laser Sintering of 316L." Advanced Materials Research 711 (June 2013): 209–13. http://dx.doi.org/10.4028/www.scientific.net/amr.711.209.

Abstract:
Using APDL programming language, an appropriate finite element model is created and the moving cyclic loads of Gauss heat source are realized. From the detailed qualitative analysis of the results, the variety laws of temperature field in indirect SLS are obtained. Plot results at different moments, temperature cyclic curves of key points and the curves of depth of fusion and width of fusion on the set paths, are of important guiding significance for subsequent physical experiments.
23

Chen, Junying, Boxuan Wang, Xiuyu Chen, et al. "A Micro-Topography Measurement and Compensation Method for the Key Component Surface Based on White-Light Interferometry." Sensors 23, no. 19 (2023): 8307. http://dx.doi.org/10.3390/s23198307.

Abstract:
The grinding grooves of material removal machining and the residues of a machining tool on the key component surface cause surface stress concentration. Thus, it is critical to carry out precise measurements on the key component surface to evaluate the stress concentration. Based on white-light interferometry (WLI), we studied the measurement distortion caused by the reflected light from the steep side of the grinding groove being unable to return to the optical system for imaging. A threshold value was set to eliminate the distorted measurement points, and the cubic spline algorithm was used
24

Liu, Hang, Hengyu Li, Jun Luo, Shaorong Xie, and Yu Sun. "Construction of All-in-Focus Images Assisted by Depth Sensing." Sensors 19, no. 6 (2019): 1409. http://dx.doi.org/10.3390/s19061409.

Abstract:
Multi-focus image fusion is a technique for obtaining an all-in-focus image in which all objects are in focus to extend the limited depth of field (DoF) of an imaging system. Different from traditional RGB-based methods, this paper presents a new multi-focus image fusion method assisted by depth sensing. In this work, a depth sensor is used together with a colour camera to capture images of a scene. A graph-based segmentation algorithm is used to segment the depth map from the depth sensor, and the segmented regions are used to guide a focus algorithm to locate in-focus image blocks from among
25

Zhou, Youyong, Lingjie Yu, Chao Zhi, et al. "A Survey of Multi-Focus Image Fusion Methods." Applied Sciences 12, no. 12 (2022): 6281. http://dx.doi.org/10.3390/app12126281.

Abstract:
As an important branch in the field of image fusion, the multi-focus image fusion technique can effectively solve the problem of optical lens depth of field, making two or more partially focused images fuse into a fully focused image. In this paper, methods based on boundary segmentation are put forward as a group of image fusion methods. Thus, a novel classification method of image fusion algorithms is proposed: transform domain methods, boundary segmentation methods, deep learning methods, and combination fusion methods. In addition, the subjective and objective evaluation standards are l
26

Zhou, Junyu. "Comparative Study on BEV Vision and LiDAR Point Cloud Data Fusion Methods." Transactions on Computer Science and Intelligent Systems Research 2 (December 21, 2023): 14–18. http://dx.doi.org/10.62051/ww28m534.

Abstract:
With the gradual maturity of autonomous driving technology, efficient fusion and processing of multimodal sensor data has become an important research direction. This study mainly explores the strategy of integrating BEV (Bird's Eye View) vision with LiDAR point cloud data. We evaluated the performance and applicability of each of the three main data fusion methods through in-depth comparison: early fusion, mid-term fusion, and late fusion. First of all, we summarize the working principle and data characteristics of BEV vision and LiDAR, and emphasize their key roles in auto drive system. Then
27

Kim, Yeon-Soo, Taek-Jin Kim, Yong-Joo Kim, Sang-Dae Lee, Seong-Un Park, and Wan-Soo Kim. "Development of a Real-Time Tillage Depth Measurement System for Agricultural Tractors: Application to the Effect Analysis of Tillage Depth on Draft Force during Plow Tillage." Sensors 20, no. 3 (2020): 912. http://dx.doi.org/10.3390/s20030912.

Abstract:
The objectives of this study were to develop a real-time tillage depth measurement system for agricultural tractor performance analysis and then to validate these configured systems through soil non-penetration tests and field experiment during plow tillage. The real-time tillage depth measurement system was developed by using a sensor fusion method, consisting of a linear potentiometer, inclinometer, and optical distance sensor to measure the vertical penetration depth of the attached implement. In addition, a draft force measurement system was developed using six-component load cells, and an
28

Zhuang, Chuanqing, Zhengda Lu, Yiqun Wang, Jun Xiao, and Ying Wang. "ACDNet: Adaptively Combined Dilated Convolution for Monocular Panorama Depth Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (2022): 3653–61. http://dx.doi.org/10.1609/aaai.v36i3.20278.

Abstract:
Depth estimation is a crucial step for 3D reconstruction with panorama images in recent years. Panorama images maintain the complete spatial information but introduce distortion with equirectangular projection. In this paper, we propose an ACDNet based on the adaptively combined dilated convolution to predict the dense depth map for a monocular panoramic image. Specifically, we combine the convolution kernels with different dilations to extend the receptive field in the equirectangular projection. Meanwhile, we introduce an adaptive channel-wise fusion module to summarize the feature maps and
29

Hansen, Jim, and Dennis D. Harwig. "Impact of Electrode Rotation on Aluminum GMAW Bead Shape." Welding Journal 102, no. 6 (2023): 125–36. http://dx.doi.org/10.29391/2023.102.010.

Abstract:
Aluminum gas metal arc welding (GMAW) uses inert shielding gas to minimize weld pool oxidation and reduce susceptibility to porosity and incomplete fusion defects. For aluminum shipbuilding, naval welding requirements highly recommend the use of helium-argon mixtures or pure helium shielding gas to provide a broader heat field and better weld toe fusion. Pure argon shielding gas can be used but has been susceptible to incomplete fusion and porosity defects, where argon’s lower thermal conductivity promotes a narrower arc heat field and shallow weld fusion depth. Using helium is a concern becau
30

Silva, César de Oliveira Ferreira, Rodrigo Lilla Manzione, Epitácio Pedro da Silva Neto, Ulisses Alencar Bezerra, and John Elton Cunha. "Fusion of Remotely Sensed Data with Monitoring Well Measurements for Groundwater Level Management." AgriEngineering 7, no. 1 (2025): 14. https://doi.org/10.3390/agriengineering7010014.

Abstract:
In the realm of hydrological engineering, integrating extensive geospatial raster data from remote sensing (Big Data) with sparse field measurements offers a promising approach to improve prediction accuracy in groundwater studies. In this study, we integrated multisource data by applying the LMC to model the spatial relationships of variables and then utilized block support regularization with collocated block cokriging (CBCK) to enhance our predictions. A critical engineering challenge addressed in this study is support homogenization, where we adjusted punctual variances to block variances
31

Pagliari, D., F. Menna, R. Roncella, F. Remondino, and L. Pinto. "Kinect Fusion improvement using depth camera calibration." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 479–85. http://dx.doi.org/10.5194/isprsarchives-xl-5-479-2014.

Abstract:
Scene's 3D modelling, gesture recognition and motion tracking are fields in rapid and continuous development which have caused growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for Kinect i
32

Li, Yunqiang, Shuowen Huang, Ying Chen, et al. "RGBTSDF: An Efficient and Simple Method for Color Truncated Signed Distance Field (TSDF) Volume Fusion Based on RGB-D Images." Remote Sensing 16, no. 17 (2024): 3188. http://dx.doi.org/10.3390/rs16173188.

Abstract:
RGB-D image mapping is an important tool in applications such as robotics, 3D reconstruction, autonomous navigation, and augmented reality (AR). Efficient and reliable mapping methods can improve the accuracy, real-time performance, and flexibility of sensors in various fields. However, the currently widely used Truncated Signed Distance Field (TSDF) still suffers from the problem of inefficient memory management, making it difficult to directly use it for large-scale 3D reconstruction. In order to address this problem, this paper proposes a highly efficient and accurate TSDF voxel fusion meth
33

Doulamis, Anastasios, Nikolaos Doulamis, Klimis Ntalianis, and Stefanos Kollias. "Efficient Unsupervised Content-Based Segmentation in Stereoscopic Video Sequences." International Journal on Artificial Intelligence Tools 9, no. 2 (2000): 277–303. http://dx.doi.org/10.1142/s0218213000000197.

Abstract:
This paper presents an efficient technique for unsupervised content-based segmentation in stereoscopic video sequences by appropriately combined different content descriptors in a hierarchical framework. Three main modules are involved in the proposed scheme; extraction of reliable depth information, image partition into color and depth regions and a constrained fusion algorithm of color segments using information derived from the depth map. In the first module, each stereo pair is analyzed and the disparity field and depth map are estimated. Occlusion detection and compensation are also appli
34

Fan, Xuqing, Sai Deng, Zhengxing Wu, Junfeng Fan, and Chao Zhou. "Spatial Domain Image Fusion with Particle Swarm Optimization and Lightweight AlexNet for Robotic Fish Sensor Fault Diagnosis." Biomimetics 8, no. 6 (2023): 489. http://dx.doi.org/10.3390/biomimetics8060489.

Abstract:
Safety and reliability are vital for robotic fish, which can be improved through fault diagnosis. In this study, a method for diagnosing sensor faults is proposed, which involves using Gramian angular field fusion with particle swarm optimization and lightweight AlexNet. Initially, one-dimensional time series sensor signals are converted into two-dimensional images using the Gramian angular field method with sliding window augmentation. Next, weighted fusion methods are employed to combine Gramian angular summation field images and Gramian angular difference field images, allowing for the full
35

Jin, Chul Kyu, Jae Hyun Kim, and Bong-Seop Lee. "Powder Bed Fusion 3D Printing and Performance of Stainless-Steel Bipolar Plate with Rectangular Microchannels and Microribs." Energies 15, no. 22 (2022): 8463. http://dx.doi.org/10.3390/en15228463.

Abstract:
For the high performance of a fuel cell where a bipolar plate (BP) is applied, rectangular channel, microchannel width, micro-rib, enough channel quantity, adequate channel depth, and innovative flow field design should be realized from a configuration standpoint. In this study, a stainless-steel BP with a microchannel flow field is fabricated with a powder bed fusion (PBF) 3D printer to improve fuel cell performance. A BP with a triple serpentine flow field, rectangular channel, 300 μm channel width, 300 μm rib, and 500 μm channel depth is designed. The print is completed perfectly until the
36

Han, Qihui, and Cheolkon Jung. "Guided filtering based data fusion for light field depth estimation with L0 gradient minimization." Journal of Visual Communication and Image Representation 55 (August 2018): 449–56. http://dx.doi.org/10.1016/j.jvcir.2018.06.020.

37

Yang, Ning, Kangpeng Chang, Jian Tang, et al. "Detection method of rice blast based on 4D light field refocusing depth information fusion." Computers and Electronics in Agriculture 205 (February 2023): 107614. http://dx.doi.org/10.1016/j.compag.2023.107614.

38

Liu, Yihui, Yufei Xu, Ziyang Zhang, Lei Wan, Jiyong Li, and Yinghao Zhang. "Unsupervised Learning-Based Optical–Acoustic Fusion Interest Point Detector for AUV Near-Field Exploration of Hydrothermal Areas." Journal of Marine Science and Engineering 12, no. 8 (2024): 1406. http://dx.doi.org/10.3390/jmse12081406.

Abstract:
The simultaneous localization and mapping (SLAM) technique provides long-term near-seafloor navigation for autonomous underwater vehicles (AUVs). However, the stability of the interest point detector (IPD) remains challenging in the seafloor environment. This paper proposes an optical–acoustic fusion interest point detector (OAF-IPD) using a monocular camera and forward-looking sonar. Unlike the artificial feature detectors most underwater IPDs adopt, a deep neural network model based on unsupervised interest point detector (UnsuperPoint) was built to reach stronger environmental adaption. Fir
39

Tucci, Michelle A., Robert A. McGuire, Gerri A. Wilson, David P. Gordy, and Hamed A. Benghuzzi. "Treatment Depth Effects of Combined Magnetic Field Technology using a Commercial Bone Growth Stimulator." Journal of the Mississippi Academy of Sciences 66, no. 1 (2021): 28–34. http://dx.doi.org/10.31753//jmas.66_1028.

Abstract:
Lumbar spinal fusion is one of the more common spinal surgeries, and its use is on the rise. If the bone fails to fuse properly, then a pseudarthrosis or “false joint” develops and results in pain, instability, and disability. Since 1974, three types of electrical stimulation technologies have been approved for clinical use to enhance spinal fusions. One such technology is inductive coupling, which includes combined magnetic fields (CMFs). The purpose of this study was to evaluate the effects of a CMF device known as the Donjoy (SpinaLogic®) on MG-63 (ATCC® CRL1427TM) human osteosarcoma cells a
40

Chi, Xiaoni, Qinyuan Meng, Qiuxuan Wu, et al. "A Laser Data Compensation Algorithm Based on Indoor Depth Map Enhancement." Electronics 12, no. 12 (2023): 2716. http://dx.doi.org/10.3390/electronics12122716.

Full text of the source
Abstract:
The field of mobile robotics has seen significant growth in the use of indoor laser mapping technology, but most two-dimensional Light Detection and Ranging (2D LiDAR) sensors can only scan a plane at a fixed height, and it is difficult to obtain information about objects below that height, so inaccurate environmental mapping and collisions during navigation easily occur. Although three-dimensional (3D) LiDAR is also gradually being applied, it is less used in indoor mapping because it is more expensive and requires a large amount of memory and computation. Therefore, a laser data compensati
41

Wang, Hantao, Ente Guo, Feng Chen, and Pingping Chen. "Depth Completion in Autonomous Driving: Adaptive Spatial Feature Fusion and Semi-Quantitative Visualization." Applied Sciences 13, no. 17 (2023): 9804. http://dx.doi.org/10.3390/app13179804.

Full text of the source
Abstract:
The safety of autonomous driving is closely linked to accurate depth perception. With the continuous development of autonomous driving, depth completion has become one of the crucial methods in this field. However, current depth completion methods have major shortcomings in small objects. To solve this problem, this paper proposes an end-to-end architecture with adaptive spatial feature fusion by encoder–decoder (ASFF-ED) module for depth completion. The architecture is built on the basis of the network architecture proposed in this paper, and is able to extract depth information adaptively wi
42

Wang, Shuling, Fengze Jiang, and Xiaojin Gong. "A Transformer-Based Image-Guided Depth-Completion Model with Dual-Attention Fusion Module." Sensors 24, no. 19 (2024): 6270. http://dx.doi.org/10.3390/s24196270.

Full text of the source
Abstract:
Depth information is crucial for perceiving three-dimensional scenes. However, depth maps captured directly by depth sensors are often incomplete and noisy. Our objective in the depth-completion task is to generate dense and accurate depth maps from sparse depth inputs by fusing guidance information from corresponding color images obtained from camera sensors. To address these challenges, we introduce transformer models, which have shown great promise in the field of vision, into the task of image-guided depth completion. By leveraging the self-attention mechanism, we propose a novel network a
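The dual-attention fusion idea summarized in this abstract can be caricatured as a per-pixel softmax gate over two feature maps. The sketch below is a toy illustration, not the paper's transformer module; the function name and score maps are invented for the example.

```python
import numpy as np

def softmax_gate_fuse(feat_a, feat_b, score_a, score_b):
    """Blend two feature maps using per-pixel weights obtained from a
    softmax over their (precomputed) relevance score maps."""
    m = np.maximum(score_a, score_b)           # subtract max for numerical stability
    ea, eb = np.exp(score_a - m), np.exp(score_b - m)
    w_a = ea / (ea + eb)                       # per-pixel weight of the first branch
    return w_a * feat_a + (1.0 - w_a) * feat_b
```

With equal score maps the gate reduces to a plain average; a learned attention module would instead predict the score maps from both modalities.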
43

Wang, Jin, and Yanfei Gao. "Suspect Multifocus Image Fusion Based on Sparse Denoising Autoencoder Neural Network for Police Multimodal Big Data Analysis." Scientific Programming 2021 (January 7, 2021): 1–12. http://dx.doi.org/10.1155/2021/6614873.

Full text of the source
Abstract:
In recent years, the success rate of solving major criminal cases through big data has greatly improved. The analysis of multimodal big data plays a key role in the detection of suspects. However, traditional multiexposure image fusion methods have low efficiency and are largely time-consuming due to artifact effects at image edges and other sensitive factors. Therefore, this paper focuses on suspect multiexposure image fusion. The self-coding (autoencoder) neural network based on deep learning has become a hotspot in research on data dimensionality reduction, which can effectively elimina
45

Cheng, Haoyuan, Qi Chen, Xiangwei Zeng, Haoxun Yuan, and Linjie Zhang. "The Polarized Light Field Enables Underwater Unmanned Vehicle Bionic Autonomous Navigation and Automatic Control." Journal of Marine Science and Engineering 11, no. 8 (2023): 1603. http://dx.doi.org/10.3390/jmse11081603.

Full text of the source
Abstract:
In response to the critical need for autonomous navigation capabilities of underwater vehicles independent of satellites, this paper studies a novel navigation and control method based on underwater polarization patterns. We propose an underwater course angle measurement algorithm and develop underwater polarization detection equipment. By establishing the automatic control model of an ROV (Remote Operated Vehicle) with polarization information, we develop a strapdown navigation method combining polarization and inertial information. We verify the feasibility of angle measurement based on pola
46

Li, Ying, Wenyue Li, Zhijie Zhao, and JiaHao Fan. "DRI-MVSNet: A depth residual inference network for multi-view stereo images." PLOS ONE 17, no. 3 (2022): e0264721. http://dx.doi.org/10.1371/journal.pone.0264721.

Full text of the source
Abstract:
Three-dimensional (3D) image reconstruction is an important field of computer vision for restoring the 3D geometry of a given scene. Due to the demand for large amounts of memory, prevalent methods of 3D reconstruction yield inaccurate results, because of which the highly accurate reconstruction of a scene remains an outstanding challenge. This study proposes a cascaded depth residual inference network, called DRI-MVSNet, that uses a cross-view similarity-based feature map fusion module for residual inference. It involves three improvements. First, a combined module is used for processing chan
47

K. Kannan. "Application of Partial Differential Equations in Multi Focused Image Fusion." International Journal of Advanced Networking and Applications 14, no. 01 (2022): 5266–70. http://dx.doi.org/10.35444/ijana.2022.14105.

Full text of the source
Abstract:
Image fusion is a process used to combine two or more images to form a more informative image. Machine vision cameras are often limited by a shallow depth of field and capture a clear view only of the objects that are in focus; other objects in the scene are blurred. It is therefore necessary to combine a set of images to obtain a clear view of all objects in the scene. This is called multi-focus image fusion. This paper compares and presents the performance of second-order and fourth-order partial differential equations in multi-focus image fusion.
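The limited depth-of-field problem this abstract describes is often attacked with a per-pixel focus measure. The sketch below uses a simple Laplacian-energy measure with a hard pixel-wise selection rule; it is a baseline illustration, not the paper's PDE-based method, and all names are invented.

```python
import numpy as np

def laplacian_energy(img):
    """Per-pixel focus measure: squared response of a 4-neighbour Laplacian
    (border pixels are left at zero)."""
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return lap ** 2

def fuse_multifocus(img_a, img_b):
    """For each pixel, keep the source whose neighbourhood looks sharper."""
    mask = laplacian_energy(img_a) >= laplacian_energy(img_b)
    return np.where(mask, img_a, img_b)
```

Practical methods, including PDE-based ones, smooth or regularize the decision map to avoid speckled seams; the hard `>=` rule here is the simplest possible choice.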
48

He, Kangjian, Jian Gong, and Dan Xu. "Focus-pixel estimation and optimization for multi-focus image fusion." Multimedia Tools and Applications 81, no. 6 (2022): 7711–31. http://dx.doi.org/10.1007/s11042-022-12031-x.

Full text of the source
Abstract:
To integrate the effective information and improve the quality of multi-source images, many spatial or transform domain-based image fusion methods have been proposed in the field of information fusion. The key purpose of multi-focus image fusion is to integrate the focused pixels and remove redundant information from each source image. Theoretically, if the focused pixels and complementary information of different images are detected completely, the fusion image with the best quality can be obtained. Toward this goal, we propose a focus-pixel estimation and optimization based multi-focus image
49

Zeng, Hui, Bin Yang, Xiuqing Wang, Jiwei Liu, and Dongmei Fu. "RGB-D Object Recognition Using Multi-Modal Deep Neural Network and DS Evidence Theory." Sensors 19, no. 3 (2019): 529. http://dx.doi.org/10.3390/s19030529.

Full text of the source
Abstract:
With the development of low-cost RGB-D (Red Green Blue-Depth) sensors, RGB-D object recognition has attracted more and more researchers’ attention in recent years. The deep learning technique has become popular in the field of image analysis and has achieved competitive results. To make full use of the effective identification information in the RGB and depth images, we propose a multi-modal deep neural network and a DS (Dempster Shafer) evidence theory based RGB-D object recognition method. First, the RGB and depth images are preprocessed and two convolutional neural networks are trained, res
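The DS (Dempster–Shafer) combination step mentioned in this abstract has a compact closed form. The sketch below implements the generic Dempster rule of combination over discrete mass functions; the class labels are made up for the example, and this is not the paper's network pipeline.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset of labels -> mass).
    Products of masses accumulate on the intersection of their hypothesis
    sets; mass landing on empty intersections is the conflict K, removed
    by renormalizing with 1 - K."""
    combined, conflict = {}, 0.0
    for (h1, w1), (h2, w2) in product(m1.items(), m2.items()):
        inter = h1 & h2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {h: w / (1.0 - conflict) for h, w in combined.items()}
```

For instance, combining RGB evidence {mug: 0.7, {mug, bowl}: 0.3} with depth evidence {mug: 0.5, bowl: 0.3, {mug, bowl}: 0.2} concentrates most of the fused mass on mug, since both modalities support it.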
50

An, Ying. "Medical Social Security System Considering Multisensor Data Fusion and Online Analysis." Mobile Information Systems 2022 (August 8, 2022): 1–12. http://dx.doi.org/10.1155/2022/2312333.

Full text of the source
Abstract:
At present, multisensor data fusion technology has been applied in many fields. We have to understand its algorithmic principles, master its basic theory, and analyze its fields of application; the technical challenges of those application areas should also be understood. Discussion of its development direction has promoted the wide application of multisensor data fusion technology. By improving the query planning, query interpretation, and cache query optimization mechanisms of different
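As a concrete instance of the multisensor fusion this abstract surveys, the textbook inverse-variance weighted estimate fuses scalar readings optimally under independent Gaussian noise. The readings and variances below are invented for illustration.

```python
def inverse_variance_fusion(readings):
    """Fuse (value, variance) pairs from independent sensors.
    Each sensor is weighted by 1/variance; the fused variance is the
    reciprocal of the summed weights, so fusion never increases it."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(readings, weights)) / total
    return value, 1.0 / total

# Two temperature sensors: a noisy one (variance 4.0) and a precise one (variance 1.0).
fused_value, fused_var = inverse_variance_fusion([(20.0, 4.0), (22.0, 1.0)])
```

The fused value (21.6) sits closer to the more precise sensor, and the fused variance (0.8) is below either input variance, which is the point of the weighting.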