
Journal articles on the topic 'Depth-image-based-rendering (DIBR)'

Consult the top 15 journal articles for your research on the topic 'Depth-image-based-rendering (DIBR).'


1

Chen, Xiaodong, Haitao Liang, Huaiyuan Xu, Siyu Ren, Huaiyu Cai, and Yi Wang. "Virtual View Synthesis Based on Asymmetric Bidirectional DIBR for 3D Video and Free Viewpoint Video." Applied Sciences 10, no. 5 (2020): 1562. http://dx.doi.org/10.3390/app10051562.

Abstract:
Depth image-based rendering (DIBR) plays an important role in 3D video and free viewpoint video synthesis. However, artifacts might occur in the synthesized view due to viewpoint changes and stereo depth estimation errors. Holes are usually out-of-field regions and disocclusions, and filling them appropriately becomes a challenge. In this paper, a virtual view synthesis approach based on asymmetric bidirectional DIBR is proposed. A depth image preprocessing method is applied to detect and correct unreliable depth values around the foreground edges. For the primary view, all pixels are warped to the virtual view by the modified DIBR method. For the auxiliary view, only selected regions are warped, namely those containing content that is not visible in the primary view. This approach reduces the computational cost and prevents irrelevant foreground pixels from being warped into the holes. During the merging process, a color correction approach is introduced to make the result appear more natural. In addition, a depth-guided inpainting method is proposed to handle the remaining holes in the merged image. Experimental results show that, compared with bidirectional DIBR, the proposed rendering method reduces rendering time by about 37% and the number of holes by about 97%. In terms of visual quality and objective evaluation, our approach performs better than previous methods.
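The warping step that all of these DIBR papers build on can be illustrated in isolation. The sketch below is not the authors' asymmetric bidirectional method; it is a minimal single-view forward warp under the toy assumption that disparity is a horizontal shift proportional to normalized depth, with a z-buffer resolving overlaps and a mask marking the holes that merging and inpainting stages would then fill:

```python
import numpy as np

def forward_warp(color, depth, max_disp=8):
    """Toy DIBR forward warp: shift each pixel horizontally in
    proportion to its normalized depth. A z-buffer keeps the closest
    (largest-depth-value) pixel when several land on the same target;
    untouched target pixels are reported as holes."""
    h, w = depth.shape
    virt = np.zeros_like(color)
    zbuf = np.full((h, w), -np.inf)
    hole = np.ones((h, w), dtype=bool)
    # assumed disparity model: 0..max_disp pixels for depth 0..255
    disp = np.round(depth.astype(float) / 255.0 * max_disp).astype(int)
    for y in range(h):
        for x in range(w):
            tx = x + disp[y, x]
            if 0 <= tx < w and depth[y, x] > zbuf[y, tx]:
                zbuf[y, tx] = depth[y, x]
                virt[y, tx] = color[y, x]
                hole[y, tx] = False
    return virt, hole
```

Running this on an image with a depth discontinuity shows holes opening next to the shifted foreground, which is exactly the region a bidirectional scheme fills from the auxiliary view.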
2

Jeong, Young Ju, Youngshin Kwak, Youngran Han, Yong Ju Jung, and Du-sik Park. "11.3: Depth-Image-Based Rendering (DIBR) Using Disocclusion Area Restoration." SID Symposium Digest of Technical Papers 40, no. 1 (2009): 119. http://dx.doi.org/10.1889/1.3256505.

3

Alvi, Hafiz Muhammad Usama Hassan, Muhammad Shahid Farid, Muhammad Hassan Khan, and Marcin Grzegorzek. "Quality Assessment of 3D Synthesized Images Based on Textural and Structural Distortion Estimation." Applied Sciences 11, no. 6 (2021): 2666. http://dx.doi.org/10.3390/app11062666.

Abstract:
Emerging 3D-related technologies such as augmented reality, virtual reality, mixed reality, and stereoscopy have gained remarkable growth due to their numerous applications in the entertainment, gaming, and electromedical industries. In particular, 3D television (3DTV) and free-viewpoint television (FTV) enhance viewers’ television experience by providing immersion. They would need an infinite number of views to provide full parallax to the viewer, which is not practical due to various financial and technological constraints. Therefore, novel 3D views are generated from a set of available views and their depth maps using depth-image-based rendering (DIBR) techniques. The quality of a DIBR-synthesized image may be compromised for several reasons, e.g., inaccurate depth estimation. Since depth is important in this application, inaccuracies in depth maps lead to different textural and structural distortions that degrade the quality of the generated image and result in a poor quality of experience (QoE). Therefore, quality assessment of DIBR-generated images is essential to guarantee a satisfactory QoE. This paper aims at estimating the quality of DIBR-synthesized images and proposes a novel 3D objective image quality metric. The proposed algorithm measures both textural and structural distortions in the DIBR image by exploiting contrast sensitivity and the Hausdorff distance, respectively. The two measures are combined to estimate an overall quality score. The experimental evaluations performed on the benchmark MCL-3D dataset show that the proposed metric is reliable and accurate, and performs better than existing 2D and 3D quality assessment metrics.
4

Schmeing, Michael, and Xiaoyi Jiang. "A Background Modeling-Based Faithful Approach to the Disocclusion Problem in Depth Image-Based Rendering." International Journal of Pattern Recognition and Artificial Intelligence 27, no. 02 (2013): 1354005. http://dx.doi.org/10.1142/s0218001413540050.

Abstract:
In this paper, we address the disocclusion problem that occurs during view synthesis in depth image-based rendering (DIBR). We propose a method that can recover faithful texture information for disoccluded areas. In contrast to common disocclusion filling methods, which usually work frame-by-frame, our algorithm can take information from temporally neighboring frames into account. This way, we are able to reconstruct a faithful filling for the disocclusion regions and not just an approximate or plausible one. Our method avoids artifacts that occur with common approaches and can additionally reduce compression artifacts at object boundaries.
5

Chen, Xiaodong, Haitao Liang, Huaiyuan Xu, Siyu Ren, Huaiyu Cai, and Yi Wang. "Artifact Handling Based on Depth Image for View Synthesis." Applied Sciences 9, no. 9 (2019): 1834. http://dx.doi.org/10.3390/app9091834.

Abstract:
Depth image based rendering (DIBR) is a popular technology for 3D video and free viewpoint video (FVV) synthesis, by which numerous virtual views can be generated from a single reference view and its depth image. However, some artifacts are produced in the DIBR process and reduce the visual quality of the virtual view. Due to the diversity of artifacts, handling them effectively becomes a challenging task. In this paper, an artifact handling method based on the depth image is proposed. The reference image and its depth image are extended to fill the holes that belong to the out-of-field regions. A depth image preprocessing method is applied to project the ghosts to their correct places. The 3D warping process is optimized by an adaptive one-to-four method to deal with cracks and pixel overlapping. For disocclusions, we calculate the depth and background terms of the filling priority based on depth information. The search for the best matching patch is performed simultaneously in the reference image and the virtual image. Moreover, an adaptive patch size is used in all hole-filling processes. Experimental results demonstrate the effectiveness of the proposed method, which performs better than previous methods in subjective and objective evaluation.
6

Zhang, Qiuwen, Liang Tian, Lixun Huang, Xiaobing Wang, and Haodong Zhu. "Rendering Distortion Estimation Model for 3D High Efficiency Depth Coding." Mathematical Problems in Engineering 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/940737.

Abstract:
A depth map represents three-dimensional (3D) scene geometry information and is used for depth image based rendering (DIBR) to synthesize arbitrary virtual views. Since the depth map is only used to synthesize virtual views and is not displayed directly, the depth map needs to be compressed in a certain way that can minimize distortions in the rendered views. In this paper, a modified distortion estimation model is proposed based on view rendering distortion instead of depth map distortion itself and can be applied to the high efficiency video coding (HEVC) rate distortion cost function process for rendering view quality optimization. Experimental results on various 3D video sequences show that the proposed algorithm provides about 31% BD-rate savings in comparison with HEVC simulcast and 1.3 dB BD-PSNR coding gain for the rendered view.
7

Sandić-Stanković, Dragana, Dragan Kukolj, and Patrick Le Callet. "Multi–Scale Synthesized View Assessment Based on Morphological Pyramids." Journal of Electrical Engineering 67, no. 1 (2016): 3–11. http://dx.doi.org/10.1515/jee-2016-0001.

Abstract:
The depth-image-based rendering (DIBR) algorithms used for 3D video applications introduce geometric distortions affecting edge coherency in the synthesized images. In order to better deal with the specific geometric distortions in DIBR-synthesized images, we propose a full-reference metric based on multi-scale pyramid decompositions using morphological filters. The non-linear morphological filters used in the multi-scale image decompositions maintain important geometric information, such as edges, across different resolution levels. We show that PSNR agrees particularly well with human judgment when it is calculated between detail images at the higher scales of morphological pyramids. Consequently, we propose a reduced morphological pyramid peak signal-to-noise ratio metric (MP-PSNR), taking into account only the mean squared errors between the pyramids’ images at higher scales. The proposed computationally efficient metric achieves significantly higher correlation with human judgment than state-of-the-art image quality assessment metrics and than the tested metric dedicated to synthesis-related artifacts.
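A simplified, numpy-only rendition of the idea can make it concrete. This is not the authors' exact MP-PSNR (which is defined on the detail images of a full morphological pyramid); here each level applies a morphological opening built from 2x2 min/max filters, then decimates, and the score averages PSNR over the coarser levels only. The `erode`/`dilate` helpers and the opening-per-level choice are assumptions for illustration:

```python
import numpy as np

def erode(img):
    """Morphological erosion: 2x2 minimum filter (square structuring element)."""
    p = np.pad(img, ((0, 1), (0, 1)), mode="edge")
    return np.minimum.reduce([p[:-1, :-1], p[1:, :-1], p[:-1, 1:], p[1:, 1:]])

def dilate(img):
    """Morphological dilation: 2x2 maximum filter."""
    p = np.pad(img, ((0, 1), (0, 1)), mode="edge")
    return np.maximum.reduce([p[:-1, :-1], p[1:, :-1], p[:-1, 1:], p[1:, 1:]])

def morph_pyramid(img, levels=3):
    """Each level: morphological opening (edge-preserving), then 2x decimation."""
    pyr = [img.astype(float)]
    for _ in range(levels):
        pyr.append(dilate(erode(pyr[-1]))[::2, ::2])
    return pyr

def psnr(a, b, peak=255.0):
    mse = np.mean((a - b) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak**2 / mse)

def mp_psnr(ref, dist, levels=3):
    """Toy MP-PSNR-style score: mean PSNR over the coarser pyramid
    levels, where geometric (edge) distortions dominate."""
    pr, pd = morph_pyramid(ref, levels), morph_pyramid(dist, levels)
    return float(np.mean([psnr(a, b) for a, b in zip(pr[1:], pd[1:])]))
```

Because the min/max filters never average across an edge, a shifted contour in a synthesized view stays sharp at every scale and is penalized directly, which is the intuition behind using morphological rather than linear pyramids.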
8

Cui, Chen, Xujun Wu, Jun Yang, and Juyan Li. "A Novel DIBR 3D Image Hashing Scheme Based on Pixel Grouping and NMF." Wireless Communications and Mobile Computing 2020 (December 10, 2020): 1–14. http://dx.doi.org/10.1155/2020/8820436.

Abstract:
Most traditional 2D image hashing schemes do not take the change of viewpoint into account when constructing the final hash vector, which results in unsatisfactory classification accuracy when they are applied to depth-image-based rendering (DIBR) 3D image identification. In this work, pixel grouping based on histogram shape and nonnegative matrix factorization (NMF) are applied to design a DIBR 3D image hashing scheme with better robustness against geometric distortions and a higher classification accuracy rate for virtual image identification. Experiments show that the proposed hashing is robust against common signal and geometric distortion attacks, such as additive noise, blurring, JPEG compression, scaling, and rotation. Compared with state-of-the-art traditional 2D image hashing schemes, the proposed hashing achieves better performance under the above attacks, especially for virtual image identification.
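As a rough illustration of the NMF half of such a scheme (the histogram-shape pixel grouping and all DIBR-specific steps are omitted; the multiplicative-update solver, fixed seed, and mean-thresholding rule below are assumptions, not the authors' design), a binary hash can be derived by factoring the image and quantizing the coefficient matrix:

```python
import numpy as np

def nmf(V, rank=4, iters=200, seed=0):
    """Plain multiplicative-update NMF: factor nonnegative V (m x n)
    as W @ H with W, H >= 0. Fixed seed keeps the hash deterministic."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-4
    H = rng.random((rank, n)) + 1e-4
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def image_hash(img, rank=4):
    """Toy hash: binarize the NMF coefficient matrix against its mean,
    yielding rank * width bits per image."""
    _, H = nmf(img.astype(float), rank)
    return (H > H.mean()).astype(np.uint8).ravel()
```

The appeal of NMF here is that its low-rank, parts-based coefficients change little under mild global distortions, so the binarized vector can be compared by Hamming distance for identification.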
9

Huang, Hui-Yu, and Shao-Yu Huang. "Fast Hole Filling for View Synthesis in Free Viewpoint Video." Electronics 9, no. 6 (2020): 906. http://dx.doi.org/10.3390/electronics9060906.

Abstract:
The recent emergence of three-dimensional (3D) movies and 3D television (TV) indicates an increasing interest in 3D content. Stereoscopic displays have enabled visual experiences to be enhanced, allowing the world to be viewed in 3D. Virtual view synthesis is the key technology to present 3D content, and depth image-based rendering (DIBR) is a classic virtual view synthesis method. With a texture image and its corresponding depth map, a virtual view can be generated using the DIBR technique. The depth and camera parameters are used to project every pixel in the image to the 3D world coordinate system. The results in world coordinates are then reprojected into the virtual view, based on 3D warping. However, these projections result in cracks (holes). Hence, we herein propose a new DIBR method for free viewpoint videos to solve the hole problem caused by these projection processes. First, the depth map is preprocessed to reduce the number of holes without producing large-scale geometric distortions; subsequently, an improved 3D warping projection is performed to create the virtual view. A median filter is used to filter the hole regions in the virtual view, followed by 3D inverse warping blending to remove the holes. Next, brightness adjustment and adaptive image blending are performed. Finally, the synthesized virtual view is obtained using an inpainting method. Experimental results verify that our proposed method can produce a visually pleasing synthesized virtual view, maintain a high peak signal-to-noise ratio (PSNR) value, and efficiently decrease execution time compared with state-of-the-art methods.
10

Bonatto, Daniele, Sarah Fachada, and Gauthier Lafruit. "RaViS: Real-time accelerated View Synthesizer for immersive video 6DoF VR." Electronic Imaging 2020, no. 13 (2020): 382–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.13.ervr-381.

Abstract:
MPEG-I, the upcoming standard for immersive video, has steadily explored immersive video technology for free navigation applications, where any virtual viewpoint to the scene is created using Depth Image-Based Rendering (DIBR) from any number of stationary cameras positioned around the scene. This exploration has recently evolved towards a rendering pipeline using camera feeds, as well as a standard file format, containing all information for synthesizing a virtual viewpoint to a scene. We present an acceleration of our Reference View Synthesis software (RVS) that enables the rendering in real-time of novel views in a head mounted display, hence supporting virtual reality (VR) with 6 Degrees of Freedom (6DoF) including motion parallax within a restricted viewing volume. In this paper, we explain its main engineering challenges.
11

Jin, Chongchong, Zongju Peng, Wenhui Zou, Fen Chen, Gangyi Jiang, and Mei Yu. "No-Reference Quality Assessment for 3D Synthesized Images Based on Visual-Entropy-Guided Multi-Layer Features Analysis." Entropy 23, no. 6 (2021): 770. http://dx.doi.org/10.3390/e23060770.

Abstract:
Multiview video plus depth is one of the mainstream representations of 3D scenes in emerging free viewpoint video, which generates virtual 3D synthesized images through a depth-image-based-rendering (DIBR) technique. However, the inaccuracy of depth maps and imperfect DIBR techniques result in different geometric distortions that seriously deteriorate the users’ visual perception. An effective 3D synthesized image quality assessment (IQA) metric can simulate human visual perception and determine the application feasibility of the synthesized content. In this paper, a no-reference IQA metric based on visual-entropy-guided multi-layer features analysis for 3D synthesized images is proposed. According to the energy entropy, the geometric distortions are divided into two visual attention layers, namely, bottom-up layer and top-down layer. The feature of salient distortion is measured by regional proportion plus transition threshold on a bottom-up layer. In parallel, the key distribution regions of insignificant geometric distortion are extracted by a relative total variation model, and the features of these distortions are measured by the interaction of decentralized attention and concentrated attention on top-down layers. By integrating the features of both bottom-up and top-down layers, a more visually perceptive quality evaluation model is built. Experimental results show that the proposed method is superior to the state-of-the-art in assessing the quality of 3D synthesized images.
12

Jiao, Yuzhong, Kayton Wai Keung Cheung, Mark Ping Chan Mok, and Yiu Kei Li. "Spatial Distance-based Interpolation Algorithm for Computer Generated 2D+Z Images." Electronic Imaging 2020, no. 2 (2020): 140–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-140.

Abstract:
Computer generated 2D plus Depth (2D+Z) images are common input data for 3D displays using the depth image-based rendering (DIBR) technique. Due to their simplicity, linear interpolation methods are usually used to convert low-resolution images into high-resolution images, for both depth maps and 2D RGB images. However, linear methods suffer from zigzag artifacts in both the depth map and the RGB images, which severely affect the 3D visual experience. In this paper, a spatial distance-based interpolation algorithm for computer generated 2D+Z images is proposed. The method interpolates RGB images with the help of depth and edge information from the depth maps. The spatial distance from the interpolated pixel to the surrounding available pixels is used to obtain the weight factors of the surrounding pixels. Experimental results show that such spatial distance-based interpolation can achieve sharp edges and fewer artifacts in 2D RGB images; naturally, it can improve the performance of 3D displays. Since bilinear interpolation is used in homogeneous areas, the proposed algorithm keeps the computational complexity low.
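A minimal sketch of the general idea follows: inverse-distance weights over the four surrounding low-resolution samples, with a depth-difference gate standing in for the paper's edge handling. The gating rule, the reference-depth choice, and `depth_tol` are assumptions for illustration, not the published algorithm:

```python
import numpy as np

def idw_upsample(img, depth, scale=2, depth_tol=10):
    """Upsample `img` by inverse-distance weighting of the four
    surrounding low-res samples, skipping neighbours whose depth
    differs sharply from the nearest sample's depth (toy edge-aware
    variant; avoids blending colors across a depth edge)."""
    h, w = img.shape
    out = np.zeros((h * scale, w * scale))
    for Y in range(h * scale):
        for X in range(w * scale):
            y, x = Y / scale, X / scale
            y0, x0 = int(y), int(x)
            # nearest low-res sample fixes the reference depth
            yn, xn = min(round(y), h - 1), min(round(x), w - 1)
            ref_d = float(depth[yn, xn])
            num = den = 0.0
            for yy in (y0, min(y0 + 1, h - 1)):
                for xx in (x0, min(x0 + 1, w - 1)):
                    if abs(float(depth[yy, xx]) - ref_d) > depth_tol:
                        continue  # other side of a depth discontinuity
                    wgt = 1.0 / (np.hypot(y - yy, x - xx) + 1e-6)
                    num += wgt * img[yy, xx]
                    den += wgt
            out[Y, X] = num / den
    return out
```

In flat depth regions every neighbour passes the gate and the result behaves like ordinary distance-weighted (bilinear-like) interpolation; across an object boundary the far-side samples are dropped, so edges stay step-sharp instead of ramped.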
13

Favalli, Lorenzo, and Marco Folli. "A Scalable Multiple Description Scheme for 3D Video Coding Based on the Interlayer Prediction Structure." International Journal of Digital Multimedia Broadcasting 2010 (2010): 1–16. http://dx.doi.org/10.1155/2010/425641.

Abstract:
The most recent literature indicates multiple description coding (MDC) as a promising coding approach for handling video transmission over unreliable networks with different quality and bandwidth constraints. Furthermore, following the recent commercial availability of autostereoscopic 3D displays that allow 3D visual data to be viewed without special headgear or glasses, it is anticipated that applications of 3D video will increase rapidly in the near future. Moving from the concept of spatial MDC, in this paper we introduce efficient algorithms to obtain 3D substreams that also exploit some form of scalability. These algorithms are applied both to coded stereo sequences and to depth image-based rendering (DIBR). We first generate four 3D subsequences by subsampling; two of these subsequences are then jointly used to form each of the two descriptions. For each description, one of the original subsequences is predicted from the other via scalable algorithms, focusing on the interlayer prediction scheme. The proposed algorithms can be implemented as pre- and postprocessing of the standard H.264/SVC coder, which remains fully compatible with any standard coder. The presented experimental results show that these algorithms perform well.
14

Yao, Li, Qiurui Lu, and Xiaomin Li. "View synthesis based on spatio-temporal continuity." EURASIP Journal on Image and Video Processing 2019, no. 1 (2019). http://dx.doi.org/10.1186/s13640-019-0485-9.

Abstract:
Free viewpoint video is generated on the basis of a video plus depth (V+D) virtual viewpoint rendering framework. Because of the limited bandwidth of video transmission, depth-image-based rendering (DIBR) has become a common method. Most DIBR methods not only suffer from holes and ghost artifacts but also have problems with temporal continuity, leading to frequent flicker. In this paper, we make full use of the temporal information in video sequences and adjacent frames to extract the static background image of the whole scene. Furthermore, we propose a weighted-fusion hole-filling method based on the static background to fill holes and maintain temporal continuity. Experimental results show that the proposed method can improve the quality of virtual view images and strengthen spatio-temporal continuity.
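The static-background idea is straightforward to sketch: a per-pixel temporal median over the sequence suppresses transient foreground, and hole pixels in a synthesized frame can then be taken from that estimate. This is a plain stand-in assuming a simple copy from the background, not the paper's weighted fusion:

```python
import numpy as np

def static_background(frames):
    """Estimate the static background of a sequence as the per-pixel
    temporal median, which suppresses transient foreground objects."""
    return np.median(np.stack(frames, axis=0), axis=0)

def fill_holes(view, hole_mask, background):
    """Fill hole pixels in a synthesized view from the background
    estimate (simplified stand-in for weighted fusion)."""
    out = view.astype(float).copy()
    out[hole_mask] = background[hole_mask]
    return out
```

Because the same background estimate is reused across frames, holes at the same location receive the same fill in every frame, which is what suppresses the flicker that per-frame inpainting causes.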
15

Tehrani, Mehrdad Panahpour, Tomoyuki Tezuka, Kazuyoshi Suzuki, Keita Takahashi, and Toshiaki Fujii. "Free-viewpoint image synthesis using superpixel segmentation." APSIPA Transactions on Signal and Information Processing 6 (2017). http://dx.doi.org/10.1017/atsip.2017.5.

Abstract:
A free-viewpoint image can be synthesized from the color and depth maps of reference viewpoints via depth-image-based rendering (DIBR). In this process, three-dimensional (3D) warping is generally used. A 3D warped image contains disocclusion holes, whose missing pixels correspond to occluded regions in the reference images, and non-disocclusion holes due to the limited sampling density of the reference images. The non-disocclusion holes appear among scattered pixels of the same region or object, and they are larger when the reference viewpoints and the free-viewpoint image are physically farther apart. Filling these holes has a crucial impact on the quality of the free-viewpoint image. In this paper, we focus on free-viewpoint image synthesis that can precisely fill the non-disocclusion holes caused by limited sampling density, using superpixel segmentation. In this approach, we propose two criteria for segmenting the depth and color data of each reference viewpoint. With these criteria, we can detect which neighboring pixels should be connected or kept isolated in each reference image before it is warped. Polygons enclosed by the connected pixels, i.e., superpixels, are inpainted by k-means interpolation. Our superpixel approach has high accuracy, since we use both color and depth data to detect superpixels at the location of the reference viewpoint. Therefore, once a reference image that consists of superpixels is 3D warped to a virtual viewpoint, the non-disocclusion holes are significantly reduced. Experimental results verify the advantage of our approach and demonstrate the high quality of the synthesized image when the virtual viewpoint is physically far from the reference viewpoints.