
Journal articles on the topic 'Volumetric video'

Consult the top 50 journal articles for your research on the topic 'Volumetric video.'

You can download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Veeraswamy, Mr D. "3D-Based Compression Framework for High Quality Video Streaming." International Journal for Research in Applied Science and Engineering Technology 13, no. 4 (2025): 2628–36. https://doi.org/10.22214/ijraset.2025.68746.

Abstract:
Video compression plays a pivotal role in managing the storage and transmission of multimedia content, especially in bandwidth-constrained environments. Nowadays, volumetric video has emerged as an attractive multimedia application, which provides highly immersive watching experiences. However, streaming the volumetric video demands prohibitively high bandwidth. Thus, effectively compressing its underlying point cloud frames is essential to deploying the volumetric videos. The existing compression techniques are either 3D-based or 2D-based, but they still have drawbacks when being deployed i
2

Xu, Zhen, Yinghao Xu, Zhiyuan Yu, et al. "Representing Long Volumetric Video with Temporal Gaussian Hierarchy." ACM Transactions on Graphics 43, no. 6 (2024): 1–18. http://dx.doi.org/10.1145/3687919.

Abstract:
This paper aims to address the challenge of reconstructing long volumetric videos from multi-view RGB videos. Recent dynamic view synthesis methods leverage powerful 4D representations, like feature grids or point cloud sequences, to achieve high-quality rendering results. However, they are typically limited to short (1~2s) video clips and often suffer from large memory footprints when dealing with longer videos. To solve this issue, we propose a novel 4D representation, named Temporal Gaussian Hierarchy, to compactly model long volumetric videos. Our key observation is that there are generall
3

Kakkar, Preetish, and Hariharan Ragothaman. "The Evolution of Volumetric Video: A Survey of Smart Transcoding and Compression Approaches." International Journal of Computer Graphics & Animation 14, no. 1/2/3/4 (2024): 01–11. http://dx.doi.org/10.5121/ijcga.2024.14401.

Abstract:
Volumetric video, the capture and display of three-dimensional (3D) imagery, has emerged as a revolutionary technology poised to transform the media landscape, enabling immersive experiences that transcend the limitations of traditional 2D video. One of the key challenges in this domain is the efficient delivery of these high-bandwidth, data-intensive volumetric video streams, which requires innovative transcoding and compression techniques. This research paper explores the state-of-the-art in volumetric video compression and delivery, with a focus on the potential of AI-driven solutions to ad
4

Sohn, Bong-Soo, Chandrajit Bajaj, and Vinay Siddavanahalli. "Volumetric video compression for interactive playback." Computer Vision and Image Understanding 96, no. 3 (2004): 435–52. http://dx.doi.org/10.1016/j.cviu.2004.03.010.

5

Ke, Yan, Rahul Sukthankar, and Martial Hebert. "Volumetric Features for Video Event Detection." International Journal of Computer Vision 88, no. 3 (2009): 339–62. http://dx.doi.org/10.1007/s11263-009-0308-z.

6

Wang, Penghao, Zhirui Zhang, Liao Wang, et al. "V^3: Viewing Volumetric Videos on Mobiles via Streamable 2D Dynamic Gaussians." ACM Transactions on Graphics 43, no. 6 (2024): 1–13. http://dx.doi.org/10.1145/3687935.

Abstract:
Experiencing high-fidelity volumetric video as seamlessly as 2D videos is a long-held dream. However, current dynamic 3DGS methods, despite their high rendering quality, face challenges in streaming on mobile devices due to computational and bandwidth constraints. In this paper, we introduce V^3 (Viewing Volumetric Videos), a novel approach that enables high-quality mobile rendering through the streaming of dynamic Gaussians. Our key innovation is to view dynamic 3DGS as 2D videos, facilitating the use of hardware video codecs. Additionally, we propose a two-stage training strategy to reduce s
7

Lu, Rui, Bihai Zhang, and Dan Wang. "VVRec: Reconstruction Attacks on DL-based Volumetric Video Upstreaming via Latent Diffusion Model with Gamma Distribution." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 19133–42. https://doi.org/10.1609/aaai.v39i18.34106.

Abstract:
With the popularity of 3D volumetric video applications, such as Autonomous Driving, Virtual Reality, and Mixed Reality, current developers have turned to deep learning for compressing volumetric video frames, i.e., point clouds for video upstreaming. The latest deep learning-based solutions offer higher efficiency, lower distortion, and better hardware support compared to traditional ones like MPEG and JPEG. However, privacy threats arise, especially reconstruction attacks targeting to recover the original input point cloud from the intermediate results. In this paper, we design VVRec, to the
8

Sharp, Louis J., Inchon B. Choi, Thomas E. Lee, Abegail Sy, and Byoung I. Suh. "Volumetric shrinkage of composites using video-imaging." Journal of Dentistry 31, no. 2 (2003): 97–103. http://dx.doi.org/10.1016/s0300-5712(03)00005-8.

9

Wang, Jing, and Zhi-Jie Xu. "Video analysis based on volumetric event detection." International Journal of Automation and Computing 7, no. 3 (2010): 365–71. http://dx.doi.org/10.1007/s11633-010-0516-6.

10

Shinoda, Takayuki, Yuji Watanabe, Satoshi Sasaki, Taiji Kamiya, Hajime Sato, and Atsushi Date. "Professional Baseball Broadcasts using Volumetric Video Technology." Journal of The Institute of Image Information and Television Engineers 78, no. 2 (2024): 247–51. http://dx.doi.org/10.3169/itej.78.247.

11

Essa, Almabrok, and Vijayan Asari. "High Order Volumetric Directional Pattern for Video-Based Face Recognition." Mathematical Problems in Engineering 2019 (June 25, 2019): 1–10. http://dx.doi.org/10.1155/2019/6798750.

Abstract:
Describing the dynamic textures has attracted growing attention in the field of computer vision and pattern recognition. In this paper, a novel approach for recognizing dynamic textures, namely, high order volumetric directional pattern (HOVDP), is proposed. It is an extension of the volumetric directional pattern (VDP) which extracts and fuses the temporal information (dynamic features) from three consecutive frames. HOVDP combines the movement and appearance features together considering the nth order volumetric directional variation patterns of all neighboring pixels from three consecutive
12

Schreer, Oliver, Markus Worchel, Rodrigo Diaz, et al. "Preserving Memories of Contemporary Witnesses Using Volumetric Video." i-com 21, no. 1 (2022): 71–82. http://dx.doi.org/10.1515/icom-2022-0015.

Abstract:
Volumetric Video is a novel technology that enables the creation of dynamic 3D models of persons, which can then be integrated in any 3D environment. In contrast to classical character animation, volumetric video is authentic and much more realistic and therefore ideal for the transfer of emotions, facial expressions and gestures, which is highly relevant in the context of preservation of contemporary witnesses and survivors of the Holocaust. Fraunhofer Heinrich-Hertz-Institute (HHI) is working on two projects in this cultural heritage context. In a recent project between UFA and Frau
13

Kim, Sang Hyun. "Volumetric Image System for High Efficiency Video Coding." Journal of the Korea Contents Association 16, no. 1 (2016): 515–20. http://dx.doi.org/10.5392/jkca.2016.16.01.515.

14

Smolic, Aljosa, Konstantinos Amplianitis, Matthew Moynihan, et al. "Volumetric Video Content Creation for Immersive XR Experiences." London Imaging Meeting 3, no. 1 (2022): 54–59. http://dx.doi.org/10.2352/lim.2022.1.1.13.

15

Baran, Utku, Wei Wei, Jingjiang Xu, Xiaoli Qi, Wyatt O. Davis, and Ruikang K. Wang. "Video-rate volumetric optical coherence tomography-based microangiography." Optical Engineering 55, no. 4 (2016): 040503. http://dx.doi.org/10.1117/1.oe.55.4.040503.

16

Hu, Qiang, Houqiang Zhong, Zihan Zheng, et al. "VRVVC: Variable-Rate NeRF-Based Volumetric Video Compression." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 4 (2025): 3563–71. https://doi.org/10.1609/aaai.v39i4.32370.

Abstract:
Neural Radiance Field (NeRF)-based volumetric video has revolutionized visual media by delivering photorealistic Free-Viewpoint Video (FVV) experiences that provide audiences with unprecedented immersion and interactivity. However, the substantial data volumes pose significant challenges for storage and transmission. Existing solutions typically optimize NeRF representation and compression independently or focus on a single fixed rate-distortion (RD) tradeoff. In this paper, we propose VRVVC, a novel end-to-end joint optimization variable-rate framework for volumetric video compression that ac
17

Jing, Chang Long, Qi Bin Feng, Ying Song Zhang, et al. "LED-Based 3-DMD Volumetric 3D Display." Applied Mechanics and Materials 596 (July 2014): 442–45. http://dx.doi.org/10.4028/www.scientific.net/amm.596.442.

Abstract:
A solid-state volumetric true 3D display developed by Hefei University of Technology consists of two main components: a high-speed video projector and a stack of liquid crystal shutters. The shutters are based on polymer stabilized cholesteric texture material, presenting different states that can be switched by different voltage. The high-speed video projector includes LED-based light source and three-chip digital micro-mirror devices modulating RGB lights. A sequence of slices of three-dimensional images are projected into the liquid crystal shutters locating at the proper depth, forming a tr
18

Huang, Chenn-Jung, Hao-Wen Cheng, Yi-Hung Lien, and Mei-En Jian. "A Survey on Video Streaming for Next-Generation Vehicular Networks." Electronics 13, no. 3 (2024): 649. http://dx.doi.org/10.3390/electronics13030649.

Abstract:
As assisted driving technology advances and vehicle entertainment systems rapidly develop, future vehicles will become mobile cinemas, where passengers can use various multimedia applications in the car. In recent years, the progress in multimedia technology has given rise to immersive video experiences. In addition to conventional 2D videos, 360° videos are gaining popularity, and volumetric videos, which can offer users a better immersive experience, have been discussed. However, these applications place high demands on network capabilities, leading to a dependence on next-generation wireles
19

Lischer-Katz, Zack, Bryan Carter, and Rashida Braggs. "Volumetric Video: Preservation and Curation Challenges of an Emerging Medium." International Journal of Digital Curation 19, no. 1 (2025): 23. https://doi.org/10.2218/ijdc.v19i1.976.

Abstract:
Volumetric video is an emerging media format that uses multiple cameras to record live-action subjects and produce three-dimensional, time-based digital media. The resulting digital objects encode visual and spatial information, colour, textures, and sound in a format that allows for users to view the subject from any angle and use the assets in video games, virtual reality, augmented reality, or films. The technology has been pioneered by Hollywood production companies but is now being experimented with by digital humanities scholars. As it becomes more popular, information institutions, part
20

O’Dwyer, Néill, Emin Zerman, Gareth W. Young, Aljosa Smolic, Siobhán Dunne, and Helen Shenton. "Volumetric Video in Augmented Reality Applications for Museological Narratives." Journal on Computing and Cultural Heritage 14, no. 2 (2021): 1–20. http://dx.doi.org/10.1145/3425400.

Abstract:
Cross-reality technologies are quickly establishing themselves as commonplace platforms for presenting objects of historical, scientific, artistic, and cultural interest to the public. In this space, augmented reality (AR) is notably successful in delivering cultural heritage applications, including architectural and environmental heritage reconstruction, exhibition data management and representation, storytelling, and exhibition curation. Generally, it has been observed that the nature of information delivery in applications created for narrating exhibitions tends to be informative and formal
21

Zerman, Emin, Pan Gao, Cagri Ozcinar, and Aljosa Smolic. "Subjective and Objective Quality Assessment for Volumetric Video Compression." Electronic Imaging 2019, no. 10 (2019): 323–1. http://dx.doi.org/10.2352/issn.2470-1173.2019.10.iqsp-323.

22

Yang, Isaac, Yeongil Ryu, JunHyeong Park, et al. "Real-life Spatial Volumetric Video Acquisition and Encoding System." JOURNAL OF BROADCAST ENGINEERING 29, no. 4 (2024): 425–42. http://dx.doi.org/10.5909/jbe.2024.29.4.425.

23

O'Dwyer, Néill, and Nicholas Johnson. "Exploring volumetric video and narrative through Samuel Beckett’s Play." International Journal of Performance Arts and Digital Media 15, no. 1 (2019): 53–69. http://dx.doi.org/10.1080/14794713.2019.1567243.

24

Ban, Seonghoon, and Kyung Hoon Hyun. "Pixel of Matter: New Ways of Seeing with an Active Volumetric Filmmaking System." Leonardo 53, no. 4 (2020): 434–37. http://dx.doi.org/10.1162/leon_a_01932.

Abstract:
Using volumetric filmmaking as a medium for artists and designers requires the development of new methodologies and tools. We introduce an installation art project using the active volumetric filmmaking technology to investigate its possibilities in art practice. To do that, we developed a system to film volumetric video in real time, thereby allowing its users to capture large environments and objects without fixed placement or preinstallation of cameras. Active volumetric filmmaking helps us realize the digital reconstruction of physical space in real time and can be expected to ultimately f
25

Nakamura, Tomoharu, Yuriko Imai, Yuta Yoshimizu, et al. "36‐1: 360‐degree Transparent Light Field Display with Highly‐Directional Holographic Screens for Fully Volumetric 3D Video Experience." SID Symposium Digest of Technical Papers 54, no. 1 (2023): 514–17. http://dx.doi.org/10.1002/sdtp.16606.

Abstract:
We have developed a novel 360‐degree transparent light field display with 120 viewpoints for fully volumetric 3D video experience. It was achieved by a rotating cylindrical transparent highly‐directional holographic screen and a high‐frame‐rate projector. It enables multiple people to simultaneously view bright and occlusion‐capable volumetric images from any direction.
26

Gildenberg, Philip L., and Jeffrey Labuz. "Use of a Volumetric Target for Image-guided Surgery." Neurosurgery 59, no. 3 (2006): 651–59. http://dx.doi.org/10.1227/01.neu.0000227474.21048.f1.

Abstract:
A virtual reality system has been devised to superimpose a computer-generated rendering of a volumetric target to be surgically approached or resected on a real-time video image of the surgical field. A stereotactic frame is used to register the image from the video camera with the image of the target volume for accurate localization. The volumetric target is obtained from preoperative imaging studies and can be modified to adjust the intended line of resection or to avoid eloquent vascular or neural tissue. The computer-generated image is updated throughout surgery to visualize only
27

McIlvenny, Paul. "The future of ‘video’ in video-based qualitative research is not ‘dumb’ flat pixels! Exploring volumetric performance capture and immersive performative replay." Qualitative Research 20, no. 6 (2020): 800–818. http://dx.doi.org/10.1177/1468794120905460.

Abstract:
Qualitative research that focuses on social interaction and talk has been increasingly based, for good reason, on collections of audiovisual recordings in which 2D flat-screen video and mono/stereo audio are the dominant recording media. This article argues that the future of ‘video’ in video-based qualitative studies will move away from ‘dumb’ flat pixels in a 2D screen. Instead, volumetric performance capture and immersive performative replay rely on a procedural camera/spectator-independent representation of a dynamic real or virtual volumetric space over time. It affords analytical practic
28

Lin, Shih-Syun, Chao-Hung Lin, Yu-Hsuan Kuo, and Tong-Yee Lee. "Consistent Volumetric Warping Using Floating Boundaries for Stereoscopic Video Retargeting." IEEE Transactions on Circuits and Systems for Video Technology 26, no. 5 (2016): 801–13. http://dx.doi.org/10.1109/tcsvt.2015.2409711.

29

Schreer, Oliver, Ingo Feldmann, Peter Kauff, et al. "Lessons Learned During One Year of Commercial Volumetric Video Production." SMPTE Motion Imaging Journal 129, no. 9 (2020): 31–37. http://dx.doi.org/10.5594/jmi.2020.3010399.

30

Wakahara, Yuma, Toshie Misu, and Kensuke Hisatomi. "Advanced Volumetric Video Format for Enhancing Photo-Realistic Lighting Reproduction." SMPTE Motion Imaging Journal 133, no. 3 (2024): 17–25. http://dx.doi.org/10.5594/jmi.2024/zrbf6236.

31

Cen, Yunchi, Qifan Zhang, and Xiaohui Liang. "Physics-Based Differentiable Rendering for Efficient and Plausible Fluid Modeling from Monocular Video." Entropy 25, no. 9 (2023): 1348. http://dx.doi.org/10.3390/e25091348.

Abstract:
Realistic fluid models play an important role in computer graphics applications. However, efficiently reconstructing volumetric fluid flows from monocular videos remains challenging. In this work, we present a novel approach for reconstructing 3D flows from monocular inputs through a physics-based differentiable renderer coupled with joint density and velocity estimation. Our primary contributions include the proposed efficient differentiable rendering framework and improved coupled density and velocity estimation strategy. Rather than relying on automatic differentiation, we derive the differ
32

Tinguely, Marc, Matthew G. Hennessy, Angelo Pommella, Omar K. Matar, and Valeria Garbin. "Surface waves on a soft viscoelastic layer produced by an oscillating microbubble." Soft Matter 12, no. 18 (2016): 4247–56. http://dx.doi.org/10.1039/c5sm03084f.

Abstract:
An ultrasound-driven microbubble undergoing volumetric oscillations deforms a soft viscoelastic layer causing propagation of a surface elastic wave. High-speed video microscopy reveals characteristics of the elliptical particle trajectories that depend on the rheological properties of the layer.
33

Mouy, Xavier, Morgan Black, Kieran Cox, Jessica Qualley, Stan Dosso, and Francis Juanes. "Comparison of three portable volumetric arrays to localize and identify fish sounds in the wild." Journal of the Acoustical Society of America 151, no. 4 (2022): A148. http://dx.doi.org/10.1121/10.0010929.

Abstract:
We describe three portable volumetric audio/video arrays capable of identifying species-specific fish sounds in the wild. Each array can record fish sounds, acoustically localize the fish in three-dimensions (using linearized or fully non-linear inversion), and record video to identify the species and observe their behavior. The design of each array accommodates specific logistical and financial constraints, covering a range of nearshore habitats and applications. The first platform is composed of six hydrophones, an acoustic recorder, and two video cameras secured to a 2 × 2 × 3 m PVC frame.
34

Lu, Rongwen, Wenzhi Sun, Yajie Liang, et al. "Video-rate volumetric functional imaging of the brain at synaptic resolution." Nature Neuroscience 20, no. 4 (2017): 620–28. http://dx.doi.org/10.1038/nn.4516.

35

Hilsmann, Anna, Philipp Fechteler, Wieland Morgenstern, et al. "Going beyond free viewpoint: creating animatable volumetric video of human performances." IET Computer Vision 14, no. 6 (2020): 350–58. http://dx.doi.org/10.1049/iet-cvi.2019.0786.

36

Exner, Wibke, Alexandra Kühn, Artur Szewieczek, et al. "Determination of volumetric shrinkage of thermally cured thermosets using video-imaging." Polymer Testing 49 (February 2016): 100–106. http://dx.doi.org/10.1016/j.polymertesting.2015.11.014.

37

Kim, A.-young, Eun-bin An, and Kwang-deok Seo. "Design and Implementation of a Point Cloud-Based Volumetric Video Player." Journal of Korean Institute of Communications and Information Sciences 47, no. 10 (2022): 1660–68. http://dx.doi.org/10.7840/kics.2022.47.10.1660.

38

Li, S., B. Haiti, D. Serratore, et al. "SU-D-213CD-03: Live Video-Guided Volumetric Tracking of Respiration Motion." Medical Physics 39, no. 6Part3 (2012): 3618. http://dx.doi.org/10.1118/1.4734688.

39

Young, Gareth W., Néill O’Dwyer, Mauricio Flores Vargas, Rachel Mc Donnell, and Aljosa Smolic. "Feel the Music!—Audience Experiences of Audio–Tactile Feedback in a Novel Virtual Reality Volumetric Music Video." Arts 12, no. 4 (2023): 156. http://dx.doi.org/10.3390/arts12040156.

Abstract:
The creation of imaginary worlds has been the focus of philosophical discourse and artistic practice for millennia. Humans have long evolved to use media and imagination to express their inner worlds outwardly via artistic practice. As a fundamental factor of fantasy world-building, the imagination can produce novel objects, virtual sensations, and unique stories related to previously unlived experiences. The expression of the imagination often takes a narrative form that applies some medium to facilitate communication, for example, books, statues, music, or paintings. These virtual realities
40

Low, Pei Jing, Bo Yan Ng, Nur Insyirah Mahzan, Jing Tian, and Cheung-Chi Leung. "Video-Based Plastic Bag Grabbing Action Recognition: A New Video Dataset and a Comparative Study of Baseline Models." Sensors 25, no. 1 (2025): 255. https://doi.org/10.3390/s25010255.

Abstract:
Recognizing the action of plastic bag taking from CCTV video footage represents a highly specialized and niche challenge within the broader domain of action video classification. To address this challenge, our paper introduces a novel benchmark video dataset specifically curated for the task of identifying the action of grabbing a plastic bag. Additionally, we propose and evaluate three distinct baseline approaches. The first approach employs a combination of handcrafted feature extraction techniques and a sequential classification model to analyze motion and object-related features. The secon
41

Newman, Andrew J., Paul A. Kucera, and Larry F. Bliven. "Presenting the Snowflake Video Imager (SVI)." Journal of Atmospheric and Oceanic Technology 26, no. 2 (2009): 167–79. http://dx.doi.org/10.1175/2008jtecha1148.1.

Abstract:
Herein the authors introduce the Snowflake Video Imager (SVI), which is a new instrument for characterizing frozen precipitation. An SVI utilizes a video camera with sufficient frame rate, pixels, and shutter speed to record thousands of snowflake images. The camera housing and lighting produce little airflow distortion, so SVI data are quite representative of natural conditions, which is important for volumetric data products such as snowflake size distributions. Long-duration, unattended operation of an SVI is feasible because datalogging software provides data compression and the h
42

Lee, Ryan P., Prasad Vagdargi, Ali Uneri, Jeffrey Siewerdsen, and Mark G. Luciano. "183 A Novel Technique for Neuro-Navigation During Ventricular Endoscopy: Volumetric 3D Reconstruction Based on the 2D Ventricular Endoscopy Video." Neurosurgery 71, Supplement_1 (2025): 44. https://doi.org/10.1227/neu.0000000000003360_183.

Abstract:
INTRODUCTION: Ventricular endoscopy is a common procedure in neurosurgery. Traditional neuro-navigation techniques require pre-operative registration and do not account for deformation and shift due to cerebrospinal fluid loss intra-operatively. Artificial intelligence-based interpretation of the video feed could serve as a tool to account for deformation, and provide 3D video reconstruction for adaptive neuro-navigation without conventional tracking. This technology further serves as a foundation for additional capabilities such as augmented reality overlay in the endoscopic scene. METHODS: T
43

Pavlik, John V. "Drones, Augmented Reality and Virtual Reality Journalism: Mapping Their Role in Immersive News Content." Media and Communication 8, no. 3 (2020): 137–46. http://dx.doi.org/10.17645/mac.v8i3.3031.

Abstract:
Drones are shaping journalism in a variety of ways including in the production of immersive news content. This article identifies, describes and analyzes, or maps out, four areas in which drones are impacting immersive news content. These include: 1) enabling the possibility of providing aerial perspective for first-person perspective flight-based immersive journalism experiences; 2) providing geo-tagged audio and video for flight-based immersive news content; 3) providing the capacity for both volumetric and 360 video capture; and 4) generating novel content types or content based on data acq
44

Hong, Wenzhi, Terry Wright, Hugh Sparks, et al. "Adaptive light-sheet fluorescence microscopy with a deformable mirror for video-rate volumetric imaging." Applied Physics Letters 121, no. 19 (2022): 193703. http://dx.doi.org/10.1063/5.0125946.

Abstract:
Light-sheet fluorescence microscopy (LSFM) achieves optically sectioned imaging with the relatively low photobleaching and phototoxic effect. To achieve high-speed volumetric LSFM imaging without perturbing the sample, it is necessary to use some form of remote refocusing in the detection beam path. Previous work used electrically tunable lenses, tunable acoustic gradient index of refraction lenses, or the remote-refocusing approach of Botcherby et al. [Opt. Lett. 32(14), 2007 (2007)] to achieve remote refocusing. However, these approaches generally only provide low-order defocus correction, w
45

Fox-Gieg, Nick. "Lightning Artist Toolkit: A Hand-Drawn Volumetric Animation Pipeline." Proceedings of the ACM on Computer Graphics and Interactive Techniques 7, no. 4 (2024): 1–7. http://dx.doi.org/10.1145/3664221.

Abstract:
We propose a set of methods for freely integrating live-action volumetric video with hand-drawn volumetric animation, which our research develops as the Lightning Artist Toolkit (Latk)---a complete pipeline for hand-drawn volumetric animation, as far as we know the only open-source example of its kind. Our goal with this project is to make creation in 3D as expressive and intuitive as creation in 2D, retaining the human gesture from its origins in hand-drawn animation on paper. This effort is less a computer vision challenge with an objective goal, as with for example point cloud segmentation,
46

Jiang, Yuheng, Zhehao Shen, Yu Hong, et al. "Robust Dual Gaussian Splatting for Immersive Human-centric Volumetric Videos." ACM Transactions on Graphics 43, no. 6 (2024): 1–15. http://dx.doi.org/10.1145/3687926.

Abstract:
Volumetric video represents a transformative advancement in visual media, enabling users to freely navigate immersive virtual experiences and narrowing the gap between digital and real worlds. However, the need for extensive manual intervention to stabilize mesh sequences and the generation of excessively large assets in existing workflows impedes broader adoption. In this paper, we present a novel Gaussian-based approach, dubbed DualGS, for real-time and high-fidelity playback of complex human performance with excellent compression ratios. Our key idea in DualGS is to separately represent mo
47

Singh, Vikramjeet. "Next-Gen Media: How Hardware Video Encoders Are Shaping Content Creation." European Journal of Computer Science and Information Technology 13, no. 47 (2025): 125–33. https://doi.org/10.37745/ejcsit.2013/vol13n47125133.

Abstract:
This article examines the transformative impact of hardware video encoders on contemporary content creation across various industries. As demand for high-quality video content continues to surge across digital platforms, dedicated encoding hardware integrated into system-on-chip technology has become an essential component, enabling real-time, power-efficient video processing directly on mobile and edge devices. The evolution from software-based encoding to specialized silicon solutions has dramatically reduced computational demands while improving compression efficiency. These advancements, p
48

Revinskaya, I. I., P. V. Kamlach, and Yu I. Liashchevich. "Hardware-software complex for studying of breathing volume parameters." Proceedings of the National Academy of Sciences of Belarus, Physical-Technical Series 68, no. 2 (2023): 149–55. http://dx.doi.org/10.29235/1561-8358-2023-68-2-149-155.

Abstract:
In this paper, a developed hardware-software complex for studying volume parameters of breathing is considered. To estimate the volumetric parameters of breathing, a method for registering the movement of the chest and abdominal walls by changing the overall dimensions of the chest and abdomen with ranking according to the anatomical features of a person is proposed. A technique for researching the volumetric parameters of breathing based on the method of video recording of the movements of the chest and abdominal wall of a person was developed. The proposed method was used to estimate volume
49

Yu, Qihang, Yingwei Li, Jieru Mei, Yuyin Zhou, and Alan Yuille. "CAKES: Channel-wise Automatic KErnel Shrinking for Efficient 3D Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (2021): 3225–33. http://dx.doi.org/10.1609/aaai.v35i4.16433.

Abstract:
3D Convolution Neural Networks (CNNs) have been widely applied to 3D scene understanding, such as video analysis and volumetric image recognition. However, 3D networks can easily lead to over-parameterization which incurs expensive computation cost. In this paper, we propose Channel-wise Automatic KErnel Shrinking (CAKES), to enable efficient 3D learning by shrinking standard 3D convolutions into a set of economic operations (e.g., 1D, 2D convolutions). Unlike previous methods, CAKES performs channel-wise kernel shrinkage, which enjoys the following benefits: 1) enabling operations deployed in
50

Park, Ji Hun. "Volumetric Model Body Outline Computation for an Object Tracking in a Video Stream." Applied Mechanics and Materials 479-480 (December 2013): 897–900. http://dx.doi.org/10.4028/www.scientific.net/amm.479-480.897.

Abstract:
This paper presents a new outline contour generation method to track a rigid body in single video stream taken using a varying focal length and moving camera. We assume feature points and background eliminated images are provided, and we get different views of a tracked object when the object is stationary. Using different views of a tracked object, we volume-reconstruct a 3D model body after 3D scene analysis. For computing camera parameters and target object movement for a scene with a moving target object, we use fixed feature background points, and convert as a parameter optimization probl