Academic literature on the topic 'Video transformation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video transformation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Video transformation"

1

Wang, Ning, Wengang Zhou, and Houqiang Li. "Contrastive Transformation for Self-supervised Correspondence Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (2021): 10174–82. http://dx.doi.org/10.1609/aaai.v35i11.17220.

Full text
Abstract:
In this paper, we focus on the self-supervised learning of visual correspondence using unlabeled videos in the wild. Our method simultaneously considers intra- and inter-video representation associations for reliable correspondence estimation. The intra-video learning transforms the image contents across frames within a single video via the frame pair-wise affinity. To obtain the discriminative representation for instance-level separation, we go beyond the intra-video analysis and construct the inter-video affinity to facilitate the contrastive transformation across different videos. By forcin…
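The intra-video mechanism summarized above — reconstructing one frame from another through a pair-wise affinity over frame features — can be illustrated in a few lines. This is a minimal sketch of the general affinity-transformation idea, not the authors' implementation; the softmax-normalized dot-product affinity and the toy identity features are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transform_across_frames(feat_a, feat_b, colors_a):
    """Reconstruct frame B's colors as an affinity-weighted mixture of
    frame A's colors; the affinity comes from feature similarity.
    feat_a, feat_b: (N, D) per-pixel features; colors_a: (N, 3)."""
    affinity = softmax(feat_b @ feat_a.T / np.sqrt(feat_a.shape[1]), axis=1)
    return affinity @ colors_a

# Toy check: with maximally distinctive, identical features in both
# frames, the affinity is near-identity and the colors are recovered.
feat = 10.0 * np.eye(16)                          # 16 "pixels", 16-dim features
colors = np.random.default_rng(0).uniform(size=(16, 3))
recon = transform_across_frames(feat, feat, colors)
```

In self-supervised training, the reconstruction error of `recon` against the true colors of the second frame would supply the learning signal.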
2

Mohammed, Dhrgham Hani, and Laith Ali Abdul-Rahaim. "A Proposed of Multimedia Compression System Using Three - Dimensional Transformation." Webology 18, SI05 (2021): 816–31. http://dx.doi.org/10.14704/web/v18si05/web18264.

Full text
Abstract:
Video compression has become especially important nowadays with the increase of data transmitted over transmission channels, the reducing the size of the videos must be done without affecting the quality of the video. This process is done by cutting the video thread into frames of specific lengths and converting them into a three-dimensional matrix. The proposed compression scheme uses the traditional red-green-blue color space representation and applies a three-dimensional discrete Fourier transform (3D-DFT) or three-dimensional discrete wavelet transform (3D-DWT) to the signal matrix after c…
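The pipeline the abstract outlines — cut the video into fixed-length frame groups, stack them into a three-dimensional matrix, and apply a 3D transform — can be sketched with NumPy's 3D FFT. This is an illustrative sketch assuming grayscale frames and a simple keep-the-largest-coefficients step; the paper's actual color handling and quantization differ:

```python
import numpy as np

def compress_group(frames, keep_ratio=0.05):
    """Stack grayscale frames into a 3D volume, apply a 3D DFT,
    and zero out all but the largest-magnitude coefficients."""
    volume = np.stack(frames, axis=0).astype(np.float64)   # shape (T, H, W)
    coeffs = np.fft.fftn(volume)                           # 3D discrete Fourier transform
    # Keep roughly the top `keep_ratio` fraction of coefficients by magnitude.
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep_ratio)
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0)

def decompress_group(sparse):
    """Invert the 3D DFT to recover an approximation of the frames."""
    return np.real(np.fft.ifftn(sparse))

# Example: 8 smooth synthetic 32x32 frames compress well because their
# energy concentrates in a few 3D frequency coefficients.
frames = [np.outer(np.sin(np.linspace(0, 3, 32)) + t * 0.01,
                   np.cos(np.linspace(0, 3, 32))) for t in range(8)]
sparse = compress_group(frames, keep_ratio=0.1)
recon = decompress_group(sparse)
```

Smooth, temporally redundant content survives aggressive coefficient pruning with little visible error, which is the premise of transform-based video compression.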
3

Anggraini, Sazkia Noor. "Aesthetic Transformation of Video4Change Project Through Postmodernism Studies." International Journal of Creative and Arts Studies 1, no. 1 (2017): 44. http://dx.doi.org/10.24821/ijcas.v1i1.1571.

Full text
Abstract:
Related research on community videos commonly limited in the social domain. This may happen because making video community is not classified as work of art, but rather as a tool to convey messages on community organizing method. Video4Change (v4c) project here consist different organizations in four countries; Indonesia, India, America and Israel. The review of videos conducted in textual and visual ethnography. This method used to specify all the things captured in the sense, the visual, the voice (audio) and the symbol on each video. Video as a medium in the postmodernism era considered as a…
4

Salim, Fahim A., Fasih Haider, Saturnino Luz, and Owen Conlan. "Automatic Transformation of a Video Using Multimodal Information for an Engaging Exploration Experience." Applied Sciences 10, no. 9 (2020): 3056. http://dx.doi.org/10.3390/app10093056.

Full text
Abstract:
Exploring the content of a video is typically inefficient due to the linear streamed nature of its media and the lack of interactivity. While different approaches have been proposed for enhancing the exploration experience of video content, the general view of video content has remained basically the same, that is, a continuous stream of images. It is our contention that such a conservative view on video limits its potential value as a content source. This paper presents An Alternative Representation of Video via feature Extraction (RAAVE), a novel approach to transform videos from a linear st…
5

Zhang, Lei, Xiao-Quan Chen, Xin-Yi Kong, and Hua Huang. "Geodesic Video Stabilization in Transformation Space." IEEE Transactions on Image Processing 26, no. 5 (2017): 2219–29. http://dx.doi.org/10.1109/tip.2017.2676354.

Full text
6

Vinod, Malavika, M. Pallavi, Sreelakshmi Ajith, and Padmamala Sriram. "Reversible Data Hiding in Encrypted Video Using Reversible Image Transformation." Journal of Computational and Theoretical Nanoscience 17, no. 1 (2020): 136–40. http://dx.doi.org/10.1166/jctn.2020.8640.

Full text
Abstract:
This work focuses on a method to hide images in videos in a manner that the secret image can be losslessly recovered from the target image with minimal distortion. This lossless recovery can be done by using Reversible Data Hiding (RDH). To ensure the privacy of the video owner, the target media is encrypted before applying RDH. Reversible Image Transformation (RIT) is a framework used to ensure that. Audio Steganography techniques are used for further encryption. This can be used in cloud technology so that the cloud may add information into the target video without compromising its integrity…
7

C, Chanjal. "Feature Re-Learning for Video Recommendation." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (2021): 3143–49. http://dx.doi.org/10.22214/ijraset.2021.35350.

Full text
Abstract:
Predicting the relevance between two given videos with respect to their visual content is a key component for content-based video recommendation and retrieval. The application is in video recommendation, video annotation, Category or near-duplicate video retrieval, video copy detection and so on. In order to estimate video relevance previous works utilize textual content of videos and lead to poor performance. The proposed method is feature re-learning for video relevance prediction. This work focus on the visual contents to predict the relevance between two videos. A given feature is projecte…
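The truncated sentence above describes projecting a given visual feature into a new space in which relevance between two videos is measured. A hedged sketch of that idea — a single affine projection with L2 normalization, scoring relevance by cosine similarity; the actual method's architecture and training loss are not reproduced here:

```python
import numpy as np

def relearn(feature, W, b):
    """Project a raw video feature into the re-learned space:
    one affine map followed by L2 normalization (an assumed
    minimal form of 'feature re-learning')."""
    z = feature @ W + b
    return z / np.linalg.norm(z)

def relevance(feat_a, feat_b, W, b):
    """Cosine similarity between two re-learned features."""
    return float(relearn(feat_a, W, b) @ relearn(feat_b, W, b))

rng = np.random.default_rng(1)
W = rng.normal(size=(128, 32))   # stand-in for a learned projection matrix
b = rng.normal(size=32)
clip = rng.normal(size=128)      # a 128-dim visual feature of one video
```

By construction a clip is maximally relevant to itself, and any pair scores in [-1, 1], which makes the score usable for ranking candidate videos.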
8

Shi, Henan, Tanfeng Sun, Xinghao Jiang, Yi Dong, and Ke Xu. "A HEVC Video Steganalysis Against DCT/DST-Based Steganography." International Journal of Digital Crime and Forensics 13, no. 3 (2021): 19–33. http://dx.doi.org/10.4018/ijdcf.20210501.oa2.

Full text
Abstract:
The development of video steganography has put forward a higher demand for video steganalysis. This paper presents a novel steganalysis against discrete cosine/sine transform (DCT/DST)-based steganography for high efficiency video coding (HEVC) videos. The new steganalysis employs special frames extraction (SFE) and accordion unfolding (AU) transformation to target the latest DCT/DST domain HEVC video steganography algorithms by merging temporal and spatial correlation. In this article, the distortion process of DCT/DST-based HEVC steganography is firstly analyzed. Then, based on the analysis,…
9

Sowmyayani, S., and P. Arockia Jansi Rani. "An Efficient Temporal Redundancy Transformation for Wavelet Based Video Compression." International Journal of Image and Graphics 16, no. 03 (2016): 1650015. http://dx.doi.org/10.1142/s0219467816500157.

Full text
Abstract:
The objective of this work is to propose a novel idea of transforming temporal redundancies present in videos. Initially, the frames are divided into sub-blocks. Then, the temporally redundant blocks are grouped together thus generating new frames with spatially redundant temporal data. The transformed frames are given to compression in the wavelet domain. This new approach greatly reduces the computational time. The reason is that the existing video codecs use block matching methods for motion estimation which is a time consuming process. The proposed method avoids the use of block matching m…
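The first transformation step the abstract describes — divide frames into sub-blocks and detect which blocks are temporally redundant so they can be grouped — can be sketched as below. The block size, tolerance, and "unchanged since the previous frame" criterion are illustrative assumptions, not the paper's exact grouping rule:

```python
import numpy as np

def block_redundancy_map(frames, block=4, tol=1e-6):
    """Mark each block position as temporally redundant when it is
    (nearly) unchanged from the previous frame.
    frames: (T, H, W) array; returns (T, H//block, W//block) bool map."""
    T, H, W = frames.shape
    bh, bw = H // block, W // block
    redundant = np.zeros((T, bh, bw), dtype=bool)
    for t in range(1, T):
        diff = np.abs(frames[t] - frames[t - 1])
        # Maximum per-pixel change within each block.
        block_max = diff.reshape(bh, block, bw, block).max(axis=(1, 3))
        redundant[t] = block_max < tol
    return redundant

# Example: 3 frames of 8x8 pixels; only one 4x4 block changes once.
frames = np.zeros((3, 8, 8))
frames[1, 0, 0] = 1.0      # one block differs between frames 0 and 1
frames[2] = frames[1]      # nothing changes between frames 1 and 2
redundant = block_redundancy_map(frames, block=4)
```

Blocks flagged redundant can then be packed together into new frames, turning temporal redundancy into the spatial redundancy that a wavelet coder removes cheaply — without any block-matching motion search.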
10

Andriolo, Umberto. "Nearshore Wave Transformation Domains from Video Imagery." Journal of Marine Science and Engineering 7, no. 6 (2019): 186. http://dx.doi.org/10.3390/jmse7060186.

Full text
Abstract:
Within the nearshore area, three wave transformation domains can be distinguished based on the wave properties: shoaling, surf, and swash zones. The identification of these distinct areas is relevant for understanding nearshore wave propagation properties and physical processes, as these zones can be related, for instance, to different types of sediment transport. This work presents a technique to automatically retrieve the nearshore wave transformation domains from images taken by coastal video monitoring stations. The technique exploits the pixel intensity variation of image acquisitions, an…
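The core signal this technique exploits — per-pixel intensity variation over the image acquisitions — reduces to a temporal standard-deviation map, from which zones can be thresholded. A toy sketch; the threshold values and the direct variation-to-zone mapping are assumptions, not the paper's calibrated procedure:

```python
import numpy as np

def intensity_variation_map(image_stack):
    """Per-pixel standard deviation of intensity over time; in coastal
    video, breaking waves (surf zone) produce far larger temporal
    variation than still offshore water or dry beach."""
    return np.std(image_stack, axis=0)

# Synthetic 10-frame "video" of 1 row x 3 columns of pixels:
t = np.arange(10)
flicker = np.where(t % 2 == 0, 1.0, -1.0)
stack = np.zeros((10, 1, 3))
stack[:, 0, 0] = 0.5                     # static pixel (dry beach)
stack[:, 0, 1] = 0.5 + 0.1 * flicker     # mild oscillation (swash-like)
stack[:, 0, 2] = 0.5 + 0.4 * flicker     # strong oscillation (surf-like)
variation = intensity_variation_map(stack)

# Threshold the variation map into rough zones (thresholds assumed):
zones = np.where(variation >= 0.3, 2,            # 2 = surf-like
         np.where(variation >= 0.05, 1, 0))      # 1 = swash-like, 0 = other
```

In practice the zoning is done along cross-shore pixel transects rather than by global thresholds, but the variation map is the same starting point.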