Academic literature on the topic 'Video compositing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video compositing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
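In spirit, generating a reference in a chosen style boils down to filling a style-specific template with a record's metadata. The following minimal Python sketch illustrates the idea; the record dictionary and the two templates are simplified assumptions for demonstration, not the site's actual citation generator.

```python
# Simplified, illustrative style-specific reference formatting.
# The record fields and templates are assumptions, not the site's generator.

def cite(record: dict, style: str = "APA") -> str:
    """Render one journal-article record in a (simplified) APA or MLA style."""
    if style == "APA":
        return ("{authors} ({year}). {title}. {journal}, "
                "{volume}({issue}), {pages}. {doi}").format(**record)
    if style == "MLA":
        return ('{authors}. "{title}." {journal}, vol. {volume}, '
                "no. {issue}, {year}, pp. {pages}.").format(**record)
    raise ValueError("unsupported style: " + style)

record = {
    "authors": "Yun, L. C., & Messerschmitt, D. G.",
    "year": 1994,
    "title": "On architectures for video compositing",
    "journal": "Multimedia Systems",
    "volume": 2,
    "issue": 4,
    "pages": "181-190",
    "doi": "http://dx.doi.org/10.1007/bf01210449",
}

print(cite(record, "APA"))
print(cite(record, "MLA"))
```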

Journal articles on the topic "Video compositing"

1

Rüegg, Jan, Oliver Wang, Aljoscha Smolic, and Markus Gross. "DuctTake: Spatiotemporal Video Compositing." Computer Graphics Forum 32, no. 2pt1 (May 2013): 51–61. http://dx.doi.org/10.1111/cgf.12025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yun, Louis C., and David G. Messerschmitt. "On architectures for video compositing." Multimedia Systems 2, no. 4 (October 1994): 181–90. http://dx.doi.org/10.1007/bf01210449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ardiyan, Ardiyan. "Video Tracking dalam Digital Compositing untuk Paska Produksi Video" [Video tracking in digital compositing for video post-production]. Humaniora 3, no. 1 (April 30, 2012): 1. http://dx.doi.org/10.21512/humaniora.v3i1.3227.

Full text
Abstract:
Video tracking is one of the processes in digital video and motion picture post-production. The video tracking method is helpful in production for realizing the visual concept, and it is an important consideration in the making of visual effects. This paper presents the tracking process and its benefits for visual needs, especially in video and motion picture production. Several aspects of the tracking process, including the ways in which it can fail, are made clear in this discussion. (A minimal point-tracking sketch follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
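For readers unfamiliar with the tracking step mentioned in the abstract above, here is a minimal sparse point-tracking sketch using OpenCV's Lucas-Kanade optical flow. The clip name and the way the tracked points would drive a composited layer are illustrative assumptions, not the workflow from the cited paper.

```python
# Minimal 2D point tracking with OpenCV (pip install opencv-python).
# "plate.mp4" is a hypothetical source clip; in a compositing tool the tracked
# point positions would drive the transform of an inserted layer.
import cv2

cap = cv2.VideoCapture("plate.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Pick corner features to follow across frames.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                 qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade optical flow advances each point to the new frame.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    # A tracking failure shows up here as points being dropped (status == 0).

cap.release()
```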
4

de Lima, Edirlei Soares, Bruno Feijó, and Antonio L. Furtado. "Video-based interactive storytelling using real-time video compositing techniques." Multimedia Tools and Applications 77, no. 2 (February 2, 2017): 2333–57. http://dx.doi.org/10.1007/s11042-017-4423-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

김준수. "Chroma Keying in Video Compositing with Matting." Journal of Korea Design Knowledge, no. 34 (June 2015): 265–74. http://dx.doi.org/10.17246/jkdk.2015..34.024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chang, Shih-Fu, and David G. Messerschmitt. "Compositing motion-compensated video within the network." ACM SIGCOMM Computer Communication Review 22, no. 3 (July 1992): 16–17. http://dx.doi.org/10.1145/142267.142272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nicolas, Henri, and Franck Denoual. "Semi-automatic modifications of video object trajectories for video compositing applications." Signal Processing 85, no. 10 (October 2005): 1970–83. http://dx.doi.org/10.1016/j.sigpro.2005.02.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rokita, Przemyslaw. "Compositing computer graphics and real world video sequences." Computer Networks and ISDN Systems 30, no. 20-21 (November 1998): 2047–57. http://dx.doi.org/10.1016/s0169-7552(98)00206-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Qin, Xueying, E. Nakamae, and K. Tadamura. "Automatically compositing still images and landscape video sequences." IEEE Computer Graphics and Applications 22, no. 1 (2002): 68–78. http://dx.doi.org/10.1109/38.974520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chang, Shih-Fu, and D. G. Messerschmitt. "Manipulation and compositing of MC-DCT compressed video." IEEE Journal on Selected Areas in Communications 13, no. 1 (1995): 1–11. http://dx.doi.org/10.1109/49.363151.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Video compositing"

1

Meigneux, Guillaume. "Le territoire à l'épreuve du compositing : pratiques vidéographiques et ambiances urbaines" [Territory put to the test of compositing: videographic practices and urban ambiances]. Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAH010/document.

Full text
Abstract:
This research examines the heuristic and operational stakes of video in the practice of urban planning. To do so, it brings together the notion of architectural and urban ambiances, as developed at CRESSON, and the concepts of image-mouvement and image-temps developed by Deleuze. It then proposes to make this encounter effective within the practice of urban planning through digital compositing, a technique for manipulating moving images. The hypothesis guiding this research is that it is possible to define a composite image capable of bringing out, and putting up for debate, the ambiance phenomena specific to the territories studied. This hypothesis is formalized around two corpora: the first stems from an artistic video practice that motivated this thesis project, the second from a video practice within an urban planning agency carried out throughout the research. This work establishes video both as a medium of knowledge and as a project stance. As a medium of knowledge, because video makes it possible to renew the phenomenological approach current in the field of ambiances by grasping sensory phenomena as they unfold in time. As a project stance, because video can reconfigure the relational modalities at work in the analysis and design of space and territory.
APA, Harvard, Vancouver, ISO, and other styles
2

Grundhöfer, Anselm. "Synchronized Illumination Modulation for Digital Video Compositing." Advisor: Oliver Bimber. Weimar: Juniorprofessur Augmented Reality, 2010. http://d-nb.info/1115342398/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Subramanian, Anbumani. "Layer Extraction and Image Compositing using a Moving-aperture Lens." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28152.

Full text
Abstract:
Image layers are two-dimensional planes, each comprised of objects extracted from a two-dimensional (2D) image of a scene. Multiple image layers together make up a given 2D image, similar to the way a stack of transparent sheets with drawings together make up a scene in an animation. Extracting layers from 2D images continues to be a difficult task. Image compositing is the process of superimposing two or more image layers to create a new image which often appears real, although it was made from one or more images. This technique is commonly used to create special visual effects in movies, videos, and television broadcasts. In the widely used "blue screen" method of compositing, a video of a person in front of a blue screen is first taken. Then the image of the person is extracted from the video by subtracting the blue portion in the video, and this image is then superimposed onto another image of a different scene, like a weather map. In the resulting image, the person appears to be in front of a weather map, although the image was digitally created. This technique, although popular, imposes constraints on the object color and reflectance properties and severely restricts the scene setup. Therefore, layer extraction and image compositing remain a challenge in the field of computer vision and graphics. In this research, a novel method of layer extraction and image compositing is conceived using a moving-aperture lens, and a prototype of the system is developed. In an image sequence captured with this lens attached to a standard camera, stationary objects in a scene appear to move. The apparent motion in images is created due to planar parallax between objects in a scene. The parallax information is exploited in this research to extract objects from an image of a scene, as layers, to perform image compositing. The developed technique relaxes constraints on object color and reflectance properties, and requires no special components in a scene to perform compositing. Results from various indoor and outdoor stationary scenes convincingly demonstrate the efficacy of the developed technique. The knowledge of some basic information about the camera parameters also enables passive range estimation. Other potential uses of this method include surveillance, autonomous vehicle navigation, video content manipulation and video compression. (A minimal blue-screen keying sketch follows this entry.)
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
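As a rough illustration of the blue-screen compositing described in the abstract above, the sketch below keys out a blue backing with a crude colour test and performs a standard "over" composite. File names and thresholds are assumptions; production keyers, and the dissertation's moving-aperture method, are far more sophisticated.

```python
# Crude blue-screen key and "over" composite with NumPy and OpenCV.
# Input file names and thresholds are illustrative assumptions.
import cv2
import numpy as np

fg = cv2.imread("actor_bluescreen.png").astype(np.float32) / 255.0   # foreground plate
bg = cv2.imread("weather_map.png").astype(np.float32) / 255.0        # background plate
bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))

b, g, r = cv2.split(fg)
# Treat pixels where blue clearly dominates the other channels as backing.
backing = (b > 0.5) & (b > g + 0.15) & (b > r + 0.15)
alpha = np.where(backing, 0.0, 1.0).astype(np.float32)
alpha = cv2.GaussianBlur(alpha, (5, 5), 0)        # soften the matte edge slightly

# Standard "over" operation: out = alpha * fg + (1 - alpha) * bg
alpha3 = cv2.merge([alpha, alpha, alpha])
out = alpha3 * fg + (1.0 - alpha3) * bg
cv2.imwrite("composite.png", (out * 255).astype(np.uint8))
```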
4

Hirsh, David E. "Photorealistic Rendering for Live-Action Video Integration." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/honors/399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mustafa, Mohammad. "Video-Based 3D Textures." Thesis, University of Gävle, Department of Mathematics, Natural and Computer Sciences, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-163.

Full text
Abstract:

A new approach to object replacement in 3D space is presented, introducing a technique that replaces the older two-dimensional (2D) facial replacement method performed by compositing artists in motion picture production and the video commercial industry.

This method uses four digital video cameras filming an actor from 360 degrees, placed 90 degrees apart. The acquired footage is then used to produce a 3D video texture consisting of video segments taken from different angles, representing the object from a 3D point of view.

The video texture is then applied to a 3D-modelled head matching the geometry of the original object.

This offers the freedom of showing the object from any point of view in 3D space, which is not possible with the current two-dimensional method, where the actor must face the camera at all times.

The method is described in detail, with images showing every stage of the process.

Results are presented as still frames taken from the final video footage and as a video file demonstrating them. (A minimal camera-selection sketch follows this entry.)

APA, Harvard, Vancouver, ISO, and other styles
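To make the four-camera idea in the abstract above concrete, the sketch below picks, for a given viewing azimuth, the nearest of four cameras placed 90 degrees apart; that camera's footage would then be sampled as the view-dependent texture. The function names and the orbiting-camera loop are illustrative assumptions, not the thesis implementation.

```python
# Minimal view-dependent camera selection: four cameras spaced 90 degrees apart
# around the actor; the one closest to the current viewing azimuth supplies the texture.

CAMERA_AZIMUTHS = [0.0, 90.0, 180.0, 270.0]   # degrees around the subject

def angular_distance(a: float, b: float) -> float:
    """Smallest absolute angle between two azimuths, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def closest_camera(view_azimuth_deg: float) -> int:
    """Index of the camera whose azimuth is nearest to the viewing direction."""
    return min(range(len(CAMERA_AZIMUTHS)),
               key=lambda i: angular_distance(view_azimuth_deg, CAMERA_AZIMUTHS[i]))

# Example: a virtual camera orbiting the head model picks which footage to map.
for azimuth in (10.0, 100.0, 200.0, 350.0):
    print("view at %5.1f deg -> camera %d" % (azimuth, closest_camera(azimuth)))
```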
6

Szabados, Luke. "Splat! Fragmented Space in Experimental Cinema." Ohio University Honors Tutorial College / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1461940887.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sunkavalli, Kalyan. "Models of Visual Appearance for Analyzing and Editing Images and Videos." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10285.

Full text
Abstract:
The visual appearance of an image is a complex function of factors such as scene geometry, material reflectances and textures, illumination, and the properties of the camera used to capture the image. Understanding how these factors interact to produce an image is a fundamental problem in computer vision and graphics. This dissertation examines two aspects of this problem: models of visual appearance that allow us to recover scene properties from images and videos, and tools that allow users to manipulate visual appearance in images and videos in intuitive ways. In particular, we look at these problems in three different applications. First, we propose techniques for compositing images that differ significantly in their appearance. Our framework transfers appearance between images by manipulating the different levels of a multi-scale decomposition of the image. This allows users to create realistic composites with minimal interaction in a number of different scenarios. We also discuss techniques for compositing and replacing facial performances in videos. Second, we look at the problem of creating high-quality still images from low-quality video clips. Traditional multi-image enhancement techniques accomplish this by inverting the camera’s imaging process. Our system incorporates feature weights into these image models to create results that have better resolution, noise, and blur characteristics, and summarize the activity in the video. Finally, we analyze variations in scene appearance caused by changes in lighting. We develop a model for outdoor scene appearance that allows us to recover radiometric and geometric information about the scene from images. We apply this model to a variety of visual tasks, including color-constancy, background subtraction, shadow detection, scene reconstruction, and camera geo-location. We also show that the appearance of a Lambertian scene can be modeled as a combination of distinct three-dimensional illumination subspaces, a result that leads to novel bounds on scene appearance, and a robust uncalibrated photometric stereo method. (A minimal multi-scale blending sketch follows this entry.)
Engineering and Applied Sciences
APA, Harvard, Vancouver, ISO, and other styles
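The multi-scale compositing mentioned in the abstract above belongs to the family of pyramid blending. Below is a generic Burt-Adelson style Laplacian-pyramid blend as a minimal sketch; it is not the dissertation's appearance-transfer framework, and the function names are assumptions.

```python
# Generic Laplacian-pyramid blend of image "a" over image "b" with a soft mask.
# Assumes a and b are same-size uint8 BGR images and mask is float32 in [0, 1].
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Gaussian pyramid differences (fine to coarse), plus the coarsest level."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels)]
    lp.append(gp[-1])
    return lp

def pyramid_blend(a, b, mask, levels=4):
    """Blend each pyramid level separately, then collapse the result."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    gm = [mask.astype(np.float32)]
    for _ in range(levels):
        gm.append(cv2.pyrDown(gm[-1]))
    blended = [m[..., None] * x + (1.0 - m[..., None]) * y
               for x, y, m in zip(la, lb, gm)]
    out = blended[-1]
    for level in reversed(blended[:-1]):
        out = cv2.pyrUp(out, dstsize=(level.shape[1], level.shape[0])) + level
    return np.clip(out, 0, 255).astype(np.uint8)
```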
8

Poledník, Tomáš. "Detektor ohně ve videu" [Fire detector in video]. Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234923.

Full text
Abstract:
This thesis deals with fire detection in video by colour analysis and machine learning, specifically deep convolutional neural networks, using the Caffe framework. The aim is to create a vast data set that can serve as the basis for machine-learning detection and to build a detector usable in real applications. For the purposes of the project, a set of tools for creating fire sequences, segmenting them, and labeling them automatically is proposed and implemented, together with a large test set of short sequences containing artificially modelled fire. (A minimal colour-analysis sketch follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
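As a toy illustration of the colour-analysis component mentioned in the abstract above, the sketch below flags frames with many fire-coloured pixels using an HSV threshold. The file name, HSV bounds, and trigger threshold are assumptions; the thesis itself relies on deep convolutional networks trained with Caffe, not on this heuristic.

```python
# Toy colour-based fire-pixel heuristic for video frames (not the thesis's CNN detector).
# "fire_clip.mp4", the HSV bounds, and the 1% trigger threshold are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("fire_clip.mp4")
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Flames tend toward red/orange/yellow hues with high saturation and brightness.
    mask = cv2.inRange(hsv, (0, 80, 150), (35, 255, 255))
    fire_ratio = float(np.count_nonzero(mask)) / mask.size
    if fire_ratio > 0.01:
        print("frame %d: possible fire (%.1f%% fire-coloured pixels)"
              % (frame_index, 100.0 * fire_ratio))
    frame_index += 1
cap.release()
```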
9

Pu, Ruonan. "Target-sensitive video segmentation for seamless video composition." Thesis, Hong Kong University of Science and Technology, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20PU.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Omizo, Ryan Masaaki. "Facing Vernacular Video." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339184415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Video compositing"

1

Digital compositing for film and video. Boston: Focal, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wright, Steve. Digital compositing for film and video. 3rd ed. Amsterdam: Focal Press/Elsevier, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wright, Steve. Digital compositing for film and video. Boston: Focal Press, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Paolini, Marco. Shake 3: [professional compositing and special effects]. Berkeley, CA: Peachpit Press, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Colby, Richard, Matthew S. S. Johnson, and Rebekah Shultz Colby, eds. Rhetoric/Composition/Play through Video Games. New York: Palgrave Macmillan US, 2013. http://dx.doi.org/10.1057/9781137307675.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rawling, Keith, ed. Play and say with Paddy and Pip: Video activity book. London: Macmillan, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lancaster, Kurt. DSLR cinema: Crafting the film look with video. 2nd ed. Waltham, MA: Focal Press, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

DSLR Cinema: Crafting the film look with video. Amsterdam: Focal Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Fulwiler, Betsy Rupp. Writing in science in action: Strategies, tools, and classroom video. Portsmouth, NH: Heinemann, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

DSLR cinema: Crafting the film look with video. 2nd ed. Waltham, MA: Focal Press, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Video compositing"

1

Jackman, John. "Video Format Problems." In Bluescreen Compositing, 93–100. Routledge, 2007. http://dx.doi.org/10.4324/9780080948973-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jackman, John. "Video Format Problems." In Bluescreen Compositing, 93–100. Elsevier, 2007. http://dx.doi.org/10.1016/b978-1-57820-283-6.50010-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wright, Steve. "Working with Video." In Compositing Visual Effects, 199–215. Elsevier, 2011. http://dx.doi.org/10.1016/b978-0-240-81781-1.10011-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

WRIGHT, S. "Working with Video." In Compositing Visual Effects, 189–204. Elsevier, 2008. http://dx.doi.org/10.1016/b978-0-240-80963-2.50014-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

"Nonlinear Video Editor." In Foundation Blender Compositing, 391–421. Berkeley, CA: Apress, 2009. http://dx.doi.org/10.1007/978-1-4302-1977-4_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

"Working with Video." In Compositing Visual Effects, 203–18. Routledge, 2012. http://dx.doi.org/10.4324/9780080555058-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

"Working with Video." In Compositing Visual Effects, 213–30. Routledge, 2013. http://dx.doi.org/10.4324/9780240817828-16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wright, Steve. "Video." In Digital Compositing for Film and Video, 209–37. Elsevier, 2001. http://dx.doi.org/10.1016/b978-0-240-80455-2.50013-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

"Video." In Digital Compositing for Film and Video, 319–56. Elsevier, 2010. http://dx.doi.org/10.1016/b978-0-240-81309-7.00012-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

"Video." In Digital Compositing for Film and Video, 219–48. Routledge, 2001. http://dx.doi.org/10.4324/9780080504360-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Video compositing"

1

Skupin, Robert, Yago Sanchez, and Thomas Schierl. "Compressed domain video compositing with HEVC." In 2015 Picture Coding Symposium (PCS). IEEE, 2015. http://dx.doi.org/10.1109/pcs.2015.7170092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hoseini, S. A., and S. Jafari. "Automatic video mosaicing using hierarchical compositing." In 2011 International Conference on Graphic and Image Processing. SPIE, 2011. http://dx.doi.org/10.1117/12.913507.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fukuda, Akihiro, Hirotaka Tanaka, Yuji Waizumi, and Tadashi Kasezawa. "Smoke matting and compositing in video sequences." In International Workshop on Advanced Image Technology, edited by Phooi Yee Lau, Kazuya Hayase, Qian Kemao, Wen-Nung Lie, Yung-Lyul Lee, Sanun Srisuk, and Lu Yu. SPIE, 2019. http://dx.doi.org/10.1117/12.2521260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

DuVall, Matthew, John Flynn, Michael Broxton, and Paul Debevec. "Compositing light field video using multiplane images." In SIGGRAPH '19: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3306214.3338614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yun, Louis C., and David G. Messerschmitt. "Architectures for multi-source multi-user video compositing." In Proceedings of the First ACM International Conference on Multimedia. New York, New York, USA: ACM Press, 1993. http://dx.doi.org/10.1145/166266.166291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Schnyder, Lars, Manuel Lang, Oliver Wang, and Aljoscha Smolic. "Depth image based compositing for stereo 3D." In 2012 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON 2012). IEEE, 2012. http://dx.doi.org/10.1109/3dtv.2012.6365451.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chu, Yu, Chunxia Xiao, Yong Tian, Xunhua Yang, and Guangpu Feng. "Fast gradient-domain video compositing using hierarchical data structure." In 2009 11th IEEE International Conference on Computer-Aided Design and Computer Graphics (CAD/Graphics). IEEE, 2009. http://dx.doi.org/10.1109/cadcg.2009.5246909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hunt, C. B. "Digital film workstations - evolution in the video compositing suite." In International Broadcasting Convention - IBC '94. IEE, 1994. http://dx.doi.org/10.1049/cp:19940757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jiang, Kai, Xiaowu Chen, and Qinping Zhao. "Automatic Compositing Soccer Video Highlights with Core-Around Event Model." In 2011 12th International Conference on Computer-Aided Design and Computer Graphics (CAD/Graphics). IEEE, 2011. http://dx.doi.org/10.1109/cad/graphics.2011.38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gopalakrishnan, Uma, P. Venkat Rangan, N. Ramkumar, and Balaji Hariharan. "Spatio-Temporal Compositing of Video Elements for Immersive eLearning Classrooms." In 2017 IEEE International Symposium on Multimedia (ISM). IEEE, 2017. http://dx.doi.org/10.1109/ism.2017.120.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Video compositing"

1

Davis, Cabell S., and Scott M. Gallager. Automated Analysis of Zooplankton Size and Taxonomic Composition Using the Video Plankton Recorder. Fort Belvoir, VA: Defense Technical Information Center, December 1994. http://dx.doi.org/10.21236/ada289725.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Baral, Aniruddha, Jeffery Roesler, and Junryu Fu. Early-age Properties of High-volume Fly Ash Concrete Mixes for Pavement: Volume 2. Illinois Center for Transportation, September 2021. http://dx.doi.org/10.36501/0197-9191/21-031.

Full text
Abstract:
High-volume fly ash concrete (HVFAC) is more cost-efficient, sustainable, and durable than conventional concrete. This report presents a state-of-the-art review of HVFAC properties and different fly ash characterization methods. The main challenges identified for HVFAC for pavements are its early-age properties such as air entrainment, setting time, and strength gain, which are the focus of this research. Five fly ash sources in Illinois have been repeatedly characterized through x-ray diffraction, x-ray fluorescence, and laser diffraction over time. The fly ash oxide compositions from the same source but different quarterly samples were overall consistent with most variations observed in SO3 and MgO content. The minerals present in various fly ash sources were similar over multiple quarters, with the mineral content varying. The types of carbon present in the fly ash were also characterized through x-ray photoelectron spectroscopy, loss on ignition, and foam index tests. A new computer vision–based digital foam index test was developed to automatically capture and quantify a video of the foam layer for better operator and laboratory reliability. The heat of hydration and setting times of HVFAC mixes for different cement and fly ash sources as well as chemical admixtures were investigated using an isothermal calorimeter. Class C HVFAC mixes had a higher sulfate imbalance than Class F mixes. The addition of chemical admixtures (both PCE- and lignosulfonate-based) delayed the hydration, with the delay higher for the PCE-based admixture. Both micro- and nano-limestone replacement were successful in accelerating the setting times, with nano-limestone being more effective than micro-limestone. A field test section constructed of HVFAC showed the feasibility and importance of using the noncontact ultrasound device to measure the final setting time as well as determine the saw-cutting time. Moreover, field implementation of the maturity method based on wireless thermal sensors demonstrated its viability for early opening strength, and only a few sensors with pavement depth are needed to estimate the field maturity.
APA, Harvard, Vancouver, ISO, and other styles