
Journal articles on the topic 'Video compositing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Video compositing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Rüegg, Jan, Oliver Wang, Aljoscha Smolic, and Markus Gross. "DuctTake: Spatiotemporal Video Compositing." Computer Graphics Forum 32, no. 2pt1 (May 2013): 51–61. http://dx.doi.org/10.1111/cgf.12025.

2

Yun, Louis C., and David G. Messerschmitt. "On architectures for video compositing." Multimedia Systems 2, no. 4 (October 1994): 181–90. http://dx.doi.org/10.1007/bf01210449.

3

Ardiyan, Ardiyan. "Video Tracking dalam Digital Compositing untuk Paska Produksi Video." Humaniora 3, no. 1 (April 30, 2012): 1. http://dx.doi.org/10.21512/humaniora.v3i1.3227.

Abstract:
Video tracking is one of the processes in digital post-production for video and motion pictures. The tracking method helps realize the visual concept during production and is an important consideration in the making of visual effects. This paper presents how the tracking process works and what benefits it brings to visual needs, especially in video and motion-picture production. Several aspects of the tracking process, including the ways it can fail, are made clear in this discussion.
4

de Lima, Edirlei Soares, Bruno Feijó, and Antonio L. Furtado. "Video-based interactive storytelling using real-time video compositing techniques." Multimedia Tools and Applications 77, no. 2 (February 2, 2017): 2333–57. http://dx.doi.org/10.1007/s11042-017-4423-5.

5

김준수. "Chroma Keying in Video Compositing with Matting." Journal of Korea Design Knowledge, no. 34 (June 2015): 265–74. http://dx.doi.org/10.17246/jkdk.2015..34.024.

6

Chang, Shih-Fu, and David G. Messerschmitt. "Compositing motion-compensated video within the network." ACM SIGCOMM Computer Communication Review 22, no. 3 (July 1992): 16–17. http://dx.doi.org/10.1145/142267.142272.

7

Nicolas, Henri, and Franck Denoual. "Semi-automatic modifications of video object trajectories for video compositing applications." Signal Processing 85, no. 10 (October 2005): 1970–83. http://dx.doi.org/10.1016/j.sigpro.2005.02.019.

8

Rokita, Przemyslaw. "Compositing computer graphics and real world video sequences." Computer Networks and ISDN Systems 30, no. 20-21 (November 1998): 2047–57. http://dx.doi.org/10.1016/s0169-7552(98)00206-2.

9

Qin, Xueying, E. Nakamae, and K. Tadamura. "Automatically compositing still images and landscape video sequences." IEEE Computer Graphics and Applications 22, no. 1 (2002): 68–78. http://dx.doi.org/10.1109/38.974520.

10

Chang, Shih-Fu, and D. G. Messerschmitt. "Manipulation and compositing of MC-DCT compressed video." IEEE Journal on Selected Areas in Communications 13, no. 1 (1995): 1–11. http://dx.doi.org/10.1109/49.363151.

11

Jones, Stephen. "Synthetics: A History of the Electronically Generated Image in Australia." Leonardo 36, no. 3 (June 2003): 187–95. http://dx.doi.org/10.1162/002409403321921389.

Abstract:
This paper takes a brief look at the early years of computer-graphic and video-synthesizer–driven image production in Australia. It begins with the first (known) Australian data visualization, in 1957, and proceeds through the compositing of computer graphics and video effects in the music videos of the late 1980s. The author surveys the types of work produced by workers on the computer graphics and video synthesis systems of the early period and draws out some indications of the influences and interactions among artists and engineers and the technical systems they had available, which guided the evolution of the field for artistic production.
12

Ardiyan, Ardiyan, Satria Mahardika, Melki Sadekh Mansuan, and Veronica Wijayanti. "TRACKING DAN CHROMAKEY SEBAGAI ELEMEN TEKNIK DESAIN EFEK VISUAL Studi Kasus: Efek Visual Video Klip." Jurnal Dimensi DKV Seni Rupa dan Desain 5, no. 1 (April 1, 2020): 85. http://dx.doi.org/10.25105/jdd.v5i1.6864.

Abstract:
Tracking and Chromakey as Visual-Effect Design Techniques (Case Study: Video-Clip Visual Effects). Visual effects are commonly used in audiovisual media and animation, especially the chromakey technique, usually described as green screen or blue screen: the use of a particular color element to determine the alpha channel. Within compositing, both tracking and chromakey are classed as visual-effect techniques. In online editing work, however, the final result is often not at its best, which points to problems that can arise anywhere from early production through the final stages of making a video; the quality of the final result depends on the raw footage used. The objective of this research is to identify the relevant technical factors, so that the methods can be applied better and the quality of the final result improved. The case study is a video clip, chosen for its short duration, greater variety, and natural-looking compositing. A literature study of the software, approached from the user's perspective, is the method used to analyze the problem. The result of this research is an understanding of the tracking and chromakey methods in the video-clip production process for achieving the required visual effects. Keywords: visual effects, compositing, video clip, tracking, chromakey.
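The alpha-channel extraction that chromakey performs, and the "over" composite that follows it, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the method analyzed in the paper: it assumes float RGB frames in [0, 1] and derives a soft matte from how strongly green dominates the other channels (the threshold value is an arbitrary choice here).

```python
import numpy as np

def chroma_key_alpha(frame, threshold=0.15):
    """Estimate an alpha matte from a green-screen frame.

    frame: float RGB array in [0, 1], shape (H, W, 3).
    A pixel is treated as background where green clearly dominates
    red and blue; alpha ramps from 0 to 1 around the threshold,
    giving softer edges than a hard binary key.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    green_dominance = g - np.maximum(r, b)   # large and positive on the screen
    return np.clip((threshold - green_dominance) / threshold, 0.0, 1.0)

def composite(fg, bg, alpha):
    """Standard 'over' compositing: out = alpha*fg + (1 - alpha)*bg."""
    return alpha[..., None] * fg + (1.0 - alpha[..., None]) * bg

# Tiny demo: left column is pure green screen, right column a gray subject.
fg = np.array([[[0.0, 1.0, 0.0], [0.5, 0.5, 0.5]],
               [[0.0, 1.0, 0.0], [0.5, 0.5, 0.5]]])
bg = np.zeros_like(fg)
alpha = chroma_key_alpha(fg)
out = composite(fg, bg, alpha)
# Green-screen pixels take the background; subject pixels survive.
```

In production keyers the matte is further refined (spill suppression, edge matting), which is exactly where the raw-footage quality discussed above matters.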
13

Afifi, Mahmoud, and Khaled F. Hussain. "What is the Truth: A Survey of Video Compositing Techniques." International Journal of Image, Graphics and Signal Processing 7, no. 8 (July 8, 2015): 13–27. http://dx.doi.org/10.5815/ijigsp.2015.08.02.

14

Kwon, Soon-Chul, Won-Young Kang, Yeong-Hu Jeong, and Seung-Hyun Lee. "Stereoscopic Video Compositing with a DSLR and Depth Information by Kinect." Journal of Korea Information and Communications Society 38C, no. 10 (October 31, 2013): 920–27. http://dx.doi.org/10.7840/kics.2013.38c.10.920.

15

황규현, and Sanghun Park. "Feature-Based Light and Shadow Estimation for Video Compositing and Editing." Journal of the Korea Computer Graphics Society 18, no. 1 (March 2012): 1–9. http://dx.doi.org/10.15701/kcgs.2012.18.1.1.

16

Zhang, Guofeng, Xueying Qin, Xiaobo An, Wei Chen, and Hujun Bao. "As-consistent-As-possible compositing of virtual objects and video sequences." Computer Animation and Virtual Worlds 17, no. 3-4 (2006): 305–14. http://dx.doi.org/10.1002/cav.134.

17

Lee, Jaehyun, Jahanzeb Hafeez, Kwangjib Kim, Seunghyun Lee, and Soonchul Kwon. "A Novel Real-Time Match-Moving Method with HoloLens." Applied Sciences 9, no. 14 (July 19, 2019): 2889. http://dx.doi.org/10.3390/app9142889.

Abstract:
With the advancement of media and computing technologies, video compositing techniques have improved to a great extent. These techniques have been used not only in the entertainment industry but also in advertising and new media. Match-moving is a cinematic technique in virtual-real image synthesis that allows the insertion of computer graphics (virtual objects) into real-world scenes. To make a realistic virtual-real image synthesis, it is important to obtain internal parameters (such as focal length) and external parameters (position and rotation) from a Red-Green-Blue (RGB) camera. Conventional methods recover these parameters by extracting feature points from recorded video frames to guide the virtual camera. These methods fail when there is occlusion or motion blur in the recorded scene. In this paper, we propose a novel method (system) for pre-visualization and virtual-real image synthesis that overcomes the limitations of conventional methods. The system uses the spatial-understanding principle of Microsoft HoloLens to perform the match-moving of virtual-real video scenes. Experimental results demonstrate that our system is much more accurate and efficient than existing systems for video compositing.
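The internal and external parameters mentioned in the abstract combine in the standard pinhole projection x ~ K[R|t]X, which is what lets a virtual object be rendered from the recovered camera pose. A minimal sketch follows; the focal length and principal point below are made-up illustrative values, not figures from the paper.

```python
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3-D world point to pixel coordinates with a pinhole camera.

    K: 3x3 intrinsics (focal length, principal point) -- internal parameters
    R: 3x3 rotation, t: 3-vector translation          -- external parameters
    """
    X_cam = R @ X_world + t        # world frame -> camera frame
    x = K @ X_cam                  # camera frame -> homogeneous image coords
    return x[:2] / x[2]            # perspective divide -> (u, v) pixels

# Hypothetical camera: 800 px focal length, principal point at (640, 360),
# sitting at the world origin and looking down +Z.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

uv = project_point(np.array([0.0, 0.0, 2.0]), K, R, t)
# A point on the optical axis projects to the principal point.
```

Feature-based match-moving estimates K, R, and t from tracked points; the HoloLens approach in the paper instead supplies the pose from the device's spatial understanding.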
18

Hasche, Eberhard, Dominik Benning, Oliver Karaschewski, Florian Carstens, and Reiner Creutzburg. "Creating high-resolution 360-degree single-line 25K video content for modern conference rooms using film compositing techniques." Electronic Imaging 2020, no. 3 (January 26, 2020): 206–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.3.mobmu-206.

Abstract:
360-degree image and movie content has been gaining popularity over the last few years across the media industry. There are two main reasons for this development: the immersive character of this media form, and the great progress that recording and presentation technology has made in resolution and quality. 360-degree panoramas are particularly widespread in VR and AR technology. Despite their high immersive potential, these forms of presentation have the disadvantage that users are isolated and have no social contact during the presentation. Efforts have therefore been made to project 360-degree content in specially equipped rooms or planetariums to enable a shared experience for the audience. One area of application for 360-degree single-line cylindrical panoramas with moving imagery is modern hotel conference rooms, which create an immersive environment for their clients to encourage creativity. The aim of this work is to generate high-resolution 25K 360-degree videos for projection in a conference room. The creation of the panoramas uses the single-line cylinder technique and is based on compositing technologies used in the film industry. Video sequences are partially composited into a still-image panorama in order to enable a high native resolution in the final film. A main part of this work is the comparison of different film, video, and DSLR cameras, in which different image parameters are examined with respect to image quality. Finally, the advantages, disadvantages, and limitations of the procedure are examined.
19

Berger, M. O., C. Chevrier, and G. Simon. "Compositing Computer and Video Image Sequences: Robust Algorithms for the Reconstruction of the Camera Parameters." Computer Graphics Forum 15, no. 3 (August 1996): 23–32. http://dx.doi.org/10.1111/1467-8659.1530023.

20

Brečević, Geska Helena, and Robert Brečević. "Verfünfungseffekt: Delving into the Enchanted World of Fivefold-portraits and Self-partitioning." Magic, Vol. 5, no. 1 (2020): 66–71. http://dx.doi.org/10.47659/m8.066.ess.

Abstract:
Discovered during a media-archeological investigation into optical illusions, trick photography, and discarded memorabilia, the photo-multigraph technique opened the door to an enchanted world of cloned appearances orbiting in a self-reflective solar system. Shapeshifting into our preferred artistic medium, this turn-of-the-century photographic technique becomes the video-multigraph. It is bizarrely noteworthy that self-isolation would become not only the subject of the piece, but also – due to the unforeseen spread of a recently mutated virus – the prevailing circumstances under which the work was to be completed. In Verfünfungseffekt, we use the medium of video to create a kaleidoscopic portrait-in-motion where the perspective-shifting shards of ego are recorded in a synchronized performance of solipsist intersubjectivity. The video-multigraph allows for the compositing of tiny offsets in time-shifting delays applied to one, or several, of the mirrored selves – shattering the cloned perfection, as well as the conformity, of the multiple presences. This optical illusion necessitates reflection on how media alters our perceptions of time and space; it thereby arouses wonder about our place in existence. Keywords: Photo-multigraph, fivefold-portrait, mirror photography, video-multigraph, crisis of presence
21

Hayashi, Masaki, Kazuo Fukui, Yasumasa Ito, and Nobuyuki Yagi. "New Image/Video Media and It's Application. Image Compositing System Capable of Long-range Camera Movement." Journal of the Institute of Television Engineers of Japan 50, no. 10 (1996): 1549–57. http://dx.doi.org/10.3169/itej1978.50.1549.

22

ZHAO, WENYI. "FLEXIBLE IMAGE BLENDING FOR IMAGE MOSAICING WITH REDUCED ARTIFACTS." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 04 (June 2006): 609–28. http://dx.doi.org/10.1142/s0218001406004806.

Abstract:
Image mosaicing involves geometric alignment among video frames and image compositing or blending. For dynamic mosaicing, image mosaics are constructed dynamically from incoming video frames. Consequently, dynamic mosaicing demands efficient operations for both alignment and blending in order to achieve real-time performance. In this paper, we focus on efficient image blending methods that create good-quality image mosaics from any number of overlapping frames. One of the driving forces for efficient image processing is the huge market of mobile devices, such as cell phones and PDAs, that have image sensors and processors. In particular, we show that it is possible to have efficient sequential implementations of blending methods that simultaneously involve all accumulated video frames. The choices of image blending include traditional averaging, overlapping, and flexible schemes that take into account the temporal order of video frames and user control inputs. In addition, we show that artifacts due to misalignment and image-intensity differences can be significantly reduced by efficiently applying weighting functions when blending video frames. These weighting functions are based on pixel location within a frame, view perspective, and the temporal order of the frame. One interesting application of flexible blending is visualizing moving objects on a mosaiced stationary background. Finally, to correct for significant exposure differences across video frames, we propose a pyramid extension based on intensity matching of aligned images at the coarsest resolution. Our experiments with real image sequences demonstrate the advantages of the proposed methods.
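The location-based weighting and sequential accumulation described above can be sketched generically. This is a feathered-average illustration under the assumption of already-aligned frames of equal size, not the authors' exact formulation; the weight map and demo values are arbitrary.

```python
import numpy as np

def border_weight(h, w):
    """Weight map that falls off toward the frame border, so pixels near
    seams (where misalignment and exposure artifacts concentrate) count
    less in the blend."""
    y = np.minimum(np.arange(h), np.arange(h)[::-1]) + 1
    x = np.minimum(np.arange(w), np.arange(w)[::-1]) + 1
    return np.outer(y, x).astype(float)

def blend_incremental(acc, wacc, frame, weight):
    """Sequential (dynamic-mosaic style) accumulation: keep a running
    weighted sum and a running weight total, so adding each incoming
    frame costs O(pixels) regardless of how many frames came before."""
    acc += weight[..., None] * frame
    wacc += weight
    return acc, wacc

h, w = 4, 4
frames = [np.full((h, w, 3), 0.2), np.full((h, w, 3), 0.8)]  # toy exposures
weight = border_weight(h, w)
acc = np.zeros((h, w, 3))
wacc = np.zeros((h, w))
for f in frames:
    acc, wacc = blend_incremental(acc, wacc, f, weight)
mosaic = acc / wacc[..., None]   # normalized weighted average of all frames
```

Temporal weighting (favoring recent frames) or user control would simply scale `weight` per frame before accumulation, which is the flexibility the paper exploits.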
23

Liu, Yuqing, Tianqiang Huang, and Yanfang Liu. "A novel video forgery detection algorithm for blue screen compositing based on 3-stage foreground analysis and tracking." Multimedia Tools and Applications 77, no. 6 (April 8, 2017): 7405–27. http://dx.doi.org/10.1007/s11042-017-4652-7.

24

Shenai, Mahesh B., R. Shane Tubbs, Barton L. Guthrie, and Aaron A. Cohen-Gadol. "Virtual interactive presence for real-time, long-distance surgical collaboration during complex microsurgical procedures." Journal of Neurosurgery 121, no. 2 (August 2014): 277–84. http://dx.doi.org/10.3171/2014.4.jns131805.

Abstract:
Object The shortage of surgeons compels the development of novel technologies that geographically extend the capabilities of individual surgeons and enhance surgical skills. The authors have developed “Virtual Interactive Presence” (VIP), a platform that allows remote participants to simultaneously view each other's visual field, creating a shared field of view for real-time surgical telecollaboration. Methods The authors demonstrate the capability of VIP to facilitate long-distance telecollaboration during cadaveric dissection. Virtual Interactive Presence consists of local and remote workstations with integrated video capture devices and video displays. Each workstation mutually connects via commercial teleconferencing devices, allowing worldwide point-to-point communication. Software composites the local and remote video feeds, displaying a hybrid perspective to each participant. For demonstration, local and remote VIP stations were situated in Indianapolis, Indiana, and Birmingham, Alabama, respectively. A suboccipital craniotomy and microsurgical dissection of the pineal region were performed in a cadaveric specimen using VIP. Task and system performance were subjectively evaluated, while additional video analysis was used for objective assessment of delay and resolution. Results Participants at both stations were able to visually and verbally interact while identifying anatomical structures, guiding surgical maneuvers, and discussing overall surgical strategy. Video analysis of 3 separate video clips yielded a mean compositing delay of 760 ± 606 msec (when compared with the audio signal). Image resolution was adequate to visualize complex intracranial anatomy and provide interactive guidance. Conclusions Virtual Interactive Presence is a feasible paradigm for real-time, long-distance surgical telecollaboration. Delay, resolution, scaling, and registration are parameters that require further optimization, but are within the realm of current technology. The paradigm potentially enables remotely located experts to mentor less experienced personnel at the surgical site, with applications in surgical training programs, remote proctoring for proficiency, and expert support in rural settings and across different countries.
25

Fathoni, Ahmad Faisal Choiril Anam. "Mengkaji Penggunaan Software Apple Color untuk Color Grading saat Pasca Produksi." Humaniora 2, no. 1 (April 30, 2011): 590. http://dx.doi.org/10.21512/humaniora.v2i1.3072.

Abstract:
In the post-production process, there is one process that is not as well known as video editing, the addition of animation, special-effects enrichment, motion graphics, or audio editing and mixing: an important but rarely recognized process called Color Correction or Color Grading. Various software packages have been made to handle this process, ranging from filters already available for free in any editing software to high-end devices worth billions of dollars dedicated specifically to Color Correction. Apple Color is one of the applications included in the Final Cut Studio package, which also includes Final Cut Pro for video editing, Soundtrack Pro for sound editing and mixing, and Motion for compositing. Apple Color is specially designed for color-correction tasks on footage previously edited in Final Cut Pro. This paper is designed to introduce the software as well as analyze the feasibility of Apple Color as a professional tool in the world of production, especially post-production. Some professional color-correction software will be compared briefly with Apple Color to reach an objective conclusion.
26

Wilhelmina, Grace, and Ardoni Ardoni. "Pembuatan Iklan Promosi Perpustakaan Padang Panjang untuk Anak Melalui Media Motion Graphic." Ilmu Informasi Perpustakaan dan Kearsipan 8, no. 1 (October 29, 2019): 180. http://dx.doi.org/10.24036/107343-0934.

Abstract:
This article discusses the making of a promotional advertisement for the Padang Panjang library, aimed at children, through motion-graphic media. The purpose of the article is to describe the making of the motion graphic that introduces the library to children. There are six stages in making the motion graphic: (1) concept, the stage before making the motion graphic; (2) drawing or design, the stage in which the motion graphic is visualized more specifically in the form of images; (3) material collecting, the gathering of material for the motion graphic; (4) assembly, carried out in several steps, namely designing, animating, compositing, and rendering, using applications; (5) testing of the video that was made; and (6) distribution, the final stage of packaging and distributing the product. Keywords: children; library; motion graphic.
27

Bode, Lisa. "Deepfaking Keanu: YouTube deepfakes, platform visual effects, and the complexity of reception." Convergence: The International Journal of Research into New Media Technologies 27, no. 4 (July 21, 2021): 919–34. http://dx.doi.org/10.1177/13548565211030454.

Abstract:
On July 14, 2019, a 3-minute 36-second video titled “Keanu Reeves Stops A ROBBERY!” was released on the YouTube visual effects (VFX) channel Corridor. The video's click-bait title ensured it was quickly shared by users across platforms such as Facebook, Twitter, and Reddit. Comments on the video suggest that the vast majority of viewers categorised it as fiction. What seemed less universally recognised, though, was that the performer in the clip was not Keanu Reeves himself. It was voice actor and stuntman Reuben Langdon, whose face was digitally replaced with that of Reeves through the use of an AI-generated deepfake, an open-access application, Faceswap, and compositing in Adobe After Effects. This article uses Corridor's deepfake Keanu video (hereafter shortened to CDFK) as a case study that allows the fleshing out of an as yet under-researched area of deepfakes: the role of framing contexts in shaping how viewers evaluate, categorise, make sense of, and discuss these images. This research draws on visual effects scholarship, celebrity studies, cognitive film studies, social media theory, digital rhetoric, and discourse analysis. It is intended to serve as the starting point of a larger study that will eventually map types of online manipulated-media creation on a continuum from the professional to the vernacular, across different platforms, attending to their aesthetic, ethical, cultural, and reception dimensions. The focus on context (platform, creator channel, and comments) also reveals the emergence of an industrial and aesthetic category of visual effects, which I call here “platform VFX,” a key term that provides us with more nuanced frames for illuminating and analysing a range of manipulated-media practices as VFX software becomes ever more accessible and lends itself to more vernacular uses, such as we see with various face-swap apps.
28

Prayoonrat, Chinnawat. "Raising Energy Awareness through 3D Computer Animation." Applied Mechanics and Materials 752-753 (April 2015): 1116–20. http://dx.doi.org/10.4028/www.scientific.net/amm.752-753.1116.

Abstract:
In the current situation, the world's energy sources are dwindling because of inefficient usage and a lack of awareness about energy saving. Consequences of overusing energy include global-warming problems. To encourage and promote efficient use of energy and energy reduction, this research aimed to improve energy awareness through 3D computer animation. The animation was created in video format, with a production process as follows: 1) Pre-production: problem definition, storyboard, turntable animation, and backdrop design; 2) Production: 3D character modeling, shading and texturing, lighting and shadowing, rendering, and compositing; and 3) Post-production: combining the story and inserting voice recording and sound. As a statistical examination, the research was validated by 5 experts, with a result at the good level (mean = 4.11, SD = 0.64). When testing understanding of and perspective on the animation with a sample group of 50 students from Sripatum University Chonburi Campus, the survey conducted before presenting the campaign (pre-test) scored at a moderate level (mean = 3.32, SD = 0.56), while the survey conducted after presenting the campaign (post-test) scored at a very good level (mean = 4.67, SD = 0.48). Therefore, the 3D computer animation could help the sample understand more about the importance of energy reduction.
29

Daly, C. J., J. M. Bulloch, M. Ma, and D. Aidulis. "A comparison of animated versus static images in an instructional multimedia presentation." Advances in Physiology Education 40, no. 2 (June 2016): 201–5. http://dx.doi.org/10.1152/advan.00053.2015.

Abstract:
Sophisticated three-dimensional animation and video compositing software enables the creation of complex multimedia instructional movies. However, if the design of such presentations does not take account of cognitive-load and multimedia theories, then their effectiveness as learning aids will be compromised. We investigated the use of animated images versus still images by creating two versions of a 4-min multimedia presentation on vascular neuroeffector transmission. One version comprised narration and animations, whereas the other comprised narration and still images. Fifty-four students from level 3 pharmacology and physiology undergraduate degrees participated. Half of the students watched the full animation, and the other half watched the stills only. Students watched the presentation once and then answered a short essay question. Answers were coded and marked blind. The “animation” group scored 3.7 (SE: 0.4; out of 11), whereas the “stills” group scored 3.2 (SE: 0.5). The difference was not statistically significant. Further analysis of bonus marks, awarded for appropriate terminology use, detected a significant difference in one class (pharmacology), which scored 0.6 (SE: 0.2) versus 0.1 (SE: 0.1) for the animation versus stills groups, respectively (P = 0.04). However, when combined with the physiology group, the significance disappeared. Feedback from students was extremely positive and identified four main themes of interest. In conclusion, while increasing student satisfaction, we do not find strong evidence in favor of animated images over still images in this particular format. We also discuss the study design and offer suggestions for further investigations of this type.
30

Mukhortikova, Elena A. "Features of the Formation of Visual Series of Music Videos." Observatory of Culture 18, no. 3 (July 22, 2021): 264–71. http://dx.doi.org/10.25281/2072-3156-2021-18-3-264-271.

Abstract:
This article explores the typology of visual means in modern music videos on the television screen and their impact on the audience. For the first time, the distinctive features of building the visual series of music videos are studied on the basis of three types of musical compositions. The structure of the visual series rests on its connection with the plot features of the work and its impact on the viewer, grounded in the audience's sense of belonging to what is happening on the screen and drawing their attention to the aesthetic possibilities of music-video products. The article examines the features of building visual series in combination with the visual means of music videos, and considers the origin, periods of development, and features of the visual series of music videos. The basis for drawing on the foundations of cinema for the visual and expressive means of music videos is its aesthetics, which contributes to constructive transformations in the development of new, modern computer technologies. The use of cinematic aesthetics contributes to the development of the narrative thought of music videos. In this study, the following typology of visual video series is highlighted: plot, illustrative, and parallel; some examples of music videos are given. It is established that each of the considered visual series is based on the features of the musical content of the work and its plot construction. This rests on a specific perception by the viewer, which causes penetration into the image, a sense of belonging to what is happening on the screen, and empathy for the characters. The article concludes that the viewer's perception is influenced, to a certain extent, by the features of the visual series of the videos, which have their own contrapuntal aesthetics, coupled with the general idea of the music video and the differences in the performer's vision of the composition.
31

Hsu, Wenhua. "The effects of audiovisual support on EFL learners’ productive vocabulary." ReCALL 26, no. 1 (November 22, 2013): 62–79. http://dx.doi.org/10.1017/s0958344013000220.

Abstract:
This study concerned multiple exposures to English before writing and aimed to explore the possibility of an increase in free active vocabulary, with a focus on latent productive vocabulary beyond the first 2,000 most frequent words. The researcher incorporated online video into her college freshman composition class and examined its effects on non-basic vocabulary use. To activate previously known vocabulary, a variety of audiovisual modes before writing were applied to four groups alternately: (1) video with captions, (2) video without captions, (3) silent video with captions, and (4) video with the screen off (soundtrack only). The results show that the writing involving non-captioned videos contained a higher percentage of advanced vocabulary than that under the other three conditions (specifically, 12.45% versus 11.33% with captioned videos, 5.2% with silent but captioned videos, and 8.63% with audio only). Drawing upon dual-coding theory, this study also points out some pedagogical implications for a video-based writing course.
APA, Harvard, Vancouver, ISO, and other styles
32

Golovanova, Elena Aleksandrovna. "Creation of musical content based on the plotline of music videos." Человек и культура, no. 6 (June 2020): 175–83. http://dx.doi.org/10.25136/2409-8744.2020.6.30990.

Full text
Abstract:
This article examines the visual means of modern music videos and their impact on the perception of the audience. The author is the first to study the peculiarities of creating visual imagery alongside the artistic means of music videos and their impact on the viewership. The conception, development stages, and specifics of the visual imagery of music videos on TV – from the "Little Blue Light" and musical films to modern music videos – are considered. Each of them is based on the balance between the musical content of the composition, grounded in the compositional differences of the plot, and the viewers' perception, which arouses a feeling of immersion in the image, connectedness to the events on screen, and affection for the heroes. The scientific novelty consists in the attempt to correlate the verbal content of the music video with its visual imagery. The conclusion is made that audience perception depends in a certain way on the peculiarities of the visual imagery of music videos, which feature contrapuntal aesthetics combined with the main idea of the music video and differences within the framework of the performer's representation of the composition. The common aspect in the creation of all music videos is the depiction of motion, which again underlines the inextricable connection between music video and cinematography, based on the creation of a certain image by the actor.
APA, Harvard, Vancouver, ISO, and other styles
33

Shafii, Kasim, Mustapha Aminu Bagiwa, A. A. Obiniyi, N. Sulaiman, A. M. Usman, C. M. Fatima, and S. Fatima. "BLUE SCREEN VIDEO FORGERY DETECTION AND LOCALIZATION USING AN ENHANCED 3-STAGE FOREGROUND ALGORITHM." FUDMA JOURNAL OF SCIENCES 5, no. 2 (July 1, 2021): 133–44. http://dx.doi.org/10.33003/fjs-2021-0501-526.

Full text
Abstract:
The availability of easy-to-use video editing software has made it easy for cyber criminals to combine different videos from different sources using blue screen composition technology. This makes the authenticity of such digital videos questionable, and it needs to be verified, especially in a court of law. Blue screen composition is one of the ways to carry out video forgery using simple, affordable video editing software. Detecting this type of video forgery aims at revealing and observing the facts about a video so as to conclude whether its contents have undergone any unethical manipulation. In this work, we propose an enhanced 3-stage foreground algorithm to detect blue screen manipulation in digital video. The proposed detection technique contains three phases: extraction, detection, and tracking. In the extraction phase, a Gaussian Mixture Model (GMM) is used to extract foreground elements from a target video. In the detection phase, an entropy function is extracted and calculated from the target video as a descriptive image feature. The tracking phase uses the Minimum Output Sum of Squared Error (MOSSE) object tracking algorithm to quickly track forged blocks of small sizes in a digital video. The results of the experiments demonstrate that the proposed technique can adequately detect blue screen video forgery when the forged region is small, with a true positive detection rate of 98.02% and a false positive detection rate of 1.99%. The results of this research can be used to
APA, Harvard, Vancouver, ISO, and other styles
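The detection phase described in the abstract above rests on Shannon entropy as a descriptive image feature. As an illustration only (the function name and sample patches below are hypothetical, not taken from the paper), the entropy of a grayscale intensity histogram can be computed like this:

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (in bits) of a grayscale intensity histogram.

    `pixels` is a flat iterable of intensity values (e.g. 0-255).
    Higher entropy indicates a richer intensity distribution; a sharp
    drop across frames can flag a composited (e.g. blue-screen) region.
    """
    counts = Counter(pixels)
    total = sum(counts.values())
    entropy = 0.0
    for c in counts.values():
        p = c / total
        entropy -= p * math.log2(p)
    return entropy

# A uniform patch carries no information; a varied patch carries more.
flat_patch = [128] * 64          # one intensity -> entropy 0 bits
noisy_patch = list(range(64))    # 64 distinct intensities -> 6 bits
```

For instance, `image_entropy(flat_patch)` is 0.0, while `image_entropy(noisy_patch)` is log2(64) = 6.0 bits.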
34

Liu, Chang, Han Yu, Yi Dong, Zhiqi Shen, Yingxue Yu, Ian Dixon, Zhanning Gao, et al. "Generating Engaging Promotional Videos for E-commerce Platforms (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13865–66. http://dx.doi.org/10.1609/aaai.v34i10.7205.

Full text
Abstract:
There is an emerging trend for sellers to use videos to promote their products on e-commerce platforms such as Taobao.com. Current video production workflow includes the production of visual storyline by human directors. We propose a system to automatically generate visual storyline based on the input set of visual materials (e.g. video clips or still images) and then produce a promotional video. In particular, we propose an algorithm called Shot Composition, Selection and Plotting (ShotCSP), which generates visual storylines leveraging film-making principles to improve viewing experience and perceived persuasiveness.
APA, Harvard, Vancouver, ISO, and other styles
35

Dong, Jing, and Yang Xia. "Real-Time Video Stabilization Based on Smoothing Feature Trajectories." Applied Mechanics and Materials 519-520 (February 2014): 640–43. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.640.

Full text
Abstract:
In this paper, a real-time video stabilization algorithm based on smoothing feature trajectories is proposed. For each input frame, our approach generates multiple feature trajectories by performing inter-frame template matching and optical flow. A Kalman filter is then applied to smooth these feature trajectories. Finally, at the image composition stage, the motion consistency of the feature trajectories is considered to achieve a visually plausible stabilized video. The proposed method offers real-time video stabilization and removes the delays caused by caching incoming images. Experiments show that our approach can stabilize videos with various complicated scenes in real time.
APA, Harvard, Vancouver, ISO, and other styles
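The core of the approach above is Kalman smoothing of per-feature trajectories. A minimal sketch of a constant-position Kalman filter applied to one 1-D trajectory (the function name and noise parameters are illustrative assumptions, not the authors' implementation):

```python
def kalman_smooth_1d(trajectory, q=1e-3, r=0.25):
    """Smooth a jittery 1-D feature trajectory with a scalar Kalman filter.

    q: process-noise variance (how much true motion we expect per frame)
    r: measurement-noise variance (how jittery the tracked positions are)
    """
    x, p = trajectory[0], 1.0      # state estimate and its variance
    smoothed = [x]
    for z in trajectory[1:]:
        p += q                     # predict: position assumed near-constant
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # update toward the new measurement
        p *= (1.0 - k)
        smoothed.append(x)
    return smoothed

# A rising trajectory with alternating +/-2 jitter on top.
raw = [i + 2 * ((-1) ** i) for i in range(20)]
smooth = kalman_smooth_1d(raw)
```

The smoothed trajectory has a much smaller total variation (sum of frame-to-frame jumps) than the raw one, which is exactly what yields a stable warp.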
36

Woolard*, Derek D., Judy Fugiel, F. Paul Silverman, and Peter D. Petracek. "Use of Time-lapse Video to Demonstrate Plant Growth Regulator (PGR) Responses." HortScience 39, no. 4 (July 2004): 875A—875. http://dx.doi.org/10.21273/hortsci.39.4.875a.

Full text
Abstract:
Tables, graphs, and photographs can effectively convey the detailed results of a PGR experiment. However, we have observed that demonstrating PGR treatment effects by time-lapse video creates a strong impact on both scientific and non-technical audiences. Time-lapse video also provides a method for obtaining a continuous visual record that can be used to establish the precise chronology of a slow process. Recent advances in notebook computers, inexpensive digital cameras (e.g. 3Com HomeConnect™), and time-lapse software (e.g. Picture WorkLive™) allow scientists and teachers to inexpensively prepare time-lapse videos. Important considerations for the production of quality time-lapse videos include: 1. treatment effects should be substantial, consistent, and visible; 2. digital camera images should be clear; 3. lighting should be constant and provide adequate brightness and proper color; 4. camera movement, such as that caused by vibration, should be minimal; 5. camera placement should simplify composition. Time-lapse videos of PGR treatment effects will be shown, and methods of production will be discussed.
APA, Harvard, Vancouver, ISO, and other styles
37

On, Kyoung-Woon, Eun-Sol Kim, Yu-Jung Heo, and Byoung-Tak Zhang. "Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5315–22. http://dx.doi.org/10.1609/aaai.v34i04.5978.

Full text
Abstract:
Conventional sequential learning methods such as Recurrent Neural Networks (RNNs) focus on interactions between consecutive inputs, i.e. first-order Markovian dependency. However, most of sequential data, as seen with videos, have complex dependency structures that imply variable-length semantic flows and their compositions, and those are hard to be captured by conventional methods. Here, we propose Cut-Based Graph Learning Networks (CB-GLNs) for learning video data by discovering these complex structures of the video. The CB-GLNs represent video data as a graph, with nodes and edges corresponding to frames of the video and their dependencies respectively. The CB-GLNs find compositional dependencies of the data in multilevel graph forms via a parameterized kernel with graph-cut and a message passing framework. We evaluate the proposed method on the two different tasks for video understanding: Video theme classification (Youtube-8M dataset (Abu-El-Haija et al. 2016)) and Video Question and Answering (TVQA dataset(Lei et al. 2018)). The experimental results show that our model efficiently learns the semantic compositional structure of video data. Furthermore, our model achieves the highest performance in comparison to other baseline methods.
APA, Harvard, Vancouver, ISO, and other styles
38

Cousins, Stephen, Mark J. Kennard, and Brendan C. Ebner. "Corrigendum to: Depth-related composition and structuring of tropical riverine fish assemblages revealed by baited video." Marine and Freshwater Research 68, no. 10 (2017): 1976. http://dx.doi.org/10.1071/mf16278_co.

Full text
Abstract:
The aim of the present study was to determine whether boat-based deployment of remote underwater video cameras is effective for surveying fish assemblages in the deepest reaches of two large tropical rivers in north-eastern Australia. In addition, we compared fish assemblages recorded on baited versus unbaited cameras, and evaluated the sampling effort (duration of recording) required to estimate fish assemblages using remote underwater videos. We found that fish assemblages differed according to the depth, with statistically significant differences largely attributable to the prevalence of small-bodied species (<10-cm total length, TL), such as Ambassis sp., Melanotaenia sp. and Pseudomugil signifer recorded in shallow (0.4–2.0m) and intermediate (2.1–4.9m) depths, and larger-bodied fish species (>10cm TL), such as Lutjanus argentimaculatus, Mesopristes argenteus and Caranx sexfasciatus, in deep water (>5.0m). Estimates of fish assemblage attributes generally stabilised after 60min recording duration, suggesting that interrogation of video footage beyond this duration may not be cost-effective. We conclude that depth is an important consideration when surveying large and deep river fish assemblages and that where water clarity is favourable, underwater video provides one of the means by which an assemblage can be investigated across the entire depth profile.
APA, Harvard, Vancouver, ISO, and other styles
39

Cousins, Stephen, Mark J. Kennard, and Brendan C. Ebner. "Depth-related composition and structuring of tropical riverine fish assemblages revealed by baited video." Marine and Freshwater Research 68, no. 10 (2017): 1965. http://dx.doi.org/10.1071/mf16278.

Full text
Abstract:
The aim of the present study was to determine whether boat-based deployment of remote underwater video cameras is effective for surveying fish assemblages in the deepest reaches of two large tropical rivers in north-eastern Australia. In addition, we compared fish assemblages recorded on baited versus unbaited cameras, and evaluated the sampling effort (duration of recording) required to estimate fish assemblages using remote underwater videos. We found that fish assemblages differed according to the depth, with statistically significant differences largely attributable to the prevalence of small-bodied species (<10-cm total length, TL), such as Ambassis sp., Melanotaenia sp. and Pseudomugil signifer recorded in shallow (0.4–2.0m) and intermediate (2.1–4.9m) depths, and larger-bodied fish species (>10cm TL), such as Lutjanus argentimaculatus, Mesopristes argenteus and Caranx sexfasciatus, in deep water (>5.0m). Estimates of fish assemblage attributes generally stabilised after 60min recording duration, suggesting that interrogation of video footage beyond this duration may not be cost-effective. We conclude that depth is an important consideration when surveying large and deep river fish assemblages and that where water clarity is favourable, underwater video provides one of the means by which an assemblage can be investigated across the entire depth profile.
APA, Harvard, Vancouver, ISO, and other styles
40

Torkkeli, Kaisa, Johanna Mäkelä, and Mari Niva. "Elements of practice in the analysis of auto-ethnographical cooking videos." Journal of Consumer Culture 20, no. 4 (March 14, 2018): 543–62. http://dx.doi.org/10.1177/1469540518764248.

Full text
Abstract:
This article analyses cooking videos recorded at home by means of the practice-theoretical approach. It employs two conceptualisations of the elements of practice that have stood out in recent applications of practice theories in sociological consumption and food studies. The first conceptualisation comprises understandings, procedures and engagements and the second materials, competences and meanings. To study cooking as a situationally performed mundane practice, auto-ethnographical videos of cooking were filmed using the first author’s family. To analyse the practice of cooking as a composition of doings and sayings, the videos were coded with a video analysis program, Interact, into visual charts, and the discussions related to cooking performances were transcribed. The analysis suggests that the cooking practice involves interplay among the elements of the two conceptualisations: procedures join materials with competences, engagements link competences with meanings and understandings connect meanings with materials. This is visualised as a triangle in which understandings, procedures and engagements represent the sides of the triangle between the apexes of materials, competences and meanings. By combining an auto-ethnographical perspective with a video method and by analysing the practice of cooking as a situational and embodied performance, the study contributes to the current understanding of the elements of practice and introduces a novel empirical application of practice theory.
APA, Harvard, Vancouver, ISO, and other styles
41

Ikeda, Yusuke, Sakae Okubo, Jiro Katto, and Kenta Kimura. "Minimizing Video Switching/Composition Delay by Controlling Video Sync Phases." Journal of the Institute of Image Information and Television Engineers 60, no. 11 (2006): 1789–95. http://dx.doi.org/10.3169/itej.60.1789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Pérez-Rufí, José-Patricio, and Águeda-María Valverde-Maestre. "The spatial-temporal fragmentation of live television video clips: analysis of the television production of the Eurovision Song Contest." Communication & Society 33, no. 2 (April 20, 2020): 17–31. http://dx.doi.org/10.15581/003.33.2.17-31.

Full text
Abstract:
Multicamera television production’s similarity to the video clip has become evident in the production of the EBU’s Eurovision Song Contest, where various musical numbers representing public television stations from the organizing countries compete against each other in terms of spectacularity and originality. The main objective of this research is to analyze several acts to identify such appropriation. We will apply a textual analysis to the audiovisual discourse of a sample chosen through subjective sampling. We divide our analysis into four sections: preliminary phase, formal audiovisual analysis, staging and performance. The investigation leads to the conclusion that the characteristic fragmentation of space and time of video clips can also be identified in live music videos. This fragmentation is seen in the break in spatial continuity, resulting from recreating sets on stage and the abstraction of the stage thanks to screens and an avant-garde composition shot. We also consider that the production imitates the time fragmentation and fast shot speed of the video clip.
APA, Harvard, Vancouver, ISO, and other styles
43

Nikolova, Mariya. "EDUCATIONAL VIDEO FOOTAGE THROUGH THE LENS OF THE ART ANIMATED CLASSROOM." Education and Technologies Journal 11, no. 1 (August 1, 2020): 168–74. http://dx.doi.org/10.26883/2010.201.2253.

Full text
Abstract:
The article presents a unique, unusual and original experience of educational practice based on the innovative axioms of art animation in education, designed to achieve educational goals. The analysis deals with a variety of concepts and trends in the application of video clips and educational video footage in the classroom to achieve higher-order goals – the levels of analysis, synthesis and evaluation in Benjamin Bloom's cognitive taxonomy. The text considers an attractive training practice in which trainees create video productions, presented as an original composition of creative and conceptual design; learning content – conceptual selection; and technical implementation – role entry, frame dynamics, synchronization, processing and editing. The analytical exposition sheds light on the learning effects for the students involved in the art-animated learning process. Methodological recommendations are made for the application of specific formats of art animation in education, in particular the production of video footage to achieve educational goals.
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Qiang, Meng Wang, Zhongyang Huang, Yang Hua, Zheng Song, and Shuicheng Yan. "VideoPuzzle: Descriptive One-Shot Video Composition." IEEE Transactions on Multimedia 15, no. 3 (April 2013): 521–34. http://dx.doi.org/10.1109/tmm.2012.2236306.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ahanger, G., and T. D. C. Little. "Automatic composition techniques for video production." IEEE Transactions on Knowledge and Data Engineering 10, no. 6 (1998): 967–87. http://dx.doi.org/10.1109/69.738360.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Tao Chen, Jun-Yan Zhu, A. Shamir, and Shi-Min Hu. "Motion-Aware Gradient Domain Video Composition." IEEE Transactions on Image Processing 22, no. 7 (July 2013): 2532–44. http://dx.doi.org/10.1109/tip.2013.2251642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Jinjun, Changsheng Xu, Engsiong Chng, Hanqing Lu, and Qi Tian. "Automatic composition of broadcast sports video." Multimedia Systems 14, no. 4 (March 11, 2008): 179–93. http://dx.doi.org/10.1007/s00530-008-0112-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wu, Jian Zhai, and De Wen Hu. "Multiple Instance Learning of Visual Event Models." Applied Mechanics and Materials 321-324 (June 2013): 1030–34. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.1030.

Full text
Abstract:
In this paper we propose a powerful visual event pattern learning method to address the issue of high-level video understanding. We first model the deformable temporal structure of an action event in videos as a temporal composition of several primitive motions. Moreover, we describe each action class by multiple temporal models to deal with the significant intra-class variability. We implement a multiple instance learning method to train the models in a weakly supervised setting. We have conducted experiments on three major benchmarks. The results are comparable to the state of the art.
APA, Harvard, Vancouver, ISO, and other styles
49

Stohr, Denny, Iva Toteva, Stefan Wilk, Wolfgang Effelsberg, and Ralf Steinmetz. "User-Generated Video Composition Based on Device Context Measurements." International Journal of Semantic Computing 11, no. 01 (March 2017): 65–84. http://dx.doi.org/10.1142/s1793351x17400049.

Full text
Abstract:
Instant sharing of user-generated video recordings has become a widely used service on platforms such as YouNow, Facebook.Live or uStream. Yet, providing such services with a high QoE for viewers is still challenging, given that mobile upload speed and capacities are limited, and the recording quality on mobile devices greatly depends on the users’ capabilities. One proposed solution to address these issues is video composition. It allows to switch between multiple recorded video streams, selecting the best source at any given time, for composing a live video with a better overall quality for the viewers. Previous approaches have required an in-depth visual analysis of the video streams, which usually limited the scalability of these systems. In contrast, our work allows the stream selection to be realized solely on context information, based on video- and service-quality aspects from sensor and network measurements. The implemented monitoring service for a context-aware upload of video streams is evaluated in different network conditions, with diverse user behavior, including camera shaking and user mobility. We have evaluated the system’s performance based on two studies. First, in a user study, we show that a higher efficiency for the video upload as well as a better QoE for viewers can be achieved when using our proposed system. Second, by examining the overall delay for the switching between streams based on sensor readings, we show that a composition view change can efficiently be achieved in approximately four seconds.
APA, Harvard, Vancouver, ISO, and other styles
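The abstract above stresses that stream selection relies solely on context information (sensor and network measurements) rather than visual analysis. A toy sketch of such a scoring rule (the field names, weights, and function name are invented for illustration and are not from the paper):

```python
def select_stream(streams):
    """Pick the best upload source from device-context measurements only.

    Each stream is a dict with:
      'bitrate_kbps' - measured upload bandwidth of the contributing device
      'shake'        - accelerometer-derived instability (0 = perfectly steady)
    Higher bandwidth raises the score; camera shake penalizes it.
    """
    def score(s):
        return s['bitrate_kbps'] / 1000.0 - 2.0 * s['shake']
    return max(streams, key=score)

candidates = [
    {'id': 'a', 'bitrate_kbps': 800,  'shake': 0.1},   # steady but slow
    {'id': 'b', 'bitrate_kbps': 2000, 'shake': 0.9},   # fast but shaky
    {'id': 'c', 'bitrate_kbps': 1500, 'shake': 0.2},   # good balance
]
```

With these numbers, stream `c` wins: its balance of bandwidth and stability outscores both the steady-but-slow and the fast-but-shaky sources.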
50

Weiss, R., A. Duda, and D. K. Gifford. "Composition and search with a video algebra." IEEE Multimedia 2, no. 1 (1995): 12–25. http://dx.doi.org/10.1109/93.368596.

Full text
APA, Harvard, Vancouver, ISO, and other styles