Academic literature on the topic 'Keyframes'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Keyframes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Keyframes"

1

Younessian, Ehsan, and Deepu Rajan. "Content-Based Keyframe Clustering Using Near Duplicate Keyframe Identification." International Journal of Multimedia Data Engineering and Management 2, no. 1 (2011): 1–21. http://dx.doi.org/10.4018/jmdem.2011010101.

Full text
Abstract:
In this paper, the authors propose an effective content-based clustering method for keyframes of news video stories using the Near-Duplicate Keyframe (NDK) identification concept. Initially, the authors investigate the near-duplicate relationship, as a content-based visual similarity across keyframes, through the presented NDK identification algorithm, assigning a near-duplicate score to each pair of keyframes within the story. Using an efficient keypoint matching technique followed by matching pattern analysis, this NDK identification algorithm can handle extreme zooming and significant object motion. In the second step, a weighted adjacency matrix is determined for each story based on the assigned near-duplicate scores. The authors then use a spectral clustering scheme to remove outlier keyframes and partition the remainder. Two sets of experiments are carried out to evaluate the NDK identification method and assess the performance of the proposed keyframe clustering method.
APA, Harvard, Vancouver, ISO, and other styles
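The pipeline the abstract above describes — pairwise near-duplicate scores assembled into a weighted adjacency matrix, outlier removal, then spectral clustering — can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation; the function name, the outlier heuristic, and the use of scikit-learn's `SpectralClustering` are all assumptions:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_keyframes(ndk_score, n_clusters=3, outlier_threshold=0.1):
    """Cluster keyframes from a symmetric pairwise near-duplicate score matrix.

    ndk_score: (n, n) array of scores in [0, 1], higher = more near-duplicate.
    Keyframes whose total affinity to the rest of the story is small are
    treated as outliers and excluded before clustering (heuristic assumption).
    Returns a dict mapping kept keyframe index -> cluster label.
    """
    ndk_score = np.asarray(ndk_score, dtype=float)
    n = len(ndk_score)
    # Total affinity of each keyframe to all others (self-score excluded).
    affinity_mass = ndk_score.sum(axis=1) - np.diag(ndk_score)
    keep = affinity_mass > outlier_threshold * (n - 1)
    kept = np.flatnonzero(keep)
    # Weighted adjacency restricted to the non-outlier keyframes.
    w = ndk_score[np.ix_(kept, kept)]
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", random_state=0
    ).fit_predict(w)
    return dict(zip(kept.tolist(), labels.tolist()))
```

In the paper the scores come from the keypoint-matching and matching-pattern-analysis stage, which is omitted here.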
2

Jiang, Hongda, Marc Christie, Xi Wang, Libin Liu, Bin Wang, and Baoquan Chen. "Camera keyframing with style and control." ACM Transactions on Graphics 40, no. 6 (2021): 1–13. http://dx.doi.org/10.1145/3478513.3480533.

Full text
Abstract:
We present a novel technique that enables 3D artists to synthesize camera motions in virtual environments following a camera style, while enforcing user-designed camera keyframes as constraints along the sequence. To solve this constrained motion in-betweening problem, we design and train a camera motion generator from a collection of temporal cinematic features (camera and actor motions) using a conditioning on target keyframes. We further condition the generator with a style code to control how to perform the interpolation between the keyframes. Style codes are generated by training a second network that encodes different camera behaviors in a compact latent space, the camera style space. Camera behaviors are defined as temporal correlations between actor features and camera motions and can be extracted from real or synthetic film clips. We further extend the system by incorporating a fine control of camera speed and direction via a hidden state mapping technique. We evaluate our method on two aspects: i) the capacity to synthesize style-aware camera trajectories with user-defined keyframes; and ii) the capacity to ensure that in-between motions still comply with the reference camera style while satisfying the keyframe constraints. As a result, our system is the first style-aware keyframe in-betweening technique for camera control that balances style-driven automation with precise and interactive control of keyframes.
APA, Harvard, Vancouver, ISO, and other styles
3

Sharma, R. Rajesh. "Two-Stage Frame Extraction in Video Analysis for Accurate Prediction of Object Tracking by Improved Deep Learning." Journal of Innovative Image Processing 3, no. 4 (2021): 322–35. http://dx.doi.org/10.36548/jiip.2021.4.004.

Full text
Abstract:
Recently, information extraction from graphics and video summarization using keyframes have benefited from visual content-based methods. Analysis of keyframes in a movie may be done by extracting visual elements from the video clips; these visual components are used to accurately anticipate the path of an object in real time. Frame variations in low-level properties such as color and structure form the basis of this rapid and reliable approach. This research work contains three phases: preprocessing, two-stage extraction, and a video prediction module. The framework estimates object tracks using a probabilistic deterministic process. Keyframes for the whole video sequence are extracted using a proposed two-stage CNN feature extraction approach. An alternate sequence is first constructed by comparing the color characteristics of neighboring frames in the original series to those of the generated one. When this alternate arrangement is compared to the final keyframe sequence, substantial structural changes are found between consecutive frames. Three keyframe extraction techniques based on time behavior are employed in this study. A keyframe extraction optimization phase using the "Adam" optimizer, dependent on the number of final keyframes, is then introduced. The proposed technique outperforms prior methods in computational cost and resilience across a wide range of video formats, resolutions, and other parameters. Finally, the research compares SSIM, MAE, and RMSE performance metrics against the traditional approach.
APA, Harvard, Vancouver, ISO, and other styles
4

Wei, Dong, Xiaoning Sun, Huaijiang Sun, et al. "Enhanced Fine-Grained Motion Diffusion for Text-Driven Human Motion Synthesis." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 5876–84. http://dx.doi.org/10.1609/aaai.v38i6.28401.

Full text
Abstract:
The emergence of text-driven motion synthesis techniques provides animators with great potential to create efficiently. However, in most cases, textual expressions contain only general and qualitative motion descriptions, lacking fine depiction and sufficient intensity, which leads to synthesized motions that are either (a) semantically compliant but uncontrollable over specific pose details, or (b) deviating from the provided descriptions, presenting animators with undesired cases. In this paper, we propose DiffKFC, a conditional diffusion model for text-driven motion synthesis with KeyFrames Collaborated, enabling realistic generation with collaborative and efficient dual-level control: coarse guidance at the semantic level, with only a few keyframes for direct and fine-grained depiction down to the body posture level. Unlike existing inference-editing diffusion models that incorporate conditions without training, our conditional diffusion model is explicitly trained and can fully exploit correlations among texts, keyframes and the diffused target frames. To preserve the control capability of discrete and sparse keyframes, we customize dilated mask attention modules in which only partially valid tokens participate in local-to-global attention, as indicated by the dilated keyframe mask. Additionally, we develop a simple yet effective smoothness prior, which steers the generated frames towards seamless keyframe transitions at inference. Extensive experiments show that our model not only achieves state-of-the-art performance in terms of semantic fidelity but, more importantly, is able to satisfy animator requirements through fine-grained guidance without tedious labor.
APA, Harvard, Vancouver, ISO, and other styles
5

Duan, Ran, Yurong Feng, and Chih-Yung Wen. "Deep Pose Graph-Matching-Based Loop Closure Detection for Semantic Visual SLAM." Sustainability 14, no. 19 (2022): 11864. http://dx.doi.org/10.3390/su141911864.

Full text
Abstract:
This work addresses the loop closure detection issue by matching the local pose graphs for semantic visual SLAM. We propose a deep feature matching-based keyframe retrieval approach. The proposed method treats the local navigational maps as images. Thus, the keyframes may be considered keypoints of the map image. The descriptors of the keyframes are extracted using a convolutional neural network. As a result, we convert the loop closure detection problem to a feature matching problem so that we can solve the keyframe retrieval and pose graph matching concurrently. This process in our work is carried out by modified deep feature matching (DFM). The experimental results on the KITTI and Oxford RobotCar benchmarks show the feasibility and capabilities of accurate loop closure detection and the potential to extend to multiagent applications.
APA, Harvard, Vancouver, ISO, and other styles
6

Kim, Nam Hee, Hung Yu Ling, Zhaoming Xie, and Michiel van de Panne. "Flexible Motion Optimization with Modulated Assistive Forces." Proceedings of the ACM on Computer Graphics and Interactive Techniques 4, no. 3 (2021): 1–25. http://dx.doi.org/10.1145/3480144.

Full text
Abstract:
Animated motions should be simple to direct while also being plausible. We present a flexible keyframe-based character animation system that generates plausible simulated motions for both physically-feasible and physically-infeasible motion specifications. We introduce a novel control parameterization, optimizing over internal actions, external assistive-force modulation, and keyframe timing. Our method allows for emergent behaviors between keyframes, does not require advance knowledge of contacts or exact motion timing, supports the creation of physically impossible motions, and allows for near-interactive motion creation. The use of a shooting method allows for the use of any black-box simulator. We present results for a variety of 2D and 3D characters and motions, using sparse and dense keyframes. We compare our control parameterization scheme against other possible approaches for incorporating external assistive forces.
APA, Harvard, Vancouver, ISO, and other styles
7

Fang, Q. S., Z. Peng, and P. Yan. "Fire Detection and Localization Method Based on Deep Learning in Video Surveillance." Journal of Physics: Conference Series 2278, no. 1 (2022): 012024. http://dx.doi.org/10.1088/1742-6596/2278/1/012024.

Full text
Abstract:
Fire detection and localization in video surveillance has become a particularly important part of disaster rescue. To address the slow detection speed, low detection accuracy, and low localization precision of fire detection and localization in video surveillance, we proposed a fire detection and localization method based on deep learning. First, we improved the SuperPoint method to extract video keyframes from surveillance footage. Next, we employed a Convolutional Neural Network (CNN) model to detect fire in the extracted keyframes. Finally, we localized the fire via superpixels and the CNN on the extracted keyframes in which a fire broke out. Experimental results on an open fire dataset revealed that the recall of keyframe extraction reached 0.83, the precision of fire detection reached 0.96, and the F1-score of fire localization reached 0.90. Our method achieves rapid and accurate detection and precise localization of fire in video surveillance.
APA, Harvard, Vancouver, ISO, and other styles
8

D., Rajeshwari, and Victoria Priscilla C. "An Enhanced Spatio-Temporal Human Detected Keyframe Extraction." International journal of electrical and computer engineering systems 14, no. 9 (2023): 985–92. http://dx.doi.org/10.32985/ijeces.14.9.3.

Full text
Abstract:
Given the immense availability of Closed-Circuit Television surveillance, crime investigation is quite difficult due to the huge storage volumes and complex backgrounds involved. Content-based video retrieval is an excellent method for identifying the best keyframes from these surveillance videos. As crime surveillance contains numerous action scenes, existing keyframe extraction is not exemplary. Here, a Spatio-temporal Histogram of Oriented Gradients - Support Vector Machine (HOG-SVM) feature method combined with Background Subtraction is applied to the recovered crime video to highlight the human presence in surveillance frames. Additionally, a Visual Geometry Group (VGG) network is trained on these frames to produce the classification report of human-detected frames. The detected frames are processed to extract keyframes by computing an inter-frame difference against a threshold value, favoring the requisite human-detected keyframes. The experimental results of HOG-SVM illustrate a compression ratio of 98.54%, compared with the proposed work's compression ratio of 98.71%, which supports the criminal investigation.
APA, Harvard, Vancouver, ISO, and other styles
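The keyframe selection step described above — an inter-frame difference compared against a threshold — can be sketched in a few lines. This is a hypothetical minimal version: the paper's HOG-SVM human detection and VGG classification stages are omitted, and the adaptive threshold (mean plus a multiple of the standard deviation) is an assumption, not the authors' exact rule:

```python
import numpy as np

def interframe_keyframes(frames, factor=1.0):
    """Select keyframes where the mean absolute inter-frame difference
    exceeds an adaptive threshold. frames: list of 2-D grayscale arrays."""
    diffs = [
        np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        for i in range(1, len(frames))
    ]
    thr = np.mean(diffs) + factor * np.std(diffs)
    # Frame i is a keyframe when the change from frame i-1 is unusually large.
    return [i for i, d in enumerate(diffs, start=1) if d > thr]
```

On a surveillance clip this keeps only frames where the scene content changes sharply, which is what makes the subsequent human-detection filtering tractable.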
9

Man, Guangyi, and Xiaoyan Sun. "Interested Keyframe Extraction of Commodity Video Based on Adaptive Clustering Annotation." Applied Sciences 12, no. 3 (2022): 1502. http://dx.doi.org/10.3390/app12031502.

Full text
Abstract:
Keyframe recognition in video is very important for extracting pivotal information from videos. Numerous studies have successfully identified frames with moving objects as keyframes. However, the definition of "keyframe" can be quite different for different requirements. In the field of e-commerce, the keyframes of product videos should be those a customer is interested in, helping the customer make correct and quick decisions, which differs greatly from the existing studies. Accordingly, here, we first define the key interested frame of commodity video from the viewpoint of user demand. As there are no annotations on the interested frames, we develop a fast and adaptive clustering strategy to cluster the preprocessed videos into several clusters according to the definition and make annotations. These annotated samples are utilized to train a deep neural network to obtain the features of key interested frames and achieve the goal of recognition. The performance of the proposed algorithm in effectively recognizing the key interested frames is demonstrated by applying it to commodity videos fetched from an e-commerce platform.
APA, Harvard, Vancouver, ISO, and other styles
10

Saqib, Shazia, and Syed Kazmi. "Video Summarization for Sign Languages Using the Median of Entropy of Mean Frames Method." Entropy 20, no. 10 (2018): 748. http://dx.doi.org/10.3390/e20100748.

Full text
Abstract:
Multimedia information requires large repositories of audio-video data. Retrieval and delivery of video content is a very time-consuming process and a great challenge for researchers. An efficient approach for faster browsing of large video collections and more efficient content indexing and access is video summarization. Compression of data through extraction of keyframes is a solution to these challenges. A keyframe is a frame representative of the salient features of the video, and the output frames must represent the original video in temporal order. The proposed research presents a method of keyframe extraction using the mean of consecutive k frames of video data. A sliding window of size k/2 is employed to select the frame that matches the median entropy value of the window. This is called the Median of Entropy of Mean Frames (MME) method: mean-based keyframe selection using the median entropy of the sliding window. The method was tested on more than 500 videos of sign language gestures and showed satisfactory results.
APA, Harvard, Vancouver, ISO, and other styles
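A simplified sketch of the windowed median-entropy selection the MME abstract describes might look like this. The window bookkeeping is a simplifying assumption and the mean-frame construction is omitted, so this is illustrative only, not the authors' exact procedure:

```python
import numpy as np

def frame_entropy(frame, bins=256):
    """Shannon entropy (bits) of a grayscale frame's intensity histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mme_keyframes(frames, k=10):
    """Pick one keyframe per window of size k // 2: the frame whose
    entropy is the window's median (upper median for even windows).
    frames: list of 2-D uint8 arrays. Returns selected frame indices."""
    ent = [frame_entropy(f) for f in frames]
    w = max(1, k // 2)
    keyframes = []
    for start in range(0, len(frames) - w + 1, w):
        window = ent[start:start + w]
        order = sorted(range(len(window)), key=lambda i: window[i])
        median_idx = order[len(window) // 2]
        keyframes.append(start + median_idx)
    return keyframes
```

The median acts as a robust pick: a window's most "typical" frame by information content is kept, rather than its extremes.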
More sources

Dissertations / Theses on the topic "Keyframes"

1

Holden, Daniel. "Reducing animator keyframes." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28998.

Full text
Abstract:
The aim of this doctoral thesis is to present a body of work aimed at reducing the time spent by animators manually constructing keyframed animation. To this end we present a number of state-of-the-art machine learning techniques applied to the domain of character animation. Data-driven tools for the synthesis and production of character animation have a good track record of success. In particular, they have been adopted thoroughly in the games industry, as they allow designers as well as animators to simply specify high-level descriptions of the animations to be created, and the rest is produced automatically. Even so, these techniques have not been thoroughly adopted in the film industry in the production of keyframe-based animation [Planet, 2012]. Because of this, the cost of producing high-quality keyframed animation remains very high, and the time of professional animators is increasingly precious. We present our work in four main chapters. We first tackle the key problem in the adoption of data-driven tools for keyframed animation: a problem called the inversion of the rig function. Secondly, we show the construction of a new tool for data-driven character animation called the motion manifold - a representation of motion constructed using deep learning that has a number of properties useful for animation research. Thirdly, we show how the motion manifold can be extended as a general tool for performing data-driven animation synthesis and editing. Finally, we show how these techniques developed for keyframed animation can also be adapted to advance the state of the art in the games industry.
APA, Harvard, Vancouver, ISO, and other styles
2

Sillén, Erik. "Robustifying SLAM using multiple cameras." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191247.

Full text
Abstract:
This master thesis is about modifying a certain monocular visual SLAM algorithm to address some of its limitations. The SLAM algorithm is not robust to quick camera motions or input images in which there are few visible features. A second camera and an inertial measurement unit were added to the hardware. A method for selecting the appropriate camera for tracking, depending on the estimated number of features, was then implemented to solve the issue of few features. Experiments and results show that this method works well for slow motions. A gyrometer threshold method, along with a motion model, was implemented and reviewed in this thesis to solve the issue of quick motions.
APA, Harvard, Vancouver, ISO, and other styles
3

Portari, Júnior Sérgio Carlos [UNESP]. "Um sistema para extração automática de keyframes a partir de fluxos de vídeo direcionado à reconstrução tridimensional de cenários virtuais." Universidade Estadual Paulista (UNESP), 2013. http://hdl.handle.net/11449/89563.

Full text
Abstract:
Using a virtual scenario on Digital TV became common with the advance of hardware and software technologies. However, to obtain a virtual scenario that persuades the viewer, photorealistic three-dimensional reconstructions can be used as a possible alternative. This work proposes a method for the pre-processing of a video captured in the real world, where frames appropriate for 3D reconstruction (keyframes) by the Structure From Motion (SFM) method are extracted. The processing for 3D reconstruction thus uses only the frames considered essential, reducing redundancies and gaps. Using this method it was possible to reduce the processing time of the 3D reconstruction. In this work, the proposed method was also compared to traditional methods, where there is no prior selection of keyframes, using different reconstruction tools based on the SFM method.
APA, Harvard, Vancouver, ISO, and other styles
4

Portari, Júnior Sérgio Carlos. "Um sistema para extração automática de keyframes a partir de fluxos de vídeo direcionado à reconstrução tridimensional de cenários virtuais /." Bauru, 2013. http://hdl.handle.net/11449/89563.

Full text
Abstract:
Using a virtual scenario on Digital TV became common with the advance of hardware and software technologies. However, to obtain a virtual scenario that persuades the viewer, photorealistic three-dimensional reconstructions can be used as a possible alternative. This work proposes a method for the pre-processing of a video captured in the real world, where frames appropriate for 3D reconstruction (keyframes) by the Structure From Motion (SFM) method are extracted. The processing for 3D reconstruction thus uses only the frames considered essential, reducing redundancies and gaps. Using this method it was possible to reduce the processing time of the 3D reconstruction. In this work, the proposed method was also compared to traditional methods, where there is no prior selection of keyframes, using different reconstruction tools based on the SFM method.
APA, Harvard, Vancouver, ISO, and other styles
5

Klement, Martin. "Fúze procedurální a keyframe animace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-235456.

Full text
Abstract:
The goal of this work is to create an application that combines procedural and keyframe animation with subsequent visualization. The composition of these two different animation techniques is used to animate a virtual character: one starts with interpolations from keyframe animation and then enhances them with procedural animation to properly fit the character's surroundings. The procedural part of the animation is obtained using forward and inverse kinematics. The whole application is written in C++, uses the GLM math library for computations, and uses OpenGL and GLUT for the final visualization.
APA, Harvard, Vancouver, ISO, and other styles
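The forward-kinematics step mentioned in the abstract — computing limb positions from joint angles so that procedural motion can be layered over keyframe interpolation — reduces, in the planar case, to accumulating joint rotations along the chain. A minimal illustrative sketch (not the thesis code, which is C++ with GLM):

```python
import numpy as np

def forward_kinematics_2d(lengths, angles):
    """End-effector position of a planar joint chain via forward kinematics.

    lengths: segment lengths; angles: joint angles in radians, each
    relative to the previous segment. Returns the (x, y) end-effector
    position, accumulating rotation along the chain."""
    x = y = 0.0
    theta = 0.0
    for seg_len, a in zip(lengths, angles):
        theta += a
        x += seg_len * np.cos(theta)
        y += seg_len * np.sin(theta)
    return x, y
```

Inverse kinematics, the other half mentioned in the abstract, then solves the reverse problem: finding joint angles that place the end effector at a desired target.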
6

Marlow, Gregory. "Week 05, Video 01: Keyframe Manipulation." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/digital-animation-videos-oer/37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Eckstrand, Eric C. "Automatic keyframe summarization of user-generated video." Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/42614.

Full text
Abstract:
Approved for public release; distribution is unlimited.
The explosive growth of user-generated video presents opportunities and challenges. The videos may possess valuable information that was once unavailable. On the other hand, the information may be buried or difficult to access with traditional methods. Automatic keyframe video summarization technologies exist that attempt to address this problem. A keyframe summary can often be viewed more quickly than the underlying video. However, a theoretical framework for objectively assessing keyframe summary quality has been absent. The work presented here bridges this gap by presenting a semantically high-level, stakeholder-centered evaluation framework. The framework can capture common stakeholder concerns and quantitatively measure the extent to which they are satisfied by keyframe summaries. With this framework, keyframe summary stakeholders and algorithm developers can now identify when success has been achieved. This work also develops a number of novel keyframe summarization algorithms and shows, using the evaluation framework, that they outperform baseline methods.
APA, Harvard, Vancouver, ISO, and other styles
8

Clarkson, Adam James. "Keyframe tagging : unambiguous content delivery for augmented reality environments." Thesis, Durham University, 2015. http://etheses.dur.ac.uk/11332/.

Full text
Abstract:
Context: When considering the use of Augmented Reality to provide navigation cues in a completely unknown environment, the content must be delivered into the environment with a repeatable level of accuracy such that the navigation cues can be understood and interpreted correctly by the user. Aims: This thesis aims to investigate whether a still-image-based reconstruction of an Augmented Reality environment can be used to develop a content delivery system that provides a repeatable level of accuracy for content placement. It also investigates whether manipulation of the properties of a Spatial Marker object is sufficient to reduce object selection ambiguity in an Augmented Reality environment. Methods: A series of experiments were conducted to test the separate aspects of these aims. Participants were required to use the developed Keyframe Tagging tool to introduce virtual navigation markers into an Augmented Reality environment, and also to identify objects within an Augmented Reality environment that was signposted using different Virtual Spatial Markers. This tested the accuracy and repeatability of content placement of the approach, while also testing participants' ability to reliably interpret virtual signposts within an Augmented Reality environment. Finally, the Keyframe Tagging tool was tested by an expert user against a pre-existing solution to evaluate the time savings offered by this approach against the overall accuracy of content placement. Results: The average accuracy score for content placement across 20 participants was 64%, categorised as "Good" when compared with an expert benchmark result, while no tags were considered "incorrect" and only 8 of 200 tags were considered to have "Poor" accuracy, supporting the Keyframe Tagging approach.
In terms of object identification from virtual cues, some of the predicted cognitive links between virtual marker property and target object did not surface, though participants reliably identified the correct objects across several trials. Conclusions: This thesis has demonstrated that accurate content delivery can be achieved through the use of a still image based reconstruction of an Augmented Reality environment. By using the Keyframe Tagging approach, content can be placed quickly and with a sufficient level of accuracy to demonstrate its utility in the scenarios outlined within this thesis. There are some observable limitations to the approach, which are discussed with the proposals for further work in this area.
APA, Harvard, Vancouver, ISO, and other styles
9

Degenhardt, Richard Kennedy III. "Self-collision avoidance through keyframe interpolation and optimization-based posture prediction." Thesis, University of Iowa, 2014. https://ir.uiowa.edu/etd/1446.

Full text
Abstract:
Simulating realistic human behavior on a virtual avatar presents a difficult task. Because the simulated environment does not adhere to the same scientific principles that we do in the existent world, the avatar becomes capable of achieving infeasible postures. In an attempt to obtain realistic human simulation, real world constraints are imposed onto the non-sentient being. One such constraint, and the topic of this thesis, is self-collision avoidance. For the purposes of this topic, a posture will be defined solely as a collection of angles formed by each joint on the avatar. The goal of self-collision avoidance is to eliminate the formation of any posture where multiple body parts are attempting to occupy the exact same space. My work necessitates an extension of this definition to also include collision avoidance with objects attached to the body, such as a backpack or armor. In order to prevent these collisions from occurring, I have implemented an effort-based approach for correcting afflicted postures. This technique specifically pertains to postures that are sequenced together with the objective of animating the avatar. As such, the animation's coherence and defining characteristics must be preserved. My approach to this problem is unique in that it strategically blends the concept of keyframe interpolation with an optimization-based strategy for posture prediction. Although there has been considerable work done with methods for keyframe interpolation, there has been minimal progress towards integrating a realistic collision response strategy. Additionally, I will test this optimization-based approach through the use of a complex kinematic human model and investigate the use of the results as input to an existing dynamic motion prediction system.
APA, Harvard, Vancouver, ISO, and other styles
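Keyframe interpolation over postures defined as joint-angle vectors, as in the abstract above, can be illustrated with simple linear blending. This is a hypothetical sketch; the thesis combines interpolation with optimization-based posture prediction for self-collision correction, which is not shown here:

```python
import numpy as np

def interpolate_posture(keyframes, t):
    """Linearly interpolate a posture (vector of joint angles) at time t.

    keyframes: list of (time, angles) pairs sorted by time. Times outside
    the keyframe range are clamped to the first/last posture."""
    times = [kt for kt, _ in keyframes]
    if t <= times[0]:
        return np.asarray(keyframes[0][1], dtype=float)
    if t >= times[-1]:
        return np.asarray(keyframes[-1][1], dtype=float)
    # Find the keyframe interval containing t and blend its endpoints.
    j = next(i for i in range(1, len(times)) if times[i] >= t)
    t0, q0 = keyframes[j - 1]
    t1, q1 = keyframes[j]
    alpha = (t - t0) / (t1 - t0)
    return (1 - alpha) * np.asarray(q0, float) + alpha * np.asarray(q1, float)
```

A collision-avoidance pass, as the thesis describes, would then adjust interpolated postures that bring body parts into overlap while preserving the animation's defining characteristics.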
10

Concha, Edison Kleiber Titito. "Map point optimization in keyframe-based SLAM using covisibbility graph and information fusion." Universidade Federal do Rio Grande do Sul, 2018. http://hdl.handle.net/10183/180265.

Full text
Abstract:
Keyframe-based monocular SLAM (Simultaneous Localization and Mapping) is one of the main visual SLAM approaches, used to estimate the camera motion together with the map reconstruction over selected frames. These techniques represent the environment by map points located in three-dimensional space that can be recognized and located in the frames. However, many of these techniques cannot combine map points corresponding to the same three-dimensional point, or detect when a map point has become an outlier or obsolete information that can be discarded.
In this work, we present a robust method to maintain a refined map. This approach uses the covisibility graph and an algorithm based on information fusion to build a probabilistic map, which explicitly models outlier measurements. In addition, we incorporate a pruning mechanism to reduce redundant information and remove outliers. In this way our approach manages the map size while maintaining essential information about the environment. Finally, in order to evaluate the performance of our method, we incorporate it into an ORB-SLAM system and measure the accuracy achieved on publicly available benchmark datasets which contain indoor image sequences recorded with a hand-held monocular camera.
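The map-point pruning idea in this abstract can be sketched with a simple visibility-ratio heuristic. This is a minimal illustration only, not the thesis's actual information-fusion algorithm: the `MapPoint` fields, the 0.25 found-ratio threshold, and the minimum-observation count are assumptions borrowed from common keyframe-based SLAM practice.

```python
from dataclasses import dataclass

@dataclass
class MapPoint:
    """A 3-D landmark with bookkeeping counters used for outlier culling."""
    position: tuple      # (x, y, z) in world coordinates
    n_visible: int = 0   # keyframes whose frustum should contain the point
    n_found: int = 0     # keyframes where the point was actually matched

    def found_ratio(self) -> float:
        """Fraction of predicted visibilities confirmed by a real match."""
        return self.n_found / self.n_visible if self.n_visible else 0.0

def cull_outliers(points, min_ratio=0.25, min_observations=3):
    """Keep only map points that are matched often enough relative to how
    often they should have been visible; the rest are treated as outliers
    or obsolete information and dropped from the map."""
    return [p for p in points
            if p.n_visible >= min_observations and p.found_ratio() >= min_ratio]
```

A rarely matched point (e.g. seen in 1 of 10 predicted keyframes) is removed, which is the map-size-versus-essential-information trade-off the abstract describes.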
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Keyframes"

1

Tinkcom, Matthew, and Amy Villarejo, eds. Keyframes. Taylor & Francis, 2001. http://dx.doi.org/10.4324/9780203279632.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tinkcom, Matthew, and Amy Villarejo, eds. Keyframes: Popular cinema and cultural studies. Routledge, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tinkcom, Matthew, and Amy Villarejo, eds. Keyframes: Popular Cinema and Cultural Studies. Routledge, 2003. http://dx.doi.org/10.4324/9780203165195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tinkcom, M. Keyframes: Popular Cinema and Cultural Studies. Routledge, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Villarejo, Amy, and Matthew Tinkcom. Keyframes: Popular Cinema and Cultural Studies. Taylor & Francis Group, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Keyframes: Popular Cinema and Cultural Studies. Routledge, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Keyframes: Popular cinema and cultural studies. New York, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Keyframes"

1

Shaw, Austin, John Colette, and Danielle Shaw. "Between the Keyframes." In Motion Design Toolkit. Routledge, 2022. http://dx.doi.org/10.4324/9781003200529-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Powers, David. "Animating with CSS Keyframes." In Beginning CSS3. Apress, 2012. http://dx.doi.org/10.1007/978-1-4302-4474-5_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kumar, Krishan, Deepti D. Shrimankar, and Navjot Singh. "Key-Lectures: Keyframes Extraction in Video Lectures." In Advances in Intelligent Systems and Computing. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0923-6_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Varona, X., J. Gonzàlez, F. X. Roca, and J. J. Villanueva. "Automatic Selection of Keyframes for Activity Recognition." In Articulated Motion and Deformable Objects. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/10722604_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pei, Yunhua, Zhiyi Huang, Wenjie Yu, Meili Wang, and Xuequan Lu. "A Cascaded Approach for Keyframes Extraction from Videos." In Communications in Computer and Information Science. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63426-1_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lokoč, Jakub, Klaus Schoeffmann, and Manfred del Fabro. "Dynamic Hierarchical Visualization of Keyframes in Endoscopic Video." In MultiMedia Modeling. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-14442-9_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gan, Timothy S. Y., and Tom W. Drummond. "Vision-Based Augmented Reality Visual Guidance with Keyframes." In Advances in Computer Graphics. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11784203_67.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Costa, Bernardo F., and Claudio Esperança. "Motion Capture Analysis and Reconstruction Using Spatial Keyframes." In Communications in Computer and Information Science. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41590-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Fu, Yan, Chunlin Xu, and Mei Wang. "Secondary Filter Keyframes Extraction Algorithm Based on Adaptive Top-K." In 2nd EAI International Conference on Robotic Sensor Networks. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-17763-8_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hu, Feiyan, and Alan F. Smeaton. "Image Aesthetics and Content in Selecting Memorable Keyframes from Lifelogs." In MultiMedia Modeling. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73603-7_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Keyframes"

1

Huang, Hsin-Ping, Yu-Chuan Su, and Ming-Hsuan Yang. "Generating Long-Take Videos via Effective Keyframes and Guidance." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025. https://doi.org/10.1109/wacv61041.2025.00365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Xu, Bingqing, Ningning Wan, Xinyi Su, and Quanfu Yang. "Few-shot action recognition based on optical flow keyframes and hourglass convolution." In 2024 4th International Conference on Electronic Information Engineering and Computer Communication (EIECC). IEEE, 2024. https://doi.org/10.1109/eiecc64539.2024.10929582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yeola, Manjusha, and Sunita Barve. "Comparative Study of Video Summarization Using Keyframes with State-of-the-Art Techniques." In 2024 IEEE 16th International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2024. https://doi.org/10.1109/cicn63059.2024.10847399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sato, Ayumu, Ryo Moriai, and Suguru Saito. "One-to-many line matching across keyframes using minimum weight edge cover problem." In 2024 International Conference on Cyberworlds (CW). IEEE, 2024. https://doi.org/10.1109/cw64301.2024.00015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fassold, Hannes. "Faster than real-time detection of shot boundaries, sampling structure and dynamic keyframes in video." In 2024 8th International Conference on Imaging, Signal Processing and Communications (ICISPC). IEEE, 2024. http://dx.doi.org/10.1109/icispc63824.2024.00013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

N, Sabarivasan, and Senthil Kumar Thangavel. "Keyframe Extraction Approaches for Intelligent Video Analysis." In 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL). IEEE, 2025. https://doi.org/10.1109/icsadl65848.2025.10933308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hu, Zhenzhen, Zhenshan Wang, Zijie Song, and Richang Hong. "Dual Video Summarization: From Frames to Captions." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/94.

Full text
Abstract:
Video summarization and video captioning both condense video content, from the visual and textual perspectives respectively, i.e., keyframe selection and language description generation. Existing video-and-language learning models commonly sample multiple frames for training instead of observing all of them. These sampled frames greatly improve computational efficiency, but do they represent the original video content well enough without further redundancy? In this work, we propose a dual video summarization framework and verify it in the context of video captioning. Given the video frames, we first extract the visual representation based on a ViT model fine-tuned on the video-text domain. Then we summarize the keyframes according to the frame-level score. To compress the number of keyframes as much as possible while ensuring the quality of captioning, we learn a cross-modal video summarizer to select the most semantically consistent frames according to the pseudo score label. The top-K frames (K is no more than 3% of the entire video) are chosen to form the video representation. Moreover, to evaluate the static appearance and temporal information of the video, we design a ranking scheme for the video representation from two aspects: feature-oriented and sequence-oriented. Finally, we generate the descriptions with a lightweight LSTM decoder. The experimental results on the MSR-VTT and MSVD datasets reveal that, for a generative task such as video captioning, a small number of keyframes can convey the same semantic information and perform as well on captioning as the original sampling, or even better.
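The top-K keyframe selection step described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions: `frame_scores` stands in for the learned frame-level scores, the 3% cap mirrors the paper's K ≤ 3% constraint, and the function name is hypothetical.

```python
import math

def select_keyframes(frame_scores, max_fraction=0.03):
    """Pick the K highest-scoring frames, with K capped at a fraction of the
    video length, and return their indices in temporal order."""
    k = max(1, math.floor(len(frame_scores) * max_fraction))
    # Rank frame indices by score, keep the top K.
    top = sorted(range(len(frame_scores)),
                 key=lambda i: frame_scores[i], reverse=True)[:k]
    # Restore temporal order so the summary plays back chronologically.
    return sorted(top)
```

For a 100-frame video this yields at most 3 keyframes, matching the compression ratio the abstract reports.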
APA, Harvard, Vancouver, ISO, and other styles
8

Akgun, Baris, Maya Cakmak, Jae Wook Yoo, and Andrea Lockerd Thomaz. "Trajectories and keyframes for kinesthetic teaching." In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '12). ACM Press, 2012. http://dx.doi.org/10.1145/2157689.2157815.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guo, Xiaojun, and Fangxia Shi. "Quick extracting keyframes from compressed video." In 2010 2nd International Conference on Computer Engineering and Technology. IEEE, 2010. http://dx.doi.org/10.1109/iccet.2010.5485659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Krosnick, Rebecca, Sang Won Lee, Walter S. Lasecki, and Steve Oney. "Expresso: Building Responsive Interfaces with Keyframes." In 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, 2018. http://dx.doi.org/10.1109/vlhcc.2018.8506516.

Full text
APA, Harvard, Vancouver, ISO, and other styles