
Journal articles on the topic 'Keyframe'



Consult the top 50 journal articles for your research on the topic 'Keyframe.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Younessian, Ehsan, and Deepu Rajan. "Content-Based Keyframe Clustering Using Near Duplicate Keyframe Identification." International Journal of Multimedia Data Engineering and Management 2, no. 1 (2011): 1–21. http://dx.doi.org/10.4018/jmdem.2011010101.

Abstract:
In this paper, the authors propose an effective content-based clustering method for keyframes of news video stories using the Near-Duplicate Keyframe (NDK) identification concept. The authors first investigate the near-duplicate relationship, a content-based visual similarity across keyframes, through the presented NDK identification algorithm, and assign a near-duplicate score to each pair of keyframes within the story. Using an efficient keypoint matching technique followed by matching-pattern analysis, this NDK identification algorithm can handle …
2

Qu, Zhong, and Teng Fei Gao. "An Improved Algorithm of Keyframe Extraction for Video Summarization." Advanced Materials Research 225-226 (April 2011): 807–11. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.807.

Abstract:
Video segmentation and keyframe extraction are the basis of content-based video retrieval (CBVR), in which keyframe selection plays the central role. In this paper, we propose an improved approach to keyframe extraction for video summarization. In our approach, videos are first segmented into shots according to video content by an improved histogram-based method that uses histogram intersection and non-uniform partitioning and weighting. Then, within each shot, keyframes are determined by calculating image entropy …
3

Gu, Lingchen, Ju Liu, and Aixi Qu. "Performance Evaluation and Scheme Selection of Shot Boundary Detection and Keyframe Extraction in Content-Based Video Retrieval." International Journal of Digital Crime and Forensics 9, no. 4 (2017): 15–29. http://dx.doi.org/10.4018/ijdcf.2017100102.

Abstract:
The advancement of multimedia technology has produced a large number of videos, so it is important to know how to retrieve information from video, especially for crime prevention and forensics. For the convenience of retrieving video data, content-based video retrieval (CBVR) has gained great popularity. Aiming to improve retrieval performance, we focus on two key technologies: shot boundary detection and keyframe extraction. After comparison with pixel analysis and the chi-square histogram, the histogram-based method is chosen in this paper. We then combine it with an adaptive threshold …
4

Kaavya, S., and G. G. Lakshmi Priya. "Static Shot based Keyframe Extraction for Multimedia Event Detection." International Journal of Computer Vision and Image Processing 6, no. 1 (2016): 28–40. http://dx.doi.org/10.4018/ijcvip.2016010103.

Abstract:
Nowadays, processing multimedia information incurs high computational cost due to its large size, especially for video processing. To reduce the size of the video and to save users the time of attending to the whole video, video summarization is adopted, which can be performed using keyframe extraction. To perform this task, a new, simple keyframe extraction method is proposed using a divide-and-conquer strategy, in which a Scale-Invariant Feature Transform (SIFT)-based feature representation vector is extracted and the whole video is categorized into …
5

Saqib, Shazia, and Syed Kazmi. "Video Summarization for Sign Languages Using the Median of Entropy of Mean Frames Method." Entropy 20, no. 10 (2018): 748. http://dx.doi.org/10.3390/e20100748.

Abstract:
Multimedia information requires large repositories of audio-video data. Retrieval and delivery of video content is a very time-consuming process and a great challenge for researchers. An efficient approach for faster browsing of large video collections and more efficient content indexing and access is video summarization. Compressing data through the extraction of keyframes is a solution to these challenges. A keyframe is a frame representative of the salient features of the video. The output frames must represent the original video in temporal order. The proposed research presents a method …
6

Chao, Gwo-Cheng, Yu-Pao Tsai, and Shyh-Kang Jeng. "Augmented keyframe." Journal of Visual Communication and Image Representation 21, no. 7 (2010): 682–92. http://dx.doi.org/10.1016/j.jvcir.2010.05.002.

7

Zhang, Y., C. Lan, Q. Shi, Z. Cui, and W. Sun. "VIDEO IMAGE TARGET RECOGNITION AND GEOLOCATION METHOD FOR UAV BASED ON LANDMARKS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W16 (September 17, 2019): 285–91. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w16-285-2019.

Abstract:
Relying on landmarks for robust geolocation of drones and targets is one of the most important approaches in GPS-denied environments. Small drones have no direct orientation capability without a high-precision IMU. This paper presents an automated real-time matching and geolocation algorithm between video keyframes and a landmark database, based on the integration of visual SLAM and the YOLOv3 deep-learning network. The algorithm mainly extracts landmarks from the drone video keyframe images to improve target geolocation accuracy, and designs …
8

Rashmi B S and Nagendraswamy H S. "Effective Video Shot Boundary Detection and Keyframe Selection using Soft Computing Techniques." International Journal of Computer Vision and Image Processing 8, no. 2 (2018): 27–48. http://dx.doi.org/10.4018/ijcvip.2018040102.

Abstract:
The amount of video data generated and made publicly available has increased tremendously in today's digital era. Analyzing these huge video repositories requires effective and efficient content-based video analysis systems. Shot boundary detection and keyframe extraction are the two major tasks in video analysis. In this direction, a method for detecting abrupt shot boundaries and extracting a representative keyframe from each video shot is proposed. These objectives are achieved by incorporating the concepts of fuzzy sets and intuitionistic fuzzy sets. Shot boundaries are detected using …
9

Khan, Jalaluddin, Jian Ping Li, Amin Ul Haq, et al. "Efficient secure surveillance on smart healthcare IoT system through cosine-transform encryption." Journal of Intelligent & Fuzzy Systems 40, no. 1 (2021): 1417–42. http://dx.doi.org/10.3233/jifs-201770.

Abstract:
Emerging IoT (Internet of Things) technologies form a prototype of the smart connectivity ecosystem. Appropriately connected in a smart healthcare system, these ecosystems enable fine-grained monitoring of patients, a well-organized diagnosis process, and intensive support and care beyond traditional healthcare operations. But alongside these highly technological adaptations, the personal information of patients is at risk of data leakage and privacy theft in the current revolution. Concerning security …
10

Saitoh, Takeshi, Kazutoshi Morishita, and Ryosuke Konishi. "Keyframe Extraction from Utterance Scene and Keyframe-based Word Lip Reading." IEEJ Transactions on Electronics, Information and Systems 131, no. 2 (2011): 418–24. http://dx.doi.org/10.1541/ieejeiss.131.418.

11

Guan, Genliang, Zhiyong Wang, Shiyang Lu, Jeremiah Da Deng, and David Dagan Feng. "Keypoint-Based Keyframe Selection." IEEE Transactions on Circuits and Systems for Video Technology 23, no. 4 (2013): 729–34. http://dx.doi.org/10.1109/tcsvt.2012.2214871.

12

Poomhiran, L., P. Meesad, and S. Nuanmeesri. "Improving the Recognition Performance of Lip Reading Using the Concatenated Three Sequence Keyframe Image Technique." Engineering, Technology & Applied Science Research 11, no. 2 (2021): 6986–92. http://dx.doi.org/10.48084/etasr.4102.

Abstract:
This paper proposes a lip-reading method based on convolutional neural networks applied to the Concatenated Three Sequence Keyframe Image (C3-SKI), consisting of (a) the Start-Lip Image (SLI), (b) the Middle-Lip Image (MLI), and (c) the End-Lip Image (ELI), which marks the end of the pronunciation of the syllable. The lip area's image dimensions were reduced to 32×32 pixels per frame, and three keyframes concatenated together were used to represent one syllable, with a dimension of 96×32 pixels, for visual speech recognition. Every three concatenated keyframes representing a syllable are selected …
13

Van Wallendael, Glenn, Hannes Mareen, Johan Vounckx, and Peter Lambert. "Keyframe Insertion: Enabling Low-Latency Random Access and Packet Loss Repair." Electronics 10, no. 6 (2021): 748. http://dx.doi.org/10.3390/electronics10060748.

Abstract:
From a video coding perspective, there are two challenges when performing live video distribution over error-prone networks, such as wireless networks: random access and packet loss repair. There is a scarcity of solutions that do not impact steady-state usage and users with reliable connections. The proposed solution minimizes this impact by complementing a compression-efficient video stream with a companion stream consisting solely of keyframes. Although the core idea is not new, this paper is the first work to provide the restrictions and modifications necessary to make this idea work using …
14

Khattak, Shehryar, Christos Papachristos, and Kostas Alexis. "Keyframe‐based thermal–inertial odometry." Journal of Field Robotics 37, no. 4 (2019): 552–79. http://dx.doi.org/10.1002/rob.21932.

15

Treuille, Adrien, Antoine McNamara, Zoran Popović, and Jos Stam. "Keyframe control of smoke simulations." ACM Transactions on Graphics 22, no. 3 (2003): 716–23. http://dx.doi.org/10.1145/882262.882337.

16

Akgun, Baris, Maya Cakmak, Karl Jiang, and Andrea L. Thomaz. "Keyframe-based Learning from Demonstration." International Journal of Social Robotics 4, no. 4 (2012): 343–55. http://dx.doi.org/10.1007/s12369-012-0160-0.

17

Schoeffmann, Klaus, Manfred Del Fabro, Tibor Szkaliczki, Laszlo Böszörmenyi, and Jörg Keckstein. "Keyframe extraction in endoscopic video." Multimedia Tools and Applications 74, no. 24 (2014): 11187–206. http://dx.doi.org/10.1007/s11042-014-2224-7.

18

H, Moh Taufik, M. Suyanto, and Hanif Al Fatta. "PERBANDINGAN METODE SCRIPT DAN KEYFRAME PADA PEMBUATAN ANIMASI TIGA DIMENSI." Jurnal Informa 6, no. 1 (2020): 20–22. http://dx.doi.org/10.46808/informa.v6i1.168.

Abstract (translated from Indonesian):
As the animation industry has developed, demand has grown greatly across both materials and process. Animation is divided into two-dimensional and three-dimensional animation, and there are many ways of producing three-dimensional animation: stop-motion, cut-out, motion capture, puppetry, claymation, keyframe, and script. Today, 3D animators still use the keyframe and script techniques. This comparison aims to determine which method is more suitable for producing three-dimensional animation without pauses and with smaller files.
19

Liu, Xiaoxi, Ju Liu, Lingchen Gu, and Yannan Ren. "Keyframe-Based Vehicle Surveillance Video Retrieval." International Journal of Digital Crime and Forensics 10, no. 4 (2018): 52–61. http://dx.doi.org/10.4018/ijdcf.2018100104.

Abstract:
This article describes how, owing to the diversification of electronic equipment in public security forensics, vehicle surveillance video has attracted attention as a burgeoning medium. Vehicle surveillance videos contain useful evidence, and video retrieval can help us find it. To retrieve the evidence videos accurately and effectively, convolutional neural networks (CNNs) are widely applied to improve performance in surveillance video retrieval. This article proposes a vehicle surveillance video retrieval method with deep features derived from a CNN and with …
20

Yang, Xue, and Zhicheng Wei. "Genetic Keyframe Extraction for Soccer Video." Procedia Engineering 23 (2011): 713–17. http://dx.doi.org/10.1016/j.proeng.2011.11.2570.

21

K, Jayasree, and Sumam Mary Idicula. "Enhanced Video Classification System Using a Block-Based Motion Vector." Information 11, no. 11 (2020): 499. http://dx.doi.org/10.3390/info11110499.

Abstract:
The main objective of this work was to design and implement a support vector machine-based classification system to classify video data into predefined classes. Video data has to be structured and indexed for any video classification methodology. Video structure analysis involves shot boundary detection and keyframe extraction. Shot boundary detection is performed using a two-pass block-based adaptive threshold method. The seek-spread strategy is used for keyframe extraction. In most video classification methods, the selection of features is important. The selected features contribute to the …
22

Bi, Shusheng, Dongsheng Yang, and Yueri Cai. "Automatic Calibration of Odometry and Robot Extrinsic Parameters Using Multi-Composite-Targets for a Differential-Drive Robot with a Camera." Sensors 18, no. 9 (2018): 3097. http://dx.doi.org/10.3390/s18093097.

Abstract:
This paper simultaneously and automatically calibrates odometry parameters and the relative pose between a monocular camera and a robot. Most camera pose estimation methods use natural features or artificial landmark tools. However, natural features suffer from mismatches and scale ambiguity, and a large-scale precision landmark tool is challenging to make. To solve these problems, we propose an automatic process that combines multiple composite targets, selects keyframes, and estimates keyframe poses. The composite target consists of an ArUco marker and a checkerboard pattern. First, an analytical …
23

Rizali, Muhammad, and Nurfansyah Nurfansyah. "GALERI ANIMASI DI BANJARBARU." LANTING JOURNAL OF ARCHITECTURE 9, no. 1 (2020): 204–11. http://dx.doi.org/10.20527/lanting.v9i1.558.

Abstract:
The development of animation grows rapidly every year, and its use currently covers all fields, from the world of entertainment to the world of business. The purpose of designing this Animation Gallery is to further introduce animation to the public. The Animation Gallery will be designed using analogy methods with a Hi-Tech Architecture theme. Animation is analogized to an architectural design by borrowing the principle of movement in animation itself, namely the stages of an object moving in an animation. Generally, movement in animation works with two elements, …
24

Kim, Ye Jun, Jae Hyung Jung, and Chan Gook Park. "Adaptive Keyframe-Threshold Based Visual-Inertial Odometry." Journal of Institute of Control, Robotics and Systems 26, no. 9 (2020): 747–53. http://dx.doi.org/10.5302/j.icros.2020.20.0075.

25

Mundur, Padmavathi, Yong Rao, and Yelena Yesha. "Keyframe-based video summarization using Delaunay clustering." International Journal on Digital Libraries 6, no. 2 (2006): 219–32. http://dx.doi.org/10.1007/s00799-005-0129-9.

26

Şaykol, Ediz. "Keyframe labeling technique for surveillance event classification." Optical Engineering 49, no. 11 (2010): 117203. http://dx.doi.org/10.1117/1.3509270.

27

Agarwala, Aseem, Aaron Hertzmann, David H. Salesin, and Steven M. Seitz. "Keyframe-based tracking for rotoscoping and animation." ACM Transactions on Graphics 23, no. 3 (2004): 584–91. http://dx.doi.org/10.1145/1015706.1015764.

28

Kim, Jungho, Kuk-Jin Yoon, and In So Kweon. "Bayesian filtering for keyframe-based visual SLAM." International Journal of Robotics Research 34, no. 4-5 (2014): 517–31. http://dx.doi.org/10.1177/0278364914550215.

29

Girgensohn, A., J. Boreczky, and L. Wilcox. "Keyframe-based user interfaces for digital video." Computer 34, no. 9 (2001): 61–67. http://dx.doi.org/10.1109/2.947093.

30

Dong, Zilong, Guofeng Zhang, Jiaya Jia, and Hujun Bao. "Efficient keyframe-based real-time camera tracking." Computer Vision and Image Understanding 118 (January 2014): 97–110. http://dx.doi.org/10.1016/j.cviu.2013.08.005.

31

Jin, Chao, Thomas Fevens, and Sudhir Mudur. "Optimized keyframe extraction for 3D character animations." Computer Animation and Virtual Worlds 23, no. 6 (2012): 559–68. http://dx.doi.org/10.1002/cav.1471.

32

Şaykol, Ediz. "Keyframe-based video mosaicing for historical Ottoman documents." Turkish Journal of Electrical Engineering & Computer Sciences 24 (2016): 4254–66. http://dx.doi.org/10.3906/elk-1502-230.

33

Deshpande, Akshay, Vedang Bamnote, Bhakti Patil, and Ashvini A. "Review of Keyframe Extraction Techniques for Video Summarization." International Journal of Computer Applications 180, no. 39 (2018): 40–43. http://dx.doi.org/10.5120/ijca2018917042.

34

Kumar, Krishan. "EVS-DK: Event video skimming using deep keyframe." Journal of Visual Communication and Image Representation 58 (January 2019): 345–52. http://dx.doi.org/10.1016/j.jvcir.2018.12.009.

35

Voulodimos, Athanasios, Ioannis Rallis, and Nikolaos Doulamis. "Physics-based keyframe selection for human motion summarization." Multimedia Tools and Applications 79, no. 5-6 (2018): 3243–59. http://dx.doi.org/10.1007/s11042-018-6935-z.

36

Priya, G. G. Lakshmi, and S. Domnic. "Shot boundary-based keyframe extraction for video summarisation." International Journal of Computational Intelligence Studies 3, no. 2/3 (2014): 157. http://dx.doi.org/10.1504/ijcistudies.2014.062728.

37

Kusumoto, Katsutoshi, Yoshinori Dobashi, and Tsuyoshi Yamamoto. "Keyframe Control of Cumuliform Clouds with Feedback Control." Journal of The Institute of Image Information and Television Engineers 63, no. 3 (2009): 355–60. http://dx.doi.org/10.3169/itej.63.355.

38

Lee, S. H., and K. R. Kwon. "3D Keyframe Animation Watermarking Based on Orientation Interpolator." IEICE Transactions on Information and Systems E90-D, no. 11 (2007): 1751–61. http://dx.doi.org/10.1093/ietisy/e90-d.11.1751.

39

Chao, Gwo-Cheng, Yu-Pao Tsai, and Shyh-Kang Jeng. "Augmented 3-D Keyframe Extraction for Surveillance Videos." IEEE Transactions on Circuits and Systems for Video Technology 20, no. 11 (2010): 1395–408. http://dx.doi.org/10.1109/tcsvt.2010.2087491.

40

Leutenegger, Stefan, Simon Lynen, Michael Bosse, Roland Siegwart, and Paul Furgale. "Keyframe-based visual–inertial odometry using nonlinear optimization." International Journal of Robotics Research 34, no. 3 (2014): 314–34. http://dx.doi.org/10.1177/0278364914554813.

41

Terra, Sílvio César Lizana, and Ronald Anthony Metoyer. "A performance-based technique for timing keyframe animations." Graphical Models 69, no. 2 (2007): 89–105. http://dx.doi.org/10.1016/j.gmod.2006.09.002.

42

Huang, Ke-Sen, Chun-Fa Chang, Yu-Yao Hsu, and Shi-Nine Yang. "Key Probe: a technique for animation keyframe extraction." Visual Computer 21, no. 8-10 (2005): 532–41. http://dx.doi.org/10.1007/s00371-005-0316-0.

43

Borth, Damian, Adrian Ulges, Christian Schulze, and Thomas M. Breuel. "Keyframe Extraktion für Video-Annotation und Video-Zusammenfassung." Informatik-Spektrum 32, no. 1 (2008): 50–53. http://dx.doi.org/10.1007/s00287-008-0264-y.

44

Roberts, Richard, J. P. Lewis, Ken Anjyo, Jaewoo Seo, and Yeongho Seol. "Optimal and interactive keyframe selection for motion capture." Computational Visual Media 5, no. 2 (2019): 171–91. http://dx.doi.org/10.1007/s41095-019-0138-z.

45

Yasin, Hashim, Mazhar Hussain, and Andreas Weber. "Keys for Action: An Efficient Keyframe-Based Approach for 3D Action Recognition Using a Deep Neural Network." Sensors 20, no. 8 (2020): 2226. http://dx.doi.org/10.3390/s20082226.

Abstract:
In this paper, we propose a novel and efficient framework for 3D action recognition using a deep learning architecture. First, we develop a 3D normalized pose space that consists only of 3D normalized poses, generated by discarding translation and orientation information. From these poses, we extract joint features and employ them in a Deep Neural Network (DNN) to learn the action model. The architecture of our DNN consists of two hidden layers with the sigmoid activation function and an output layer with the softmax function. Furthermore, we propose a keyframe extraction …
46

Zhang, Chaofan, Yong Liu, Fan Wang, Yingwei Xia, and Wen Zhang. "VINS-MKF: A Tightly-Coupled Multi-Keyframe Visual-Inertial Odometry for Accurate and Robust State Estimation." Sensors 18, no. 11 (2018): 4036. http://dx.doi.org/10.3390/s18114036.

Abstract:
State estimation is crucial for robot autonomy, and visual odometry (VO) has received significant attention in the robotics field because it can provide accurate state estimation. However, the accuracy and robustness of most existing VO methods are degraded in complex conditions, owing to the limited field of view (FOV) of the camera used. In this paper, we present a novel tightly-coupled multi-keyframe visual-inertial odometry (called VINS-MKF), which can provide accurate and robust state estimation for robots in an indoor environment. We first modify monocular ORB-SLAM (Oriented FAST and …
47

Ji, Hyesung, Danial Hooshyar, Kuekyeng Kim, and Heuiseok Lim. "A semantic-based video scene segmentation using a deep neural network." Journal of Information Science 45, no. 6 (2018): 833–44. http://dx.doi.org/10.1177/0165551518819964.

Abstract:
Video scene segmentation is very important research in the field of computer vision because it supports efficient storage, indexing, and retrieval of videos. This kind of scene segmentation cannot be achieved by calculating only the similarity of low-level features present in the video; high-level features should also be considered for better performance. Even though much research has been conducted on video scene segmentation, most of these studies have failed to segment a video into scenes semantically. Thus, in this study, we propose a Deep-learning Semantic-based Scene-segmentation …
48

Li, M., and F. Rottensteiner. "VISION-BASED INDOOR LOCALIZATION VIA A VISUAL SLAM APPROACH." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 827–33. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-827-2019.

Abstract:
With increasing interest in indoor location-based services, vision-based indoor localization techniques have attracted much attention from both academia and industry. Inspired by the development of simultaneous localization and mapping (SLAM), we present a visual SLAM-based approach to achieving a 6-degrees-of-freedom (DoF) pose in indoor environments. First, the indoor scene is explored with a keyframe-based global mapping technique, which generates a database from a sequence of images covering the entire scene. After the exploration, …
49

Yang, Eun-Sung, and Gon-Woo Kim. "Merged Keyframe Extraction Method for 3D LiDAR-based GraphSLAM." Journal of Institute of Control, Robotics and Systems 23, no. 3 (2017): 213–21. http://dx.doi.org/10.5302/j.icros.2017.16.8009.

50

Xin, Wei, Toshihiro Konma, Kunio Kondo, Tetsuya Shimamura, and Kei Tateno. "Wavelet Based Keyframe Extraction Method for Motion Capture Data." Journal of Graphic Science of Japan 41, Supplement 1 (2007): 191–96. http://dx.doi.org/10.5989/jsgs.41.supplement1_191.
