Journal articles on the topic 'Video summary'

Consult the top 50 journal articles for your research on the topic 'Video summary.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lessick, Susan. "SURA/ViDe Digital Video Workshop: A Summary." Library Hi Tech News 21, no. 5 (June 2004): 12–13. http://dx.doi.org/10.1108/07419050410546338.

2

Fujita, H. "Summary of Video-symposium 2." Nihon Kikan Shokudoka Gakkai Kaiho 58, no. 2 (2007): 169–70. http://dx.doi.org/10.2468/jbes.58.169.

3

Li, Zhu, G. M. Schuster, A. K. Katsaggelos, and B. Gandhi. "Rate-distortion optimal video summary generation." IEEE Transactions on Image Processing 14, no. 10 (October 2005): 1550–60. http://dx.doi.org/10.1109/tip.2005.854477.

4

Priya, G. G. Lakshmi, and S. Domnic. "Medical Video Summarization using Central Tendency-Based Shot Boundary Detection." International Journal of Computer Vision and Image Processing 3, no. 1 (January 2013): 55–65. http://dx.doi.org/10.4018/ijcvip.2013010105.

Abstract:
Due to advances in multimedia technologies and the widespread use of internet facilities, there has been a rapid increase in the availability of video data. More specifically, enormous collections of medical videos are available, with applications in various areas such as medical imaging, medical diagnostics, training of medical professionals, medical research, and education. Given this abundance of information in the form of videos, efficient and automatic techniques are needed to manage, analyse, index, access, and retrieve the information from the repository. The aim of this paper is to extract good visual content representatives – a summary of keyframes. To achieve this, the authors propose a new method for video shot segmentation, which in turn leads to the extraction of better keyframes as summary representatives. The proposed method is tested and evaluated on publicly available medical videos. As a result, better precision and recall are obtained for shot detection compared with recent related methods. The video summary is evaluated using a fidelity measure and compression ratio.
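
A minimal sketch of thresholding frame-difference statistics with a measure of central tendency, in the spirit of this abstract (illustrative only, not the authors' algorithm; the histogram feature, the median/MAD rule, and the constant k are assumptions):

```python
import numpy as np

def shot_boundaries(frames, bins=32, k=6.0):
    """Flag a boundary where the histogram difference between consecutive
    frames exceeds a threshold built from central-tendency statistics
    (median plus k median-absolute-deviations of all differences)."""
    hists = [np.histogram(f, bins=bins, range=(0, 256), density=True)[0]
             for f in frames]
    diffs = np.array([np.abs(a - b).sum() for a, b in zip(hists, hists[1:])])
    med = np.median(diffs)                    # central tendency of the diffs
    mad = np.median(np.abs(diffs - med))      # robust spread around it
    thresh = med + k * (mad + 1e-9)
    return [i + 1 for i, d in enumerate(diffs) if d > thresh]

# Synthetic demo: 20 dark frames followed by 20 bright frames.
rng = np.random.default_rng(0)
dark = [rng.integers(0, 80, (64, 64), dtype=np.uint8) for _ in range(20)]
bright = [rng.integers(150, 255, (64, 64), dtype=np.uint8) for _ in range(20)]
print(shot_boundaries(dark + bright))  # expected: [20]
```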
5

Yoon, Ui-Nyoung, Myung-Duk Hong, and Geun-Sik Jo. "Interp-SUM: Unsupervised Video Summarization with Piecewise Linear Interpolation." Sensors 21, no. 13 (July 2, 2021): 4562. http://dx.doi.org/10.3390/s21134562.

Abstract:
This paper addresses the problem of unsupervised video summarization. Video summarization helps people browse large-scale videos easily with a summary built from selected frames of the video. In this paper, we propose an unsupervised video summarization method with piecewise linear interpolation (Interp-SUM). Our method aims to improve summarization performance and to generate a natural sequence of keyframes by predicting the importance score of each frame using the interpolation method. To train the video summarization network, we exploit a reinforcement learning-based framework with an explicit reward function. We employ the objective function of the exploring under-appreciated reward method for efficient training. In addition, we present a modified reconstruction loss to promote the representativeness of the summary. We evaluate the proposed method on two datasets, SumMe and TVSum. The experimental results show that Interp-SUM generates a more natural sequence of summary frames than the other state-of-the-art methods, and that it remains comparable in performance to the state-of-the-art unsupervised video summarization methods, as shown and analyzed in the experiments of this paper.
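
The interpolation step itself is easy to picture. A minimal sketch of piecewise linear interpolation of frame-importance scores (not the authors' network; the anchor positions and scores are made-up stand-ins for model outputs):

```python
import numpy as np

# Piecewise linear interpolation of frame-importance scores: a model
# predicts scores only at sparse anchor frames, and intermediate frames
# receive smoothly interpolated scores, encouraging a natural, temporally
# coherent selection of keyframes.
n_frames = 200
anchors = np.array([0, 40, 80, 120, 160, 199])             # anchor indices
anchor_scores = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.5])   # stand-in outputs

scores = np.interp(np.arange(n_frames), anchors, anchor_scores)

# A summary is then, for example, the top-k frames by interpolated importance.
k = 20
summary = np.sort(np.argsort(scores)[-k:])
print(summary)
```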
6

Wang, Dayong, and Shixin Sun. "Summary of research on scalable video coding." Journal of Electronic Measurement and Instrument 2009, no. 8 (December 16, 2009): 78–84. http://dx.doi.org/10.3724/sp.j.1187.2009.08078.

7

Ci, Song, Dalei Wu, Yun Ye, Zhu Han, Guan-Ming Su, Haohong Wang, and Hui Tang. "Video summary delivery over cooperative wireless networks." IEEE Wireless Communications 19, no. 2 (April 2012): 80–87. http://dx.doi.org/10.1109/mwc.2012.6189417.

8

Lei, Shaoshuai, Gang Xie, and Gaowei Yan. "A Novel Key-Frame Extraction Approach for Both Video Summary and Video Index." Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/695168.

Abstract:
Existing key-frame extraction methods are basically video-summary oriented, while the indexing task of key-frames is ignored. This paper presents a novel key-frame extraction approach which is suitable for both video summary and video index. First, a dynamic distance separability algorithm is proposed to divide a shot into subshots based on semantic structure; then appropriate key-frames are extracted in each subshot by SVD decomposition. Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves good semantic structure for semantics-based video indexing and meanwhile produces video summaries consistent with human perception.
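
A minimal sketch of SVD-based key-frame selection within one subshot (an illustrative reading of the abstract, not the paper's method; the per-frame descriptors are random stand-ins):

```python
import numpy as np

def keyframe_by_svd(features):
    """Pick one representative frame index from a subshot.

    features: (n_frames, d) matrix of per-frame feature vectors. The frame
    whose centred feature projects most strongly onto the first right
    singular vector (the dominant content direction of the subshot) is
    returned as the key frame.
    """
    X = features - features.mean(axis=0)        # centre the subshot
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ vt[0]                            # projection on direction 1
    return int(np.argmax(np.abs(proj)))

rng = np.random.default_rng(1)
subshot = rng.normal(size=(30, 64))             # stand-in frame descriptors
print(keyframe_by_svd(subshot))
```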
9

Krauss, John C., Vaibhav Sahai, Matthias Kirch, Diane M. Simeone, and Lawrence An. "Pilot Study of Personalized Video Visit Summaries for Patients With Cancer." JCO Clinical Cancer Informatics, no. 2 (December 2018): 1–8. http://dx.doi.org/10.1200/cci.17.00086.

Abstract:
Purpose: The treatment of cancer is complex, which can overwhelm patients and lead to poor comprehension and recall of the specifics of the cancer stage, prognosis, and treatment plan. We hypothesized that an oncologist can feasibly record and deliver a custom video summary of the consultation that covers the diagnosis, recommended testing, treatment plan, and follow-up in < 5 minutes. The video summary allows the patient to review and share the most important part of a cancer consultation with family and caregivers. Methods: At the conclusion of the office visit, oncologists recorded the most important points of the consultation, including the diagnosis and management plan, as a short video summary. Patients were then e-mailed a link to a secure Website to view and share the video. Patients and invited guests were asked to respond to an optional survey of 15 multiple-choice and four open-ended questions after viewing the video online. Results: Three physicians recorded and sent 58 video visit summaries to patients seen in multidisciplinary GI cancer clinics. Forty-one patients logged into the secure site, and 38 viewed their video. Fourteen patients shared their video and invited a total of 46 visitors, of whom 36 viewed the videos. Twenty-six patients completed the survey, with an average overall video satisfaction score of 9 on a scale of 1 to 10, with 10 being most positive. Conclusion: Video visit summaries provide a personalized education tool that patients and caregivers find highly useful while navigating complex cancer care. We are exploring the incorporation of video visit summaries into the electronic medical record to enhance patient and caregiver understanding of their specific disease and treatment.
10

Chen, Shih-Nung. "Storyboard-based accurate automatic summary video editing system." Multimedia Tools and Applications 76, no. 18 (November 26, 2016): 18409–23. http://dx.doi.org/10.1007/s11042-016-4160-1.

11

Zhou, Limin. "Research on Summary Highlight Ranking of Sports Video." International Journal of Multimedia and Ubiquitous Engineering 9, no. 12 (December 31, 2014): 25–36. http://dx.doi.org/10.14257/ijmue.2014.9.12.03.

12

Wu, Andrew S., Erica R. Podolsky, Stephanie A. King, and Paul G. Curcillo. "Single Port Access (SPA™) technique: video summary." Surgical Endoscopy 24, no. 6 (December 8, 2009): 1473. http://dx.doi.org/10.1007/s00464-009-0752-4.

13

Yuan, Li, Francis EH Tay, Ping Li, Li Zhou, and Jiashi Feng. "Cycle-SUM: Cycle-Consistent Adversarial LSTM Networks for Unsupervised Video Summarization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9143–50. http://dx.doi.org/10.1609/aaai.v33i01.33019143.

Abstract:
In this paper, we present a novel unsupervised video summarization model that requires no manual annotation. The proposed model, termed Cycle-SUM, adopts a new cycle-consistent adversarial LSTM architecture that can effectively maximize the information preservation and compactness of the summary video. It consists of a frame selector and a cycle-consistent learning-based evaluator. The selector is a bidirectional LSTM network that learns video representations embedding the long-range relationships among video frames. The evaluator defines a learnable information-preserving metric between the original video and the summary video and "supervises" the selector to identify the most informative frames to form the summary video. In particular, the evaluator is composed of two generative adversarial networks (GANs), in which the forward GAN is learned to reconstruct the original video from the summary video while the backward GAN learns to invert the process. The consistency between the outputs of such cycle learning is adopted as the information-preserving metric for video summarization. We demonstrate the close relation between mutual information maximization and this cycle learning procedure. Experiments on two video summarization benchmark datasets validate the state-of-the-art performance and superiority of the Cycle-SUM model over previous baselines.
14

Dahake, R. P., et al. "Face Recognition from Video using Threshold based Clustering." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 1S (April 11, 2021): 272–85. http://dx.doi.org/10.17762/turcomat.v12i1s.1768.

Abstract:
Video processing has gained significant attention due to the rapid growth in video feeds collected from a variety of domains. Face recognition and summary generation are gaining attention within video data processing. Recognition includes face identification from video frames and face authentication, where authentication amounts to labelling the faces. Face recognition strategies used in image processing cannot be applied directly to video processing due to the bulk of data. Video processing techniques also face multiple problems such as pose variation, expression variation, illumination variation, and varying camera angles. Much research has been done on face authentication in terms of accuracy and efficiency improvement. The second important aspect is video summarization, on which very little work has been done owing to its complexity, computational overhead, and lack of appropriate training data. Some existing work analyses celebrity videos to find associations among name nodes or face nodes of a video dataset using a graphical representation, which requires script or dynamic caption details; moreover, since there can be multiple faces of the same person per frame, using K-means clustering for recognition requires the cluster count (the total number of persons in the video) to be known initially. The proposed system performs video face recognition and summary generation. It automatically identifies frontal and profile faces, and similar faces are grouped together using threshold-based fixed-width clustering, which to the best of our knowledge is a novel approach in the face recognition process; only the top-k faces are used for authentication, which improves system efficiency. After face authentication, the occurrence count of each user is extracted and a visual co-occurrence graph is generated as the video summary. The system is tested on a video dataset of multiple persons appearing in different videos; a total of 20 videos containing multiple persons per frame are used for training and testing to evaluate recognition accuracy. Overall, 80% of faces are correctly identified and authenticated from the video.
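
A minimal sketch of the threshold-based fixed-width ("leader") clustering idea, which, unlike K-means, needs no cluster count up front (illustrative only, not the authors' code; the embedding dimension and the radius are assumptions):

```python
import numpy as np

def fixed_width_clusters(embeddings, radius=2.0):
    """Threshold-based (leader) clustering: each face embedding joins the
    first cluster whose leader lies within `radius`, otherwise it seeds a
    new cluster. No cluster count is needed in advance."""
    leaders, members = [], []
    for i, e in enumerate(embeddings):
        for c, leader in enumerate(leaders):
            if np.linalg.norm(e - leader) <= radius:
                members[c].append(i)
                break
        else:
            leaders.append(e.copy())
            members.append([i])
    return members

# Synthetic demo: two "persons", each a tight cloud of 128-d embeddings.
rng = np.random.default_rng(2)
person_a = rng.normal(0.0, 0.05, size=(5, 128))
person_b = rng.normal(1.0, 0.05, size=(5, 128))
print(fixed_width_clusters(np.vstack([person_a, person_b])))
# expected: [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
```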
15

Liu, Dianting, Mei-Ling Shyu, Chao Chen, and Shu-Ching Chen. "Within and Between Shot Information Utilisation in Video Key Frame Extraction." Journal of Information & Knowledge Management 10, no. 03 (September 2011): 247–59. http://dx.doi.org/10.1142/s0219649211002961.

Abstract:
As a consequence of the popularity of family video recorders and the surge of Web 2.0, the increasing amount of video has made the management and integration of the information in videos an urgent and important issue in video retrieval. Key frames, as a high-quality summary of videos, play an important role in the areas of video browsing, searching, categorisation, and indexing. An effective set of key frames should include the major objects and events of the video sequence and should contain minimal content redundancy. In this paper, an innovative key frame extraction method is proposed to select representative key frames for a video. By analysing the differences between frames and utilising the clustering technique, a set of key frame candidates (KFCs) is first selected at the shot level, and then the information within a video shot and between video shots is used to filter the candidate set to generate the final set of key frames. Experimental results on the TRECVID 2007 video dataset have demonstrated the effectiveness of our proposed key frame extraction method in terms of the percentage of extracted key frames and the retrieval precision.
16

Sarikcioglu, Levent, Yesim Senol, Fatos B. Yildirim, and Arzu Hizay. "Correlation of the summary method with learning styles." Advances in Physiology Education 35, no. 3 (September 2011): 290–94. http://dx.doi.org/10.1152/advan.00130.2010.

Abstract:
The summary is the last part of the lesson but one of the most important. We aimed to study the relationship between the preference of the summary method (video demonstration, question-answer, or brief review of slides) and learning styles. A total of 131 students were included in the present study. An inventory was prepared to understand the students' learning styles, and a satisfaction questionnaire was provided to determine the summary method selection. The questionnaire and inventory were collected and analyzed. A comparison of the data revealed that the summary method with video demonstration received the highest score among all the methods tested. Additionally, there were no significant differences between learning styles and summary method with video demonstration. We suggest that such a summary method should be incorporated into neuroanatomy lessons. Since anatomy has a large amount of visual material, we think that it is ideally suited for this summary method.
17

Xiao, Shuwen, Zhou Zhao, Zijian Zhang, Xiaohui Yan, and Min Yang. "Convolutional Hierarchical Attention Network for Query-Focused Video Summarization." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12426–33. http://dx.doi.org/10.1609/aaai.v34i07.6929.

Abstract:
Previous approaches to video summarization mainly concentrate on finding the most diverse and representative visual contents as the video summary, without considering the user's preference. This paper addresses the task of query-focused video summarization, which takes a user's query and a long video as inputs and aims to generate a query-focused video summary. In this paper, we consider the task as a problem of computing similarity between video shots and the query. To this end, we propose a method named Convolutional Hierarchical Attention Network (CHAN), which consists of two parts: a feature encoding network and a query-relevance computing module. In the encoding network, we employ a convolutional network with a local self-attention mechanism and a query-aware global attention mechanism to learn the visual information of each shot. The encoded features are sent to the query-relevance computing module to generate the query-focused video summary. Extensive experiments on the benchmark dataset demonstrate the competitive performance and the effectiveness of our approach.
18

Wu, Dapeng, Yiwei Thomas Hou, Wenwu Zhu, Hung-Ju Lee, Tihao Chiang, Ya-Qin Zhang, and H. J. Chao. "MPEG-4 video transport over the Internet: a summary." IEEE Circuits and Systems Magazine 2, no. 1 (2002): 43–46. http://dx.doi.org/10.1109/mcas.2002.1179709.

19

罗, 红. "Research Summary of Negative Effects of Violent Video Games." Advances in Psychology 06, no. 02 (2016): 188–94. http://dx.doi.org/10.12677/ap.2016.62023.

20

Zhang, Yu, Ju Liu, Xiaoxi Liu, and Xuesong Gao. "Video Summarization Based on Multimodal Features." International Journal of Multimedia Data Engineering and Management 11, no. 4 (October 2020): 60–76. http://dx.doi.org/10.4018/ijmdem.2020100104.

Abstract:
In this manuscript, the authors present a keyshots-based supervised video summarization method, where feature fusion and LSTM networks are used for summarization. The framework can be divided into three parts: 1) the authors formulate video summarization as a sequence-to-sequence problem, which should predict the importance score of video content based on the video feature sequence; 2) by simultaneously considering visual features and textual features, the authors present deep fusion of multimodal features and summarize videos based on a recurrent encoder-decoder architecture with bidirectional LSTM; 3) most importantly, in order to train the supervised video summarization framework, the authors adopt the number of users who decided to select the current video clip in their final video summary as the importance scores and ground truth. Comparisons are performed with the state-of-the-art methods and different variants of FLSum and T-FLSum. The results for F-score and rank correlation coefficients on TVSum and SumMe show the outstanding performance of the method proposed in this manuscript.
21

Yang, Yang, Dingguo Yu, and Chen Yang. "Video transaction algorithm considering FISCO alliance chain and improved trusted computing." PeerJ Computer Science 7 (June 14, 2021): e594. http://dx.doi.org/10.7717/peerj-cs.594.

Abstract:
With the advent of the era of self-media, the demand for video trading is becoming more and more obvious. An alliance blockchain has traceable and tamper-proof transaction records, decentralized transactions, and faster transaction speed than public chains; these features make it suitable as a trading platform. Trusted computing can solve the problem of non-Byzantine attacks at the hardware level. This paper proposes a video transaction algorithm combining the FISCO alliance chain and improved trusted computing. First, an improved trusted computing algorithm is used to prepare a trusted transaction environment. Second, a video summary information extraction algorithm is used to extract summary information that can uniquely identify the video. Finally, based on the video transaction algorithm of the FISCO alliance chain, the video summary information is traded on the chain. Experimental results show that the proposed algorithm is efficient and robust for video transactions. At the same time, the algorithm has low computational requirements and complexity, and can provide technical support for provincial and county financial media centers and relevant media departments.
22

Matthews, Clare E., Paria Yousefi, and Ludmila I. Kuncheva. "Using control charts for on-line video summarisation." MATEC Web of Conferences 277 (2019): 01012. http://dx.doi.org/10.1051/matecconf/201927701012.

Abstract:
Many existing methods for video summarisation are not suitable for on-line applications, where computational and memory constraints mean that feature extraction and frame selection must be simple and efficient. Our proposed method uses RGB moments to represent frames, and a control-chart procedure to identify shots from which keyframes are then selected. The new method produces summaries of higher quality than two state-of-the-art on-line video summarisation methods identified as the best among nine such methods in our previous study. The summary quality is measured against an objective ideal for synthetic data sets, and compared to user-generated summaries of real videos.
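
A minimal sketch of on-line, control-chart-style shot detection over RGB-moment features (an illustrative reading of the abstract, not the authors' method; the warm-up length, the k-sigma rule, and the synthetic frames are assumptions):

```python
import numpy as np

def rgb_moments(frame):
    """Cheap on-line feature: mean and standard deviation per RGB channel."""
    return np.concatenate([frame.mean(axis=(0, 1)), frame.std(axis=(0, 1))])

def online_shot_changes(frames, k=4.0, warmup=5):
    """Shewhart-style control chart on the feature stream: a frame whose
    feature leaves the mean +/- k*sigma band of the current shot starts a
    new shot, and the chart statistics are reset."""
    changes, window = [], []
    for i, f in enumerate(frames):
        x = rgb_moments(f)
        if len(window) >= warmup:
            mu = np.mean(window, axis=0)
            sd = np.std(window, axis=0) + 1e-6
            if np.any(np.abs(x - mu) > k * sd):
                changes.append(i)
                window = []          # start charting the new shot
        window.append(x)
    return changes

rng = np.random.default_rng(3)
dark = list(rng.integers(0, 60, (15, 32, 32, 3)).astype(float))
bright = list(rng.integers(180, 255, (15, 32, 32, 3)).astype(float))
print(online_shot_changes(dark + bright))  # expected: [15]
```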
23

Chen, Li Wei, and Yong Li Gao. "Research and Realization Based on Content Video Retrieval." Applied Mechanics and Materials 55-57 (May 2011): 2163–68. http://dx.doi.org/10.4028/www.scientific.net/amm.55-57.2163.

Abstract:
With the development of multimedia technology and the widespread application of large-scale databases, content-based video retrieval technology has developed rapidly, and many new algorithms have been proposed. This article surveys and summarises these new theories and techniques around a content-based video retrieval demonstration system. It first introduces the essential technologies of content-based video retrieval, including shot boundary detection and segmentation, key-frame selection, feature extraction, similarity matching, and video clustering. It then presents some algorithms for these essential retrieval technologies and summarises the content-based work of recent years.
24

Huang, Jianqiang, Xiaoying Wang, Tengfei Cao, and Rui Wang. "Speaking Video Summary Based on Face Detection in Moving Region." Open Cybernetics & Systemics Journal 8, no. 1 (December 31, 2014): 784–89. http://dx.doi.org/10.2174/1874110x01408010784.

25

Whatley, Janice, and Amrey Ahmad. "Using Video to Record Summary Lectures to Aid Students' Revision." Interdisciplinary Journal of e-Skills and Lifelong Learning 3 (2007): 185–96. http://dx.doi.org/10.28945/393.

26

Wu, Dalei, Song Ci, and Haohong Wang. "Cross-layer optimization for video summary transmission over wireless networks." IEEE Journal on Selected Areas in Communications 25, no. 4 (May 2007): 841–50. http://dx.doi.org/10.1109/jsac.2007.070519.

27

Mishra, Richa. "Face Detection for Video Summary Using Enhancement-Based Fusion Strategy." International Journal of Research in Engineering and Technology 03, no. 15 (May 25, 2014): 69–74. http://dx.doi.org/10.15623/ijret.2014.0315014.

28

Abdulrahman, Tryanti R., and Noni Basalama. "Promoting Students’ Motivation in Learning English Vocabulary through a Collaborative Video Project." Celt: A Journal of Culture, English Language Teaching & Literature 19, no. 1 (July 31, 2019): 107. http://dx.doi.org/10.24167/celt.v19i1.493.

Abstract:
The main objective of this study was to motivate EFL (English as a Foreign Language) students to learn English vocabulary by using a collaborative video project. The study followed a case study methodology to describe how the video project experience can engage students in learning English, provide them an opportunity to participate in tasks, and enrich their vocabulary. Twenty-five EFL students in the Vocabulary Building Course (VBC) participated in this study. The study used three phases of evaluation: the pre-production phase, the production phase and the post-production phase. Data were collected from classroom observations, the video project process and document analyses. A summary of the findings related to the video themes and a narrative analysis of students' videos are presented in this paper. Data analysis showed that students responded differently to their video project assignments and produced different types of collaborative videos with the help of a camcorder and computer applications. A survey was then conducted to collect feedback from participants on their opinions and attitudes regarding the use of the collaborative video project, their learning, and their motivation. Participants in this study expressed positive attitudes and opinions toward their video-project experiences. This study demonstrates that a video project can be a great tool for promoting students' motivation and participation in learning English and enriching their vocabulary, and can be an effective and powerful way to create fun, interactive, and collaborative learning environments.
29

Ghafoor, Humaira A., Ali Javed, Aun Irtaza, Hassan Dawood, Hussain Dawood, and Ameen Banjar. "Egocentric Video Summarization Based on People Interaction Using Deep Learning." Mathematical Problems in Engineering 2018 (November 29, 2018): 1–12. http://dx.doi.org/10.1155/2018/7586417.

Abstract:
The availability of wearable cameras in the consumer market has motivated users to record their daily life activities and post them on social media. This exponential growth of egocentric videos demands automated techniques to effectively summarize first-person video data. Egocentric videos are commonly used to record lifelogs these days due to the availability of low-cost wearable cameras. However, egocentric videos are challenging to process because the placement of the camera results in video that presents a great deal of variation in object appearance, illumination conditions, and movement. This paper presents an egocentric video summarization framework based on detecting important people in the video. The proposed method generates a compact summary of egocentric videos that contains information about the people with whom the camera wearer interacts. Our approach focuses on identifying the interaction of the camera wearer with important people. We have used the AlexNet convolutional neural network to filter the key frames (frames where the camera wearer interacts closely with people), with five convolutional layers, two fully connected hidden layers, and an output layer. The dropout regularization method is used to reduce the overfitting problem in the fully connected layers. Performance of the proposed method is evaluated on the UT Ego standard dataset. Experimental results signify the effectiveness of the proposed method in terms of summarizing egocentric videos.
30

Zhang, Yujia, Michael Kampffmeyer, Xiaoguang Zhao, and Min Tan. "Deep Reinforcement Learning for Query-Conditioned Video Summarization." Applied Sciences 9, no. 4 (February 21, 2019): 750. http://dx.doi.org/10.3390/app9040750.

Abstract:
Query-conditioned video summarization requires (1) finding a diverse set of video shots/frames that are representative of the whole video, and (2) ensuring that the selected shots/frames are related to a given query. It can thus be tailored to different user interests, leading to a better personalized summary, and differs from generic video summarization, which only focuses on video content. Our work targets this query-conditioned video summarization task by first proposing a Mapping Network (MapNet) in order to express how related a shot is to a given query. MapNet helps establish the relation between the two different modalities (video and query), which allows mapping of visual information to the query space. After that, a deep reinforcement learning-based summarization network (SummNet) is developed to provide personalized summaries by integrating relatedness, representativeness and diversity rewards. These rewards jointly guide the agent to select the most representative and diverse video shots that are most related to the user query. Experimental results on a query-conditioned video summarization benchmark demonstrate the effectiveness of our proposed method, indicating the usefulness of the proposed mapping mechanism as well as the reinforcement learning approach.
31

Farouk, Hesham, Kamal ElDahshan, and Amr Abd Elawed Abozeid. "Effective and Efficient Video Summarization Approach for Mobile Devices." International Journal of Interactive Mobile Technologies (iJIM) 10, no. 1 (January 18, 2016): 19. http://dx.doi.org/10.3991/ijim.v10i1.4827.

Abstract:
In the context of mobile computing and multimedia processing, video summarization plays an important role in video browsing, streaming, indexing and storing. In this paper, an effective and efficient video summarization approach for mobile devices is proposed. The goal of this approach is to generate a video summary (static and dynamic) based on a Visual Attention Model (VAM) and a new Fast Directional Motion Intensity Estimation (FDMIE) algorithm for mobile devices. The VAM is based on simulating the Human Vision System (HVS) to extract the salient areas that have the highest attention values from video contents. The evaluation results demonstrate an effectiveness rate of up to 87% with respect to the manually generated summary and the state-of-the-art approaches. Moreover, the efficiency of the proposed approach makes it suitable for online and mobile applications.
32

Tergesen, Cori L., Dristy Gurung, Saraswati Dhungana, Ajay Risal, Prem Basel, Dipesh Tamrakar, Archana Amatya, Lawrence P. Park, and Brandon A. Kohrt. "Impact of Service User Video Presentations on Explicit and Implicit Stigma toward Mental Illness among Medical Students in Nepal: A Randomized Controlled Trial." International Journal of Environmental Research and Public Health 18, no. 4 (February 22, 2021): 2143. http://dx.doi.org/10.3390/ijerph18042143.

Abstract:
This study evaluated the impact of didactic videos and service user testimonial videos on mental illness stigma among medical students. Two randomized controlled trials were conducted in Nepal. Study 1 examined stigma reduction for depression. Study 2 examined depression and psychosis. Participants were Nepali medical students (Study 1: n = 94, Study 2: n = 213) randomized to three conditions: a didactic video based on the mental health Gap Action Programme (mhGAP), a service user video about living with mental illness, or a control condition with no videos. In Study 1, videos only addressed depression. In Study 2, videos addressed depression and psychosis. In Study 1, both didactic and service user videos reduced stigma compared to the control. In Study 2 (depression and psychosis), there were no differences among the three arms. When comparing Study 1 and 2, there was greater stigma reduction in the service user video arm with only depression versus service user videos describing depression and psychosis. In summary, didactic and service user videos were associated with decreased stigma when content addressed only depression. However, no stigma reduction was seen when including depression and psychosis. This calls for considering different strategies to address stigma based on types of mental illnesses. ClinicalTrials.gov identifier: NCT03231761.
33

Paradela de la Morena, Marina, Mercedes De La Torre Bravos, Ricardo Fernandez Prado, Anna Minasyan, Alejandro Garcia-Perez, Luis Fernandez-Vago, and Diego Gonzalez-Rivas. "Standardized surgical technique for uniportal video-assisted thoracoscopic lobectomy." European Journal of Cardio-Thoracic Surgery 58, Supplement_1 (May 25, 2020): i23—i33. http://dx.doi.org/10.1093/ejcts/ezaa110.

Abstract:
Uniportal video-assisted thoracoscopic surgery may be the approach for any thoracic procedure, from minor resections to complex reconstructive surgery. However, anatomical lobectomy represents its most common and clinically proven usage. A wide variety of information about uniportal video-assisted thoracoscopic lobectomies can be found in the literature and multimedia sources. This article focuses on updating the surgical technique and includes important aspects such as the geometric approach, anaesthesia considerations, operating room set-up, tips about the incision, instrumentation management and the operative technique to perform the 5 lobectomies. The following issues are explained for each lobectomy: anatomical considerations, surgical steps and technical advice. Medical illustrations and videos are included to clarify the text with the goal of describing a standard surgical practice.
34

Shao, Jian, Dongming Jiang, Mengru Wang, Hong Chen, and Lu Yao. "Multi-video summarization using complex graph clustering and mining." Computer Science and Information Systems 7, no. 1 (2010): 85–98. http://dx.doi.org/10.2298/csis1001085s.

Abstract:
Multi-video summarization is a great theoretical and technical challenge due to the wider diversity of topics in multi-video than single-video, as well as the multi-modality nature of multi-video over multi-document. In this paper, we propose an approach to analyze both visual and textual features across a set of videos and to create a so-called circular storyboard composed of topic-representative keyframes and keywords. We formulate the generation of the circular storyboard as a problem of complex graph clustering and mining, in which each separated shot from visual data and each extracted keyword from speech transcripts are first structured into a complex graph and grouped into clusters; hidden topics in the representative keyframes and keywords are then mined from the clustered complex graph while maximizing the coverage of the summary over the original video set. We also design experiments to evaluate the effectiveness of our approach, and the proposed approach shows better performance than two other storyboard baselines.
35

Yun, Jae-Ung, Hyung-Jin Lee, Anjan Kumar Paul, and Joong-Hwan Baek. "Face detection for video summary using illumination-compensation and morphological processing." Pattern Recognition Letters 30, no. 9 (July 2009): 856–60. http://dx.doi.org/10.1016/j.patrec.2009.04.010.

36

Welling, J., A. Roennow, M. Sauvé, E. Brown, I. Galetti, A. Gonzalez, A. P. Portales Guiraud, et al. "PARE0009 COMMUNITY ADVISORY BOARD INPUT CAN MAKE LAY SUMMARIES OF CLINICAL TRIAL RESULTS MORE UNDERSTANDABLE." Annals of the Rheumatic Diseases 79, Suppl 1 (June 2020): 1290.2–1291. http://dx.doi.org/10.1136/annrheumdis-2020-eular.4340.

Abstract:
Background: Under European Union (EU) Clinical Trial regulations [1], clinical research sponsors (CRSs) must ensure all studies performed in the EU are accompanied by a trial summary for laypersons, published within 1 year of study completion. These lay summaries should disseminate clinical trial results in an easy-to-understand way for trial participants, patient and caregiver communities, and the general public. The European Patients Forum (EPF) [2] and European Patients' Academy on Therapeutic Innovation (EUPATI) [3] encourage CRSs to engage with patient organisations (POs) in the development of lay summaries. This recognises the patients' contribution to clinical research and supports the development of patient-focused material. Objectives: We share learnings from a collaboration between scleroderma POs and a CRS to create the SENSCIS® trial (NCT02597933) written and video lay summaries. Methods: A community advisory board (CAB), comprising representatives from 11 scleroderma POs covering a range of countries/regions, was formed based on the EURORDIS charter for collaboration in clinical research [4]. Through three structured meetings over a seven-month period, the CAB provided advice on lay summary materials (written and video) drafted by the CRS' Lay Summary Group (Fig. 1). At each review cycle, the CAB advice was addressed to make content more understandable and more relevant for patients and the general public. Results: The CAB advised that the existence of lay summaries is not well known in the patient community and also recommended the development of trial-specific lay summary videos to further improve understandability of the clinical trial results for the general public. Videos are a key channel of communication, enabling access to information for people with specific health needs and lower literacy levels. Following CAB advice, the CRS developed a stand-alone video entitled "What are lay summaries?" and a trial-specific lay summary video. Revisions to lay summary content (written and video) included colour schemes, iconography and language changes to make content more understandable. For videos, adjustments to animation speed, script and voiceover were implemented to improve clarity and flow of information (Fig. 2). Approved final versions of lay summary materials are publicly available on the CRS website. Translation into languages representing trial-site countries is in progress to widen access to non-English speakers and, where possible, local versions are being reviewed by the patient community. Conclusion: Structured collection and implementation of CAB advice can make lay summary materials more understandable for the patient community and the wider general public. References: [1] EU. Summaries of clinical trial results for laypersons. 2018. [2] EPF. EPF position: clinical trial results – communication of the lay summary. 2015. [3] EUPATI. Guidance for patient involvement in ethical review of clinical trials. 2018. [4] EURORDIS. Charter for Collaboration in Clinical Research in Rare Diseases. 2009. Disclosure of Interests: Joep Welling Speakers bureau: four times as a patient advocate for employees of BII and BI MIDI with a fixed amount of € 150,00 per occasion., Annelise Roennow: None declared, Maureen Sauvé Grant/research support from: educational grants from Boehringer Ingelheim and Janssen., Edith Brown: None declared, Ilaria Galetti: None declared, Alex Gonzalez Consultant of: payment made to the patient organisation (Scleroderma Research Foundation) for participation in advisory boards, Alexandra Paula Portales Guiraud: None declared, Ann Kennedy Grant/research support from: AS FESCA aisbl, Catarina Leite: None declared, Robert J. Riggs: None declared, Alison Zheng Grant/research support from: grants from Lorem Vascular, BI China, Jianke Pharmaceutical Co., Ltd., Kangjing Biological Co., Ltd., and COFCO Coca-Cola to organize national scleroderma meetings, offer patient services, and hold academic meetings and other public activities; a small part of the grants is also used to pay the workers in our organization., Consultant of: worked as a paid consultant for BI, pay-per-job., Speakers bureau: invited once to be a speaker at BI China's internal meeting and paid for it., Matea Perkovic Popovic: None declared, Annie Gilbert Consultant of: has worked as a paid consultant with BI International for over 3 years, since Sept 2016., Lizette Moros Employee of: Lizette Moros is an employee of Boehringer Ingelheim, Kamila Sroka-Saidi Employee of: paid employee of Boehringer Ingelheim., Thomas Schindler Employee of: employee of Boehringer Ingelheim Pharma, Henrik Finnern Employee of: paid employee of Boehringer Ingelheim.
37

Singh, Sanjay, Srinivasa Murali Dunga, AS Mandal, Chandra Shekhar, and Santanu Chaudhury. "FPGA Based Embedded Implementation of Video Summary Generation Scheme in Smart Camera." Advanced Materials Research 403-408 (November 2011): 516–21. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.516.

Abstract:
In any remote surveillance scenario, smart cameras have to take intelligent decisions to generate summary frames to minimize communication and processing overhead. Video summary generation, in the context of a smart camera, is the process of merging the information from multiple frames. A summary generation scheme based on a clustering-based change detection algorithm has been implemented in our smart camera system for generating frames that deliver the requisite information. In this paper we propose an embedded-platform-based framework for implementing the summary generation scheme using a HW-SW co-design based methodology. The complete system is implemented on a Xilinx XUP Virtex-II Pro FPGA board. The overall algorithm runs on a PowerPC405, and some of the blocks that are computationally intensive and frequently called are implemented in hardware using VHDL. The system is designed using the Xilinx Embedded Development Kit (EDK).
38

Li, WenLin, DeYu Qi, ChangJian Zhang, Jing Guo, and JiaJun Yao. "Video Summarization Based on Mutual Information and Entropy Sliding Window Method." Entropy 22, no. 11 (November 12, 2020): 1285. http://dx.doi.org/10.3390/e22111285.

Abstract:
This paper proposes a video summarization algorithm called the Mutual Information and Entropy based adaptive Sliding Window (MIESW) method, which is designed specifically for static summaries of gesture videos. Considering that gesture videos usually have uncertain transition postures and unclear movement boundaries or inexplicable frames, we propose a three-step method in which the first step involves browsing a video, the second step applies the MIESW method to select candidate key frames, and the third step removes the most redundant key frames. In detail, the first step is to convert the video into a sequence of frames and adjust the size of the frames. In the second step, a key frame extraction algorithm named MIESW is executed. The inter-frame mutual information value is used as a metric to adaptively adjust the size of the sliding window to group similar content of the video. Then, based on the entropy value of each frame and the average mutual information value of the frame group, a threshold method is applied to optimize the grouping, and the key frames are extracted. In the third step, speeded-up robust features (SURF) analysis is performed to eliminate redundant frames among these candidate key frames. The calculation of precision, recall, and F-measure is optimized from the perspective of practicality and feasibility. Experiments demonstrate that key frames extracted using our method provide high-quality video summaries and largely cover the main content of the gesture video.
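
A minimal sketch of the two measures at the heart of such a method, inter-frame mutual information for grouping and per-frame entropy for key-frame selection (an illustrative reading of the abstract, not the published algorithm; the bin count, the MI threshold, and the synthetic shots are assumptions):

```python
import numpy as np

def entropy(frame, bins=32):
    """Shannon entropy (bits) of a frame's intensity distribution."""
    p, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(f1, f2, bins=32):
    """Mutual information (bits) between co-located pixel intensities."""
    joint, _, _ = np.histogram2d(f1.ravel(), f2.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] *
                  np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

def keyframes(frames, mi_threshold=0.5):
    """Group consecutive frames while inter-frame MI stays high (similar
    content); keep the highest-entropy frame of each group as a key frame."""
    groups, current = [], [0]
    for i in range(1, len(frames)):
        if mutual_information(frames[i - 1], frames[i]) >= mi_threshold:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return [max(g, key=lambda j: entropy(frames[j])) for g in groups]

# Two synthetic "shots": frames within a shot share a base image plus noise.
rng = np.random.default_rng(4)
base1, base2 = rng.integers(0, 256, (2, 48, 48))
shot1 = [np.clip(base1 + rng.integers(-5, 6, base1.shape), 0, 255) for _ in range(6)]
shot2 = [np.clip(base2 + rng.integers(-5, 6, base2.shape), 0, 255) for _ in range(6)]
print(keyframes(shot1 + shot2))  # expected: one key frame per shot
```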
39

Sun, Bei, Wu Sheng Luo, Lie Bo Du, and Qin Lu. "Storage Model Based on Oracle InterMedia for Surveillance Video." Applied Mechanics and Materials 644-650 (September 2014): 3318–21. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.3318.

Abstract:
A video management method based on Oracle interMedia is proposed to make up for the shortcomings of single classification and low search efficiency of file-based storage in traditional video surveillance systems. The solution is designed around a structural description model for video, and proposes a classification scheme that describes a video by its attributes, characteristics, main-frame summary, and video data. Oracle interMedia is adopted to create tables in Oracle and achieve classified storage of videos and their descriptions. Finally, test software is designed in VC. Experimental results show that video can be stored well in Oracle; compared with the traditional file-based storage approach, it offers better classified storage and search efficiency.
40

Koroleva, Nina. "Listening as part of the teaching of simultaneous translation: how to program activities and put them into practice." Cuadernos Iberoamericanos, no. 3 (September 28, 2018): 39–42. http://dx.doi.org/10.46272/2409-3416-2018-3-39-42.

Abstract:
This study proposes some ideas about teaching simultaneous translation from Spanish into Russian, using texts and video speeches delivered in Spanish together with all the multimedia information pertaining to the speaker: on-demand video(s), the country statement, a summary of the statement, and audio files.
41

Krasavina, Yuliya Vitalevna, Ekaterina Petrovna Ponomarenko, Olga Victorovna Zhuykova, and Yuliya Vadimovna Serebryakova. "Adaptation of Video Materials for Teaching Deaf and Hard of Hearing Students." Siberian Pedagogical Journal, no. 1 (March 3, 2020): 101–7. http://dx.doi.org/10.15293/1813-4718.2101.11.

Abstract:
Problem and aim. The paper deals with the problem of adapting educational video materials for teaching deaf and hard-of-hearing students. It aims to identify and justify the theoretical bases for the adaptation of video materials for teaching students with hearing impairment, both during in-class learning and self-study. Methodology. The study was conducted at the Centre for Inclusive Education of Kalashnikov Izhevsk State Technical University; the experiment involved 11 hearing-impaired students majoring in "Mechanical Engineering". The participants were offered short educational socio-cultural videos of equal complexity, where the first video was accompanied by subtitles and the second by a sign language translation. In the first part of the experiment, participants were asked to give a brief summary, in free form, of the material presented in the video. In the second part, participants were asked to answer test questions on the content of the video materials, relating to details of the material presented. Finally, students were asked about their preferences for the dubbing of video materials and the reasons for their choice. Results and discussion. The results obtained in this experiment demonstrate a preference for subtitles when adapting video materials for deaf and hard-of-hearing students. However, when complex abstract concepts appear in the video, subtitles do not make them easier to understand. In this regard, when developing electronic resources that include video materials, the combined use of sign language dubbing and subtitles can be provided.
42

Shen, Bin, Nikil Pancha, Andrew Zhai, and Charles Rosenberg. "Practical Automatic Thumbnail Generation for Short Videos." Electronic Imaging 2021, no. 8 (January 18, 2021): 283–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.8.imawm-283.

Abstract:
With the availability of fast internet and convenient imaging devices such as smartphones, videos have recently become increasingly popular and important content on social media platforms. They are widely adopted for various purposes including, but not limited to, advertisement, education and entertainment. One important problem in understanding videos is thumbnail generation, which involves selecting one or a few images, typically frames, that are representative of the given video. These thumbnails can then be used not only as a summary display for videos, but also for representing them in downstream content models. Thus, thumbnail selection plays an important role in a user's experience when exploring and consuming videos. Due to the large scale of the data, automatic thumbnail generation methods are desired, since it is impossible to manually select thumbnails for all videos. In this paper, we propose a practical thumbnail generation method, designed to select representative and high-quality frames as thumbnails. Specifically, to capture the semantic information of video frames, we leverage embeddings of video frames generated by a state-of-the-art convolutional neural network pretrained in a supervised manner on external image data, using them to find representative frames in a semantic space. To efficiently evaluate the quality of each frame, we train a linear model on top of the embeddings to predict quality instead of computing it from raw pixels. We conduct experiments on real videos and show the proposed algorithm is able to generate relevant and engaging thumbnails.
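
A minimal sketch combining the two ingredients the abstract describes, representativeness in embedding space and a linear quality model over the same embeddings (illustrative only, not the paper's implementation; the embeddings, weights, and mixing coefficient are random stand-ins):

```python
import numpy as np

def pick_thumbnail(embeddings, w, b, alpha=0.5):
    """Score frames by alpha * representativeness + (1 - alpha) * quality.

    Representativeness: cosine similarity of each frame embedding to the
    video's mean embedding. Quality: a linear model (w, b) applied to the
    same embeddings, standing in for a model trained on quality labels.
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    center = e.mean(axis=0)
    center /= np.linalg.norm(center)
    representativeness = e @ center
    quality = embeddings @ w + b
    q = (quality - quality.min()) / (np.ptp(quality) + 1e-9)  # scale to [0, 1]
    return int(np.argmax(alpha * representativeness + (1 - alpha) * q))

rng = np.random.default_rng(5)
emb = rng.normal(size=(120, 256))      # stand-in for CNN frame embeddings
w, b = rng.normal(size=256), 0.0       # stand-in for the trained linear model
print(pick_thumbnail(emb, w, b))
```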
43

Turtle, Beverley, Alison Porter-Armstrong, and May Stinson. "The reliability of the graded Wolf Motor Function Test for stroke." British Journal of Occupational Therapy 83, no. 9 (February 3, 2020): 585–94. http://dx.doi.org/10.1177/0308022620902697.

Abstract:
Introduction: The graded Wolf Motor Function Test assesses upper limb function following stroke. Clinical utility is limited by the requirement to video record for scoring purposes. This study aimed to (a) assess whether video recording is required, through examination of inter-rater reliability and agreement; and (b) assess intra-rater reliability and agreement. Method: A convenience sample of 30 individuals was recruited following stroke. The graded Wolf Motor Function Test was administered within 2 weeks of rehabilitation commencement and at 3 months. Two occupational therapists scored participants through either direct observation or video. Inter- and intra-rater reliability and agreement were examined for item-level and summary scores. Results: Excellent inter-rater reliability (n = 28) was found between scoring through direct observation and by video (intraclass correlation coefficients > 0.9), and excellent intra-rater reliability (n = 21) was found (intraclass correlation coefficients > 0.9) for item-level and summary scores. Low agreement was found between raters at the item level. Adequate agreement was found for total functional ability, with increased measurement error found for total performance time. Conclusion: The graded Wolf Motor Function Test is a reliable measure of upper limb function. Video recording may not be required by therapists. In view of the low agreement, future studies should assess the impact of standardised training.
44

Wu, Li-Fang, Qi Wang, Meng Jian, Yu Qiao, and Bo-Xuan Zhao. "A Comprehensive Review of Group Activity Recognition in Videos." International Journal of Automation and Computing 18, no. 3 (January 11, 2021): 334–50. http://dx.doi.org/10.1007/s11633-020-1258-8.

Abstract:
Human group activity recognition (GAR) has attracted significant attention from computer vision researchers due to its wide practical applications in security surveillance, social role understanding and sports video analysis. In this paper, we give a comprehensive overview of the advances in group activity recognition in videos during the past 20 years. First, we provide a summary and comparison of 11 GAR video datasets in this field. Second, we survey the group activity recognition methods, including those based on handcrafted features and those based on deep learning networks. For better understanding of the pros and cons of these methods, we compare various models from the past to the present. Finally, we outline several challenging issues and possible directions for future research. From this comprehensive literature review, readers can obtain an overview of progress in group activity recognition for future studies.
45

Ranjan, Rajnish K., and Anupam Agrawal. "Video Summary Based on F-Sift, Tamura Textural and Middle Level Semantic Feature." Procedia Computer Science 89 (2016): 870–76. http://dx.doi.org/10.1016/j.procs.2016.06.075.

46

Meessen, J., L. Q. Xu, and B. Macq. "Content browsing and semantic context viewing through JPEG 2000-based scalable video summary." IEE Proceedings - Vision, Image, and Signal Processing 153, no. 3 (2006): 274. http://dx.doi.org/10.1049/ip-vis:20050066.

47

Gatlin, Patrick N., Merhala Thurai, V. N. Bringi, Walter Petersen, David Wolff, Ali Tokay, Lawrence Carey, and Matthew Wingo. "Searching for Large Raindrops: A Global Summary of Two-Dimensional Video Disdrometer Observations." Journal of Applied Meteorology and Climatology 54, no. 5 (May 2015): 1069–89. http://dx.doi.org/10.1175/jamc-d-14-0089.1.

Abstract:
A dataset containing 9637 h of two-dimensional video disdrometer observations consisting of more than 240 million raindrops measured at diverse climatological locations was compiled to help characterize underlying drop size distribution (DSD) assumptions that are essential to make precise retrievals of rainfall using remote sensing platforms. This study concentrates on the tail of the DSD, which largely impacts rainfall retrieval algorithms that utilize radar reflectivity. The maximum raindrop diameter was a median factor of 1.8 larger than the mass-weighted mean diameter and increased with rainfall rate. Only 0.4% of the 1-min DSD spectra were found to contain large raindrops exceeding 5 mm in diameter. Large raindrops were most abundant at the tropical locations, especially in Puerto Rico, and were largely concentrated during the spring, especially at subtropical locations. Giant raindrops exceeding 8 mm in diameter occurred at tropical, subtropical, and high-latitude continental locations. The greatest numbers of giant raindrops were found in the subtropical locations, with the largest being a 9.7-mm raindrop that occurred in northern Oklahoma during the passage of a hail-producing thunderstorm. These results suggest large raindrops are more likely to fall from clouds that contain hail, especially those raindrops exceeding 8 mm in diameter.
48

Ferman, A. M., and A. M. Tekalp. "Two-stage hierarchical video summary extraction to match low-level user browsing preferences." IEEE Transactions on Multimedia 5, no. 2 (June 2003): 244–56. http://dx.doi.org/10.1109/tmm.2003.811617.

49

Cho, Minwoo, Jee Hyun Kim, Hyoun Joong Kong, Kyoung Sup Hong, and Sungwan Kim. "A novel summary report of colonoscopy: timeline visualization providing meaningful colonoscopy video information." International Journal of Colorectal Disease 33, no. 5 (March 8, 2018): 549–59. http://dx.doi.org/10.1007/s00384-018-2980-3.

50

Kupka, F. "Round table discussion of session A: modelling convection and radiative transfer." Proceedings of the International Astronomical Union 2, S239 (August 2006): 64–67. http://dx.doi.org/10.1017/s1743921307000129.
