Academic literature on the topic 'Video summary'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video summary.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Video summary"

1

Lessick, Susan. "SURA/ViDe Digital Video Workshop: A Summary." Library Hi Tech News 21, no. 5 (June 2004): 12–13. http://dx.doi.org/10.1108/07419050410546338.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fujita, H. "Summary of Video-symposium 2." Nihon Kikan Shokudoka Gakkai Kaiho 58, no. 2 (2007): 169–70. http://dx.doi.org/10.2468/jbes.58.169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Zhu, G. M. Schuster, A. K. Katsaggelos, and B. Gandhi. "Rate-distortion optimal video summary generation." IEEE Transactions on Image Processing 14, no. 10 (October 2005): 1550–60. http://dx.doi.org/10.1109/tip.2005.854477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Priya, G. G. Lakshmi, and S. Domnic. "Medical Video Summarization using Central Tendency-Based Shot Boundary Detection." International Journal of Computer Vision and Image Processing 3, no. 1 (January 2013): 55–65. http://dx.doi.org/10.4018/ijcvip.2013010105.

Full text
Abstract:
Due to advances in multimedia technologies and the widespread use of the internet, the availability of video data is increasing rapidly. In particular, enormous collections of medical videos are available, with applications in areas such as medical imaging, diagnostics, training of medical professionals, medical research, and education. Because so much information is available in video form, efficient and automatic techniques are needed to manage, analyse, index, access, and retrieve the information from the repository. The aim of this paper is to extract good visual content representatives: a summary of keyframes. To achieve this, the authors propose a new method for video shot segmentation, which in turn leads to the extraction of better keyframes as summary representatives. The proposed method is evaluated on publicly available medical videos and obtains better precision and recall for shot detection than recent related methods. The video summary is evaluated using the fidelity measure and the compression ratio.
APA, Harvard, Vancouver, ISO, and other styles
5

Yoon, Ui-Nyoung, Myung-Duk Hong, and Geun-Sik Jo. "Interp-SUM: Unsupervised Video Summarization with Piecewise Linear Interpolation." Sensors 21, no. 13 (July 2, 2021): 4562. http://dx.doi.org/10.3390/s21134562.

Full text
Abstract:
This paper addresses the problem of unsupervised video summarization. Video summarization helps people browse large-scale videos easily with a summary built from selected frames of the video. In this paper, we propose an unsupervised video summarization method with piecewise linear interpolation (Interp-SUM). Our method aims to improve summarization performance and generate a natural sequence of keyframes by predicting the importance score of each frame using the interpolation method. To train the video summarization network, we exploit a reinforcement learning-based framework with an explicit reward function, employing the objective function of the under-appreciated reward exploration method for efficient training. In addition, we present a modified reconstruction loss to promote the representativeness of the summary. We evaluate the proposed method on two datasets, SumMe and TVSum. The experimental results show that Interp-SUM generates a more natural sequence of summary frames than the other state-of-the-art methods, while remaining comparable in performance to the state-of-the-art unsupervised video summarization methods, as shown and analyzed in the experiments of this paper.
APA, Harvard, Vancouver, ISO, and other styles
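The piecewise linear interpolation idea described in the abstract above, expanding importance scores predicted at a handful of anchor frames into a smooth per-frame score curve, can be sketched in a few lines. This is an illustrative sketch of the interpolation step only, not the authors' implementation; the anchor positions and scores are made-up values.

```python
import numpy as np

def interpolate_importance(anchor_idx, anchor_scores, n_frames):
    """Expand importance scores predicted at sparse anchor frames to a
    per-frame score curve via piecewise linear interpolation."""
    frames = np.arange(n_frames)
    return np.interp(frames, anchor_idx, anchor_scores)

# Made-up anchors: scores predicted at 4 frames of a 10-frame clip
anchors = np.array([0, 3, 6, 9])
scores = np.array([0.2, 0.9, 0.4, 0.7])
full = interpolate_importance(anchors, scores, 10)  # one score per frame
```

Frames between anchors receive linearly blended scores, which is what yields the "natural sequence" the abstract refers to: neighbouring frames get similar importance values rather than abrupt jumps.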
6

Wang, Dayong, and Shixin Sun. "Summary of research on scalable video coding." Journal of Electronic Measurement and Instrument 2009, no. 8 (December 16, 2009): 78–84. http://dx.doi.org/10.3724/sp.j.1187.2009.08078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ci, Song, Dalei Wu, Yun Ye, Zhu Han, Guan-Ming Su, Haohong Wang, and Hui Tang. "Video summary delivery over cooperative wireless networks." IEEE Wireless Communications 19, no. 2 (April 2012): 80–87. http://dx.doi.org/10.1109/mwc.2012.6189417.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lei, Shaoshuai, Gang Xie, and Gaowei Yan. "A Novel Key-Frame Extraction Approach for Both Video Summary and Video Index." Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/695168.

Full text
Abstract:
Existing key-frame extraction methods are basically video-summary oriented, while the indexing task of key-frames is ignored. This paper presents a novel key-frame extraction approach that serves both video summary and video index. First, a dynamic distance separability algorithm is proposed to divide a shot into subshots based on semantic structure; then appropriate key-frames are extracted in each subshot by SVD decomposition. Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves good semantic structure for semantics-based video indexing and meanwhile produces video summaries consistent with human perception.
APA, Harvard, Vancouver, ISO, and other styles
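The SVD step mentioned in the abstract above can be illustrated with a minimal sketch: project a subshot's frame-feature matrix onto its dominant singular direction and keep the frame with the strongest projection. This is a hedged illustration of the general SVD heuristic, not the paper's exact procedure; `keyframe_by_svd` and its feature matrix are hypothetical.

```python
import numpy as np

def keyframe_by_svd(features):
    """Pick one representative frame from a subshot: centre the
    (n_frames, dim) feature matrix, take its first right-singular
    vector, and return the frame with the largest |projection|."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]            # score along dominant direction
    return int(np.argmax(np.abs(proj)))

# Made-up features: frame 2 dominates the principal direction
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [0.2, 0.0]])
key = keyframe_by_svd(feats)
```

Taking the absolute projection makes the choice independent of the sign ambiguity of singular vectors.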
9

Krauss, John C., Vaibhav Sahai, Matthias Kirch, Diane M. Simeone, and Lawrence An. "Pilot Study of Personalized Video Visit Summaries for Patients With Cancer." JCO Clinical Cancer Informatics, no. 2 (December 2018): 1–8. http://dx.doi.org/10.1200/cci.17.00086.

Full text
Abstract:
Purpose: The treatment of cancer is complex, which can overwhelm patients and lead to poor comprehension and recall of the specifics of the cancer stage, prognosis, and treatment plan. We hypothesized that an oncologist can feasibly record and deliver a custom video summary of the consultation that covers the diagnosis, recommended testing, treatment plan, and follow-up in < 5 minutes. The video summary allows the patient to review and share the most important part of a cancer consultation with family and caregivers.
Methods: At the conclusion of the office visit, oncologists recorded the most important points of the consultation, including the diagnosis and management plan, as a short video summary. Patients were then e-mailed a link to a secure website to view and share the video. Patients and invited guests were asked to respond to an optional survey of 15 multiple-choice and four open-ended questions after viewing the video online.
Results: Three physicians recorded and sent 58 video visit summaries to patients seen in multidisciplinary GI cancer clinics. Forty-one patients logged into the secure site, and 38 viewed their video. Fourteen patients shared their video and invited a total of 46 visitors, of whom 36 viewed the videos. Twenty-six patients completed the survey, with an average overall video satisfaction score of 9 on a scale of 1 to 10, with 10 being most positive.
Conclusion: Video visit summaries provide a personalized education tool that patients and caregivers find highly useful while navigating complex cancer care. We are exploring the incorporation of video visit summaries into the electronic medical record to enhance patient and caregiver understanding of their specific disease and treatment.
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Shih-Nung. "Storyboard-based accurate automatic summary video editing system." Multimedia Tools and Applications 76, no. 18 (November 26, 2016): 18409–23. http://dx.doi.org/10.1007/s11042-016-4160-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Video summary"

1

Chu, Xiaoge. "Retrospection and deliberation : the create [i.e. creative] summary of the high definition video works." Virtual Press, 2005. http://liblink.bsu.edu/uhtbin/catkey/1327290.

Full text
Abstract:
This paper reviews the process of video production that was used to create the creative portion of the thesis project. During this process, I experienced creative art theory, creative methods, and new technology applications. For the production of the thesis, I used a high-definition digital video camera to illustrate the conflict and fusion between the East and West on the level of cultural mythology. The thesis is comprised of five parts: Preface; Statement of the problem; Review of influence; Description of the artworks, including seven subdivisions (Theme of the project; Selection of creative style; Elements of art and cinematography; Project overview; Transposing the concrete into the abstract; Exhibiting understanding of the language of cinema; Creative application of emerging HDV technology); and Conclusion and exhibition statement.
Department of Art
APA, Harvard, Vancouver, ISO, and other styles
2

Brodin, Karolina. "Consuming the commercial break : an ethnographic study of the potential audiences for television advertising." Doctoral thesis, Stockholm : Economic Research Institute, Stockholm School of Economics (EFI), 2007. http://www2.hhs.se/EFI/summary/721.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rouvier, Mickaël. "Structuration de contenus audio-visuel pour le résumé automatique." Thesis, Avignon, 2011. http://www.theses.fr/2011AVIG0192/document.

Full text
Abstract:
In recent years, with the advent of sites such as Youtube, Dailymotion, and Blip TV, the number of videos available on the Internet has increased considerably. The size of these collections and their lack of structure limit content-based access to the data. Automatic summarization is a way to produce syntheses that extract the essential content and present it as concisely as possible. In this work, we focus on extraction-based methods for video summarization, relying on analysis of the audio channel. We address the various scientific problems related to this objective: content extraction, document structuring, the definition and estimation of interest functions, and summary composition algorithms. On each of these aspects, we make concrete proposals that are evaluated. On content extraction, we present a fast spoken-term detection method. Its main novelty is that it relies on building a detector as a function of the search terms. We show that this self-organization strategy improves the robustness of the system, which significantly exceeds that of the classical approach based on automatic speech recognition. We then present an acoustic filtering method based on Gaussian mixture models and factor analysis as recently used in speaker identification. The originality of our contribution lies in the use of factor-analysis decompositions for the supervised estimation of filters operating in the cepstral domain. We then address the structuring of video collections. We show that using different levels of representation and different sources of information makes it possible to characterize the editorial style of a video based mainly on analysis of the audio source, whereas most previous work suggested that the bulk of genre-related information was contained in the image. Another contribution concerns the identification of the type of discourse; we propose low-level models for detecting spontaneous speech that significantly improve the state of the art for this type of approach. The third focus of this work concerns the summary itself. We first try to define what a synthetic view is: is it what characterizes the document globally, or what a user would remember of it (for example, a moving or funny moment)? This question is discussed, and we make concrete proposals for defining interest functions corresponding to three criteria: salience, expressiveness, and significance. We then propose an algorithm for finding the summary of maximal interest, derived from one introduced in previous work and based on integer linear programming.
APA, Harvard, Vancouver, ISO, and other styles
4

Bendraou, Youssef. "Détection des changements de plans et extraction d'images représentatives dans une séquence vidéo." Thesis, Littoral, 2017. http://www.theses.fr/2017DUNK0458/document.

Full text
Abstract:
With the recent advances in multimedia technologies, in conjunction with the rapid increase in the volume of digital video data and the growth of the internet, it has become essential to be able to browse and search through information stored in large multimedia databases. For this purpose, content-based video retrieval (CBVR) has become an active area of research during the last decade. The objective of this thesis is to present applications for temporal video segmentation and video retrieval based on different mathematical models. A shot is considered the elementary unit of a video and is defined as a continuous sequence of frames taken from a single camera, representing an action over time. The transitions that may occur in a video sequence are categorized as abrupt or gradual. In this work, through statistical analysis, we segment a video into its constituent units by identifying the transitions between adjacent shots. The first proposed algorithm aims to detect abrupt shot transitions only, by measuring the similarity between consecutive frames; given that all the values in the vector of distances are positive, it can be modeled by a log-normal distribution. Identifying gradual shot transitions is more difficult than cut detection, since a gradual transition may share characteristics with a dynamic segment involving camera or object motion. In this work, singular value decomposition (SVD) is performed to project features from the spatial domain to the singular space; the resulting features are reduced and more refined, which makes the remaining tasks easier. The proposed system, designed to detect both abrupt and gradual transitions, achieves reliable performance with high detection rates, and its acceptable computational time allows real-time processing. Once a video is partitioned into its elementary units, higher-level applications can be performed, such as key-frame extraction. Selecting representative frames from each shot to form a storyboard is considered a static and local video summarization. In our research, we opted for a global method based on local extraction: using refined CENTRIST features from the singular space, we select representative frames with a modified k-means clustering based on important scenes. This captures pertinent frames without redundancy in the final storyboard.
APA, Harvard, Vancouver, ISO, and other styles
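The clustering step in the abstract above, selecting representative frames by clustering per-frame features with k-means, can be sketched as follows. This is a generic k-means illustration rather than the thesis's modified variant; the function name and the toy feature vectors are made up.

```python
import numpy as np

def kmeans_keyframes(features, k, iters=20, seed=0):
    """Cluster per-frame features with plain k-means and return, for
    each cluster, the index of the frame nearest its centroid."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centroids[None, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):           # skip empty clusters
                centroids[c] = features[labels == c].mean(axis=0)
    d = np.linalg.norm(features[:, None] - centroids[None, :], axis=2)
    labels = d.argmin(axis=1)
    keyframes = []
    for c in range(k):
        members = np.where(labels == c)[0]
        if members.size:
            keyframes.append(int(members[d[members, c].argmin()]))
    return sorted(keyframes)

# Made-up 1-D features forming two clearly separated scenes
feats = np.array([[0.0], [0.2], [0.1], [9.0], [9.2], [9.1]])
keys = kmeans_keyframes(feats, k=2)
```

Returning the frame nearest each centroid, rather than the centroid itself, guarantees every storyboard entry is an actual frame of the video.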
5

Li, Shuang, and Alvin S. Lim. "Improving throughput of video streaming in wireless sensor networks." Auburn, Ala, 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SUMMER/Computer_Science_and_Software_Engineering/Thesis/Li_Shuang_55.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Worley, Benjamin James. "Information: Moving forward with New Media through Experiments in Digital and Video Art." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/art_design_theses/39.

Full text
Abstract:
My art is an experimental exploration of new media using images and sounds, combined with technology to communicate messages both random and intentional. This thesis will document a contemporary method of creating art with computers, which results in disorganized images from the unique point of view of a dyslexic artist. This study will explain how art is randomized information and explain the didactic processes of my art. The concept of the work is to present old media in a new context and show how information is accumulated into a new understanding. Historically, my art builds on the Dadaist movement. Humor, excess, and performance are essential in my art because they connect to the audience. My library of videos comes from a society saturated with images, sound, and an avalanche of information. I have used art to process and create approximately 40,000 pieces that will be used in this work.
APA, Harvard, Vancouver, ISO, and other styles
7

Tang, Hoang T. "Minsi Trails Council Boy Scouts of America camping video and how can a summer camp experience contribute to a scout's emotional growth and self-identity." 1992. http://www.kutztown.edu/library/services/remote_access.asp (remote access available to Kutztown University faculty, staff, and students only).

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Makrinos, George Adam. "Drawing Music, Playing Architecture." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33890.

Full text
Abstract:
Architecture and music share intrinsic meanings generated by a constant stream of metaphors, which are forms of poetic transformation. This thesis sought to challenge the present way an architect-musician makes drawings through the exploration of the multimedia possibilities at hand. The drawings are composed using Macromedia Flash MX.
Master of Architecture
APA, Harvard, Vancouver, ISO, and other styles
9

Chiang, Chih-Chuan (江志釧). "The Study of Video Segmentation and Summary in News Video." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/52502394436779655974.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Computer Science and Information Engineering, 2000 (ROC year 88).
In order to accomplish segmentation and video summarization in our News Video Browser System (NVBS), we propose in this paper a robust scene change detection algorithm and two different video summary methods: static summary and dynamic summary. To detect scene change points, we compute a histogram distance and a pixel distance for each frame pair to select possible scene change points, and then use a local-maximum test to eliminate false positives. For each shot, we find the caption picture, flashlight picture, and close-up picture to serve as keyframes. For the dynamic summary, we select the high-motion shots, static shots, flashlight shots, and close-up shots; after some adjustment, we produce the dynamic summary.
APA, Harvard, Vancouver, ISO, and other styles
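The histogram-distance and local-maximum test described in the abstract above can be sketched as follows. The bin count and threshold are assumed parameters, and the function illustrates the general idea rather than the thesis's algorithm (which also combines a pixel distance).

```python
import numpy as np

def detect_cuts(frames, bins=16, thresh=0.3):
    """Candidate scene changes: L1 distance between normalized
    grey-level histograms of consecutive frames, kept only where the
    distance is a local maximum above the threshold."""
    hists = [np.histogram(f, bins=bins, range=(0, 256))[0] / f.size
             for f in frames]
    d = [np.abs(h2 - h1).sum() for h1, h2 in zip(hists, hists[1:])]
    return [i + 1 for i in range(len(d))           # index of the new shot's first frame
            if d[i] > thresh
            and (i == 0 or d[i] >= d[i - 1])
            and (i == len(d) - 1 or d[i] >= d[i + 1])]

# Made-up clip: three dark frames followed by three bright frames
clip = [np.zeros((4, 4))] * 3 + [np.full((4, 4), 200.0)] * 3
cuts = detect_cuts(clip)
```

The local-maximum condition is what suppresses false positives: a gradual brightness drift raises several consecutive distances, but only the peak survives.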
10

Chou, Chih-Wei (周智偉). "Video summary based on rate-distortion criterion." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/3549ta.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Department of Computer Science and Engineering, 2008 (ROC year 96).
Due to advances in computer technology, video data are becoming ubiquitous in daily life. Methods for managing multimedia video databases are therefore increasingly important, and traditional database management designed for text documents is not suitable for video databases; an efficient video database must therefore provide video summaries. A video summary contains a number of key-frames; the key-frame is a simple yet effective form of summarizing a video sequence, and the summary helps the user browse rapidly and effectively find the desired video. Besides key-frame extraction, video summarization has another important aspect: the number of key-frames. When storage and network bandwidth are limited, the number of key-frames must conform to those limits while the most representative key-frames are found. The number of key-frames in a summary is related to the distortion between the summary and the original video sequence: the more key-frames, the smaller the distortion. This thesis focuses on key-frame extraction and the key-frame rate. The user first specifies the number of key-frames, and we then extract the key-frames that minimize the distortion with respect to the original video sequence under that limit. To understand the overall video structure, normalized graph cuts (NCuts) clustering is used to group similar video segments; the resulting clusters form a directed temporal graph, and a shortest-path algorithm is proposed to find the main structure of the video. The performance of the proposed method is demonstrated by experiments on a collection of videos from the Open Video Project, and we provide a meaningful comparison between the proposed summaries, the Open Video storyboards, and a PME-based approach.
APA, Harvard, Vancouver, ISO, and other styles
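The rate-distortion trade-off described above, a fixed budget of key-frames chosen to minimize distortion against the original sequence, can be sketched with a simple greedy selector. This is an illustrative sketch only: the thesis itself uses NCuts clustering and a shortest-path algorithm, and `rd_summary` and its toy inputs are hypothetical.

```python
import numpy as np

def rd_summary(features, budget):
    """Rate-constrained summary: with the 'rate' fixed at `budget`
    key-frames, greedily add the frame that most reduces distortion,
    measured as each frame's distance to its nearest key-frame."""
    n = len(features)
    dist = np.linalg.norm(features[:, None] - features[None, :], axis=2)
    # Seed with the frame closest to all others on average
    selected = [int(np.argmin(dist.sum(axis=1)))]
    best = dist[selected[0]].copy()    # distance to nearest key-frame
    for _ in range(budget - 1):
        gains = [(best - np.minimum(best, dist[j])).sum() for j in range(n)]
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.minimum(best, dist[j])
    return sorted(selected), float(best.sum())

# Made-up 1-D features for a 4-frame clip, budget of 2 key-frames
sel, distortion = rd_summary(np.array([[0.0], [10.0], [11.0], [14.0]]), 2)
```

Raising the budget can only decrease the returned distortion, which is exactly the rate-distortion relationship the abstract describes.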

Books on the topic "Video summary"

1

Alberta Education. Video distribution demonstration project: Executive summary, conclusions and recommendations. [Edmonton, Alta.]: Alberta Education, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Washington (State). Dept. of Labor and Industries. Committee for Video Display Terminals. Workplace guidelines for VDTs: A summary of recommendations. Olympia, WA: Dept. of Labor & Industries, Industrial Hygiene Section, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

United Video (Northants) Limited. Cable Television Licence Application Public Summary: North East Northamptonshire and Market Harborough Area. Cheltenham: United Video (Northants) Ltd, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Washington (State). Dept. of Labor & Industries. Committee for Video Display Terminals. Workplace guidelines for VDTs: A summary of recommendations by the Committee for Video Display Terminals. [Olympia, WA]: Washington Dept. of Labor and Industries, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Caron, André H. Systemized summary of Canadian regulations concerning children and the audiovisual industry. Montréal: Centre de recherche en droit public, Université de Montréal, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kinney, Jeff. Gureggu no dame nikki: Āa dōshite kō naru no. Tōkyō: Popurasha, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kinney, Jeff. Diario di una schiappa: Vita da cani. Milano: Il Castoro, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kinney, Jeff. Xiao pi hai ri ji: Diary of a wimpy kid : "Tou gai gu yao huang ji" de xing cun zhe. 2nd ed. Guangzhou: Xin shi ji chu ban she, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kinney, Jeff. Xun ka ri ji: Diary of a wimpy kid : Shi kong de shu jia. [Taibei Shi]: Bo shi tu shu chu ban you xian gong si, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kinney, Jeff. Xiao pi hai ri ji: Cong tian er jiang de ju zhai = : Diary of a wimpy kid. Guangzhou: Xin shi ji chu ban she, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Video summary"

1

Verdult, Vincent. "Summary." In Optimal Audio and Video Reproduction at Home, 312–24. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9780429443800-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reimers, Ulrich. "Digital Television — a First Summary." In Digital Video Broadcasting (DVB), 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-662-04562-6_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lee, Suh-Yin, Shin-Tzer Lee, and Duan-Yu Chen. "Automatic Video Summary and Description." In Lecture Notes in Computer Science, 37–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-40053-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kulshreshth, Arun K., and Joseph J. LaViola. "Summary and Conclusion." In Designing Immersive Video Games Using 3DUI Technologies, 111–14. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77953-9_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Parekh, Ranjan. "Function Summary." In Fundamentals of IMAGE, AUDIO, and VIDEO PROCESSING Using MATLAB®, 371–80. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003019718-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Guo, Ju, and C. C. Jay Kuo. "Summary and Future Work." In Semantic Video Object Segmentation for Content-Based Multimedia Applications, 95–97. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-1503-6_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ouyang, Jian-quan, Jin-tao Li, and Yong-dong Zhang. "Ontology Based Sports Video Annotation and Summary." In Content Computing, 499–508. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30483-8_62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Shih-Nung. "Storyboard-Based Automatic Summary Video Editing System." In Lecture Notes in Electrical Engineering, 733–45. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3187-8_69.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Banwaskar, M. R., and A. M. Rajurkar. "Creating Video Summary Using Speeded Up Robust Features." In Applied Computer Vision and Image Processing, 308–17. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4029-5_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jangra, Anubhav, Adam Jatowt, Mohammad Hasanuzzaman, and Sriparna Saha. "Text-Image-Video Summary Generation Using Joint Integer Linear Programming." In Lecture Notes in Computer Science, 190–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45442-5_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Video summary"

1

Whatley, Janice, and Amrey Ahmad. "Using Video to Record Summary Lectures to Aid Students’ Revision." In InSITE 2007: Informing Science + IT Education Conference. Informing Science Institute, 2007. http://dx.doi.org/10.28945/3180.

Full text
Abstract:
Video as a tool for teaching and learning in higher education is a multimedia application with considerable promise. Including video within the online support material for a module can help students to gain an understanding of the material and prepare for assessment. We have experimented with using short videos that summarise the lectures given as an aid for students to use when revising. An interpretive method has been adopted to investigate the use students make of these videos, both during the teaching term and when revising for assessment. In this paper we give a summary of the ways video can be used to support teaching and learning, present the ways in which we used video, and then discuss some issues relating to producing summary-length videos. Preliminary research indicates that students find these summary lectures very useful for reviewing lecture material as well as for their revision.
APA, Harvard, Vancouver, ISO, and other styles
2

Müller, Alexander, Mathias Lux, and Laszlo Böszörmenyi. "The video summary GWAP." In the 12th International Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2362456.2362476.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"Program summary." In Proceedings of 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, 2004. IEEE, 2004. http://dx.doi.org/10.1109/isimp.2004.1433979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Baopu, Max Q. H. Meng, and Qian Zhao. "Wireless Capsule endoscopy video summary." In 2010 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2010. http://dx.doi.org/10.1109/robio.2010.5723369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fu, Zhenyong, Hongtao Lu, Nan Deng, and Nengbin Cai. "Four-level video summary coding." In 2010 3rd International Congress on Image and Signal Processing (CISP). IEEE, 2010. http://dx.doi.org/10.1109/cisp.2010.5648288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

"CSCW '98 video program (summary)." In Proceedings of the 1998 ACM Conference. New York, New York, USA: ACM Press, 1998. http://dx.doi.org/10.1145/289444.289521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zheng, Jiping, and Ganfeng Lu. "k-SDPP: Fixed-Size Video Summarization via Sequential Determinantal Point Processes." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/108.

Full text
Abstract:
With the explosive growth of video data, video summarization, which converts long videos into key-frame sequences, has become an important task in information retrieval and machine learning. Determinantal point processes (DPPs), which are elegant probabilistic models, have been successfully applied to video summarization. However, existing DPP-based video summarization methods either output a summary of a specified size inefficiently or neglect the inherent sequential nature of videos. In this paper, we propose a new model in the DPP lineage, named k-SDPP, in the vein of sequential determinantal point processes but with a fixed, user-specified size k. Our k-SDPP partitions the sampled frames of a video into segments, each containing a constant number of frames. Moreover, an efficient branch and bound (BB) method that respects the sequential nature of the frames is provided to optimally select the k frames representing the summary from the divided segments. Experimental results show that our proposed BB method outperforms not only k-DPP and sequential DPP (seqDPP) but also partition- and Markovian-assumption-based methods.
APA, Harvard, Vancouver, ISO, and other styles
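The DPP-based selection idea in the abstract above can be illustrated with a short sketch. This is a simplified, exhaustive stand-in for the paper's branch-and-bound: it assumes frame features are given as plain vectors, scores each candidate k-subset by the determinant of its Gram (similarity) matrix — the DPP diversity measure — and returns the most diverse subset. All names here (`gram_det`, `select_k_frames`) are illustrative, not from the paper.

```python
from itertools import combinations

def gram_det(vectors, idxs):
    """Determinant of the Gram matrix of the chosen frames.
    In DPP terms, a larger determinant means a more diverse selection."""
    m = [[sum(a * b for a, b in zip(vectors[i], vectors[j])) for j in idxs]
         for i in idxs]
    # Gaussian elimination with partial pivoting on the small matrix.
    n = len(m)
    det = 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        if abs(m[piv][c]) < 1e-12:
            return 0.0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det
        det *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for j in range(c, n):
                m[r][j] -= f * m[c][j]
    return det

def select_k_frames(features, k):
    """Pick the k frames whose Gram determinant is largest.
    Exhaustive search; the paper replaces this with branch and bound
    over segment partitions for efficiency."""
    best = max(combinations(range(len(features)), k),
               key=lambda s: gram_det(features, s))
    return list(best)

# Three frames: the first two are near-duplicates, the third is distinct,
# so a diversity-maximising summary of size 2 keeps frames 0 and 2.
print(select_k_frames([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]], 2))
```

The exhaustive search is exponential in k, which is exactly the inefficiency the paper's segment partitioning and branch-and-bound pruning are designed to avoid.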
8

Demir, Mahmut, and H. Isil Bozma. "Video Summarization via Segments Summary Graphs." In 2015 IEEE International Conference on Computer Vision Workshop (ICCVW). IEEE, 2015. http://dx.doi.org/10.1109/iccvw.2015.140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lin, Zhijie, Xiaohua Li, and Zhijun Sun. "The Summary of Video Denoising Method." In 2015 International Symposium on Computers and Informatics. Paris, France: Atlantis Press, 2015. http://dx.doi.org/10.2991/isci-15.2015.247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sull, Sanghoon, Jung-Rim Kim, Yunam Kim, Hyun S. Chang, and Sang U. Lee. "Scalable hierarchical video summary and search." In Photonics West 2001 - Electronic Imaging, edited by Minerva M. Yeung, Chung-Sheng Li, and Rainer W. Lienhart. SPIE, 2001. http://dx.doi.org/10.1117/12.410967.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Video summary"

1

Monroe, M. C., and C. F. Young. Summary evaluation of the video, "Transportation of radioactive and hazardous materials: Safety for all concerned". Office of Scientific and Technical Information (OSTI), July 1993. http://dx.doi.org/10.2172/10181143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Morris, Julia, Julia Bobiak, Fatima Asad, and Fozia Nur. Report: Accessibility of Health Data in Rural Canada. Spatial Determinants Lab at Carleton University, Department of Health Sciences, February 2021. http://dx.doi.org/10.22215/sdhlab/2020.4.

Full text
Abstract:
To inform the development of an interactive web-based rural health atlas, the Rural Atlas team within the Spatial Determinants Lab at Carleton University, Department of Health Sciences carried out two sets of informal interviews (User Needs Assessment and Tool Development). These interviews were conducted in order to obtain insight from key stakeholders that have been involved in rural health settings, rural health policy or advocacy, or the development of health mapping tools. Interviews took place via video-conferencing software with participants in the spring of 2020. The following report provides a brief summary of the findings of both sets of interviews.
APA, Harvard, Vancouver, ISO, and other styles
3

Jung, Jacob, Stephanie Hertz, and Richard Fischer. Summary of Collaborative Wildlife Protection and Recovery Initiative (CWPRI) conservation workshop : Least Bell’s Vireo. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42102.

Full text
Abstract:
This special report summarizes the regional workshop held 24–26 April 2018 at the US Fish and Wildlife Service (USFWS) Ecological Services Office in Carlsbad, California on the importance of collaboration among federal, state, and nongovernmental agencies to facilitate the recovery of threatened and endangered species (TES). This workshop focused primarily on one species, the least Bell’s vireo (LBVI), and how to achieve full recovery and eventual delisting through agency partnerships. A major theme of the workshop was applying the Endangered Species Act (ESA) Section 7(a)(1) conservation planning process as a building block towards recovery of LBVI—as well as other threatened, endangered, and at-risk riparian species within the Southwest. The main objective of this workshop was to assemble an interagency and interdisciplinary group of wildlife biologists and managers to detail how the Section 7(a)(1) conservation planning approach, in consultation with the USFWS, can assist in the recovery of LBVI primarily on federal lands but also other public and private lands. Goals of this workshop were to (1) review Section 7(a)(1); (2) outline LBVI ecosystem processes, life history, threats, and conservation solutions; and (3) develop and organize agency commitments to collaborative conservation practices.
APA, Harvard, Vancouver, ISO, and other styles
4

Bates, C. Richards, Melanie Chocholek, Clive Fox, John Howe, and Neil Jones. Scottish Inshore Fisheries Integrated Data System (SIFIDS): Work package (3) final report development of a novel, automated mechanism for the collection of scallop stock data. Edited by Mark James and Hannah Ladd-Jones. Marine Alliance for Science and Technology for Scotland (MASTS), 2019. http://dx.doi.org/10.15664/10023.23449.

Full text
Abstract:
[Extract from Executive Summary] This project, aimed at the development of a novel, automated mechanism for the collection of scallop stock data, was a sub-part of the Scottish Inshore Fisheries Integrated Data Systems (SIFIDS) project. The project reviewed the state-of-the-art remote sensing (geophysical and camera-based) technologies available from industry and compared these to inexpensive, off-the-shelf equipment. Sea trials were conducted on scallop dredge sites and also hand-dived scallop sites. Data was analysed manually, and tests were conducted with automated processing methods. It was concluded that geophysical acoustic technologies cannot presently detect individual scallops, but the remote sensing technologies can be used for broad-scale habitat mapping of scallop harvest areas. Further, the techniques allow for monitoring these areas in terms of scallop-dredging impact. Camera (video and still) imagery is effective for scallop counts and provides data that compares favourably with diver-based ground-truth information for recording scallop density. Deployment of cameras is possible through inexpensive drop-down camera frames, which it is recommended be deployed on a wide-area basis for further trials. In addition, implementation of a 'citizen science' approach to wide-area recording is suggested to increase the stock assessment across the widest possible variety of seafloor types around Scotland. Armed with such data, a full statistical analysis could be completed and the data used with automated processing routines for future long-term monitoring of stock.
APA, Harvard, Vancouver, ISO, and other styles
