Academic literature on the topic 'Video content analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video content analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Video content analysis"

1

Liang, Chao, Changsheng Xu, and Hanqing Lu. "Personalized Sports Video Customization Using Content and Context Analysis." International Journal of Digital Multimedia Broadcasting 2010 (2010): 1–20. http://dx.doi.org/10.1155/2010/836357.

Full text
Abstract:
We present an integrated framework on personalized sports video customization, which addresses three research issues: semantic video annotation, personalized video retrieval and summarization, and system adaptation. Sports video annotation serves as the foundation of the video customization system. To acquire detailed description of video content, external web text is adopted to align with the related sports video according to their semantic correspondence. Based on the derived semantic annotation, a user-participant multiconstraint 0/1 Knapsack model is designed to model the personalized video customization, which can unify both video retrieval and summarization with different fusion parameters. As a measure to make the system adaptive to the particular user, a social network based system adaptation algorithm is proposed to learn latent user preference implicitly. Both quantitative and qualitative experiments conducted on twelve broadcast basketball and football videos validate the effectiveness of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
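The 0/1 Knapsack formulation described in the abstract above can be illustrated with a minimal sketch. This is not the authors' code: it solves a single-constraint variant by dynamic programming, and the shot durations and relevance scores are hypothetical stand-ins for the paper's semantic annotations and user preferences.

```python
def summarize(shots, budget):
    """Select a subset of shots maximizing total relevance under a duration
    budget -- a single-constraint 0/1 knapsack solved by dynamic programming.
    `shots` is a list of (duration_seconds, relevance_score) pairs."""
    best = [(0.0, [])] * (budget + 1)  # best[b] = (score, chosen indices) within budget b
    for i, (dur, score) in enumerate(shots):
        new = best[:]
        for b in range(dur, budget + 1):
            cand = best[b - dur][0] + score
            if cand > new[b][0]:
                new[b] = (cand, best[b - dur][1] + [i])
        best = new
    return best[budget]

# Toy shots: (duration, relevance); a 60-second summary budget
shots = [(30, 0.9), (20, 0.4), (50, 0.8), (10, 0.5)]
score, chosen = summarize(shots, budget=60)  # → (1.8, [0, 1, 3])
```

The paper's multiconstraint version would add further budget dimensions (e.g. a limit on the number of events), but the selection principle is the same.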
2

Gad, Gad, Eyad Gad, Korhan Cengiz, Zubair Fadlullah, and Bassem Mokhtar. "Deep Learning-Based Context-Aware Video Content Analysis on IoT Devices." Electronics 11, no. 11 (June 4, 2022): 1785. http://dx.doi.org/10.3390/electronics11111785.

Full text
Abstract:
Integrating machine learning with the Internet of Things (IoT) enables many useful applications. For IoT applications that incorporate video content analysis (VCA), deep learning models are usually used due to their capacity to encode the high-dimensional spatial and temporal representations of videos. However, limited energy and computation resources present a major challenge. Video captioning is one type of VCA that describes a video with a sentence or a set of sentences. This work proposes a deep learning framework for video captioning on IoT devices that can (1) mine large open-domain video-to-text datasets to extract video-caption pairs that belong to a particular domain; (2) preprocess the selected video-caption pairs, including reducing the complexity of the captions' language model to improve performance; and (3) apply one of two proposed deep learning models: a transformer-based model and an LSTM-based model. Hyperparameter tuning is performed to select the best hyperparameters, and the models are evaluated in terms of accuracy and inference time on different platforms. The presented framework generates captions in standard sentence templates to facilitate extracting information in later stages of the analysis. The two developed deep learning models offer a trade-off between accuracy and speed: while the transformer-based model yields a high accuracy of 97%, the LSTM-based model achieves near real-time inference.
APA, Harvard, Vancouver, ISO, and other styles
3

Cui, Limeng, and Lijuan Chu. "YouTube Videos Related to the Fukushima Nuclear Disaster: Content Analysis." JMIR Public Health and Surveillance 7, no. 6 (June 7, 2021): e26481. http://dx.doi.org/10.2196/26481.

Full text
Abstract:
Background: YouTube (Alphabet Incorporated) has become the most popular video-sharing platform in the world. The Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster resulted in public anxiety toward nuclear power and radiation worldwide. YouTube is an important source of information about the FDNPP disaster for the world.
Objective: This study's objectives were to examine the characteristics of YouTube videos related to the FDNPP disaster, analyze the content and comments of the videos with a quantitative method, and determine which features contribute to making a video popular with audiences. This study is the first to examine FDNPP disaster–related videos on YouTube.
Methods: We searched for the term "Fukushima nuclear disaster" on YouTube on November 2, 2019. The first 60 eligible videos in the relevance, upload date, view count, and rating categories were recorded. Videos that were irrelevant, were non-English, had inappropriate words, were machine synthesized, or were <3 minutes long were excluded. In total, 111 videos met the inclusion criteria. Parameters of the videos were recorded, including the number of subscribers, length, the number of days since the video was uploaded, region, video popularity (views, views/day, likes, likes/day, dislikes, dislikes/day, comments, comments/day), the tone of the videos, the top ten comments, affiliation, whether Japanese people participated in the video, whether the video recorder visited Fukushima, whether the video contained theoretical knowledge, and whether the video contained information about the recent situation in Fukushima. Using criteria for content and technical design, two evaluators scored the videos and grouped them into the useful (score: 11-14), slightly useful (score: 6-10), and useless (score: 0-5) categories.
Results: Of the 111 videos, 43 (38.7%) were useful, 43 (38.7%) were slightly useful, and 25 (22.5%) were useless. Useful videos had good visual and aural effects, provided vivid information on the Fukushima disaster, and had a mean score of 12 (SD 0.9). Useful videos had more views per day (P<.001), likes per day (P<.001), and comments per day (P=.02) than useless and slightly useful videos. The popularity of videos had a significant correlation with clear sound (likes/day: P=.001; comments/day: P=.02), vivid information (likes/day: P<.001; comments/day: P=.007), and understandable content (likes/day: P=.001; comments/day: P=.04). There was no significant difference in likes per day (P=.72) or comments per day (P=.11) between negative-tone and neutral- and mixed-tone videos. Videos about the recent situation in Fukushima had more likes and comments per day. Video recorders who had personally visited Fukushima Prefecture had more subscribers and received more views and likes.
Conclusions: The features that possibly made videos popular with the public included video quality, videos made in Fukushima, and information on the recent situation in Fukushima. During risk communication on new forms of media, health institutes should increase publicity and be more approachable to resonate with international audiences.
APA, Harvard, Vancouver, ISO, and other styles
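The scoring bands reported in the study above (useful: 11-14, slightly useful: 6-10, useless: 0-5) translate directly into a small helper. This sketch is for illustration only and is not taken from the paper:

```python
def classify_video(score):
    """Band a 0-14 content/technical-design score into the three
    usefulness categories used in the study."""
    if score >= 11:
        return "useful"
    if score >= 6:
        return "slightly useful"
    return "useless"

labels = [classify_video(s) for s in (12, 7, 3)]
# → ["useful", "slightly useful", "useless"]
```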
4

Thinh, Bui Van, Tran Anh Tuan, Ngo Quoc Viet, and Pham The Bao. "Content based video retrieval system using principal object analysis." Tạp chí Khoa học 14, no. 9 (September 20, 2019): 24. http://dx.doi.org/10.54607/hcmue.js.14.9.291(2017).

Full text
Abstract:
Video retrieval is the problem of searching videos or clips based on content related to an input image or video. Recent approaches face challenges due to the diversity of video types, frame transitions, and camera positions; selecting an appropriate similarity measure for the problem is also an open question. We propose a content-based video retrieval system whose main steps yield good performance. From an input video, we extract keyframes and principal objects using the Segmentation of Aggregating Superpixels (SAS) algorithm. Speeded Up Robust Features (SURF) are then computed on those principal objects, and a bag-of-words model combined with SVM classification is applied to obtain the retrieval result. Our system is evaluated on over 300 diverse videos spanning music, history, movies, sports, natural scenes, and TV programs.
APA, Harvard, Vancouver, ISO, and other styles
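The bag-of-words step in the pipeline above quantizes local descriptors against a visual vocabulary and represents each video by a codeword histogram. A simplified sketch (not the authors' code: the 2-D descriptors and the 2-word codebook are toy stand-ins for SURF features, and histogram intersection stands in for the SVM stage):

```python
def nearest(vec, codebook):
    # index of the closest codeword (squared Euclidean distance)
    return min(range(len(codebook)),
               key=lambda k: sum((v - c) ** 2 for v, c in zip(vec, codebook[k])))

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors against a visual codebook and
    return a normalized bag-of-words histogram."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest(d, codebook)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def intersection(h1, h2):
    # histogram intersection similarity in [0, 1]
    return sum(min(a, b) for a, b in zip(h1, h2))

codebook = [(0.0, 0.0), (1.0, 1.0)]              # toy 2-word vocabulary
query = bow_histogram([(0.1, 0.0), (0.9, 1.1)], codebook)
match = bow_histogram([(0.0, 0.2), (1.0, 0.9)], codebook)
sim = intersection(query, match)                 # → 1.0
```

In practice the codebook would be learned by clustering (e.g. k-means) over descriptors from a training set.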
5

Faeruz, Ratna, Maila D. H. Rahiem, Nur Surayyah Madhubala Abdullah, Dzikri Rahmat Romadhon, Ratna Sari Dewi, Rahmatullah Rahmatullah, and Dede Rosyada. "Child Educational Content on Digital Folklore "Pak Lebai Malang": A Qualitative Content Analysis." Al-Athfal: Jurnal Pendidikan Anak 7, no. 2 (December 22, 2021): 111–22. http://dx.doi.org/10.14421/al-athfal.2021.72-02.

Full text
Abstract:
Purpose – The purpose of this study was to explore child educational content in digital folklore on YouTube that is used to teach young children about science, language, and values. The unit of analysis in this research was the video of Pak Lebai Malang from West Sumatera, Indonesia.
Design/methods/approach – The qualitative content analysis method was used in this study. The content analyzed was digital folklore based on the Minangkabau story Pak Lebai Malang. The process began with downloading the video, creating a transcript, taking notes on the text, language, and context, re-watching the video, comparing and contrasting it with the memos, and eliciting evidence from the video.
Findings – The data revealed the following ways in which digital folklore on YouTube teaches science, language, and values: 1) digital technology illustrates science concepts with simple-to-understand videos; 2) by repeating words and visualizing each spoken word, YouTube videos teach children new vocabulary; 3) the characters' expressions and intonation in the video teach children about social values.
Research implications/limitations – This research could serve as a springboard for future research on the use of digital folklore in early childhood classrooms. It is advised that additional research be conducted to improve the interest, effectiveness, and applicability of digital folklore in the early childhood learning process and to design more effective programs for teaching science, language, and values to young children. The study's drawback is that it analyzes only one video; comparison with other videos might provide a more complete view.
Practical implications – This study informs educators on the potential for using digital folklore to teach science, language, and values. It entails the implementation of more creative strategies in early childhood education. Additionally, the study inspires innovative content creators on YouTube to make their videos more relevant to young children's learning, and parents may discover that something as simple as a YouTube video can be an incredible resource for their child's development.
Originality/value – The study explains child educational content based on local wisdom. The digital form of the Pak Lebai Malang folklore can facilitate accessibility and acceptability.
Paper type – Research paper
APA, Harvard, Vancouver, ISO, and other styles
6

Jacob, Jaimon, M. Sudheep Elayidom, and V. P. Devassia. "Video content analysis and retrieval system using video storytelling and indexing techniques." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 6 (December 1, 2020): 6019. http://dx.doi.org/10.11591/ijece.v10i6.pp6019-6025.

Full text
Abstract:
Videos are often used for communicating ideas, concepts, experiences, and situations because of the significant advances made in video communication technology, and social media platforms have expanded video usage rapidly. At present, a video is recognized using metadata such as its title, description, and thumbnails. There are situations in which a searcher requires only a clip on a specific topic from a long video. This paper proposes a novel methodology for the analysis of video content, using video storytelling and indexing techniques to retrieve the intended clip from a long-duration video. The video storytelling technique is used for video content analysis and to produce a description of the video. The description thus created is used to prepare an index with the wormhole algorithm, guaranteeing the search of a keyword of definite length L within minimum worst-case time. This video index can be used by the video searching algorithm to retrieve the relevant part of the video based on the frequency of the word in a keyword search of the video index. Instead of downloading and transferring a whole video, the user can download or transfer only the necessary clip, which considerably eases the network constraints associated with transferring videos.
APA, Harvard, Vancouver, ISO, and other styles
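The wormhole index itself is not reproduced here; as a rough illustration of the retrieval idea above — keyword lookup over per-clip storytelling descriptions, ranked by word frequency — a plain inverted index can be sketched. The clip IDs and description texts are hypothetical:

```python
from collections import defaultdict

def build_index(clip_descriptions):
    """Map each word to the clips (and in-clip frequency) where it occurs.
    clip_descriptions: {clip_id: storytelling text for that clip}."""
    index = defaultdict(dict)
    for clip_id, text in clip_descriptions.items():
        for word in text.lower().split():
            index[word][clip_id] = index[word].get(clip_id, 0) + 1
    return index

def search(index, keyword):
    # clips ranked by keyword frequency, most relevant first
    hits = index.get(keyword.lower(), {})
    return sorted(hits, key=hits.get, reverse=True)

clips = {
    "00:00-01:30": "a goalkeeper saves a penalty kick",
    "01:30-03:00": "the striker scores a goal the goal is replayed",
}
index = build_index(clips)
result = search(index, "goal")  # → ["01:30-03:00"]
```

The returned clip IDs would then let a client fetch only the matching segment rather than the whole video.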
7

Duan, Yamin. "Analysis Of Competitive Strategy Of Bilibili Content Ecology." BCP Business & Management 34 (December 14, 2022): 865–72. http://dx.doi.org/10.54691/bcpbm.v34i.3106.

Full text
Abstract:
As times have developed, the Internet industry has gradually penetrated all aspects of our daily life, especially in the field of pan-entertainment, where the video industry is a focal point. Long-video websites include iQiyi, Youku, and Tencent, while short-video websites include Douyin and Kuaishou. Bilibili, an especially popular video website in recent years, has attracted many young people as its users. As a video website with both long and short videos, Bilibili mainly focuses on PUGC content but also offers rich self-made content. It faces strong competition from many video websites, but it has successfully captured a considerable part of the market with its unique content ecology and bullet-screen culture. This paper analyzes the external and internal competitive landscape of Bilibili, then analyzes its differentiated competition strategy and competitive advantages, and finally puts forward suggestions and prospects for its future development.
APA, Harvard, Vancouver, ISO, and other styles
8

Eide, Viktor S. Wold, Ole-Christoffer Granmo, Frank Eliassen, and Jørgen Andreas Michaelsen. "Real-time video content analysis." ACM Transactions on Multimedia Computing, Communications, and Applications 2, no. 2 (May 2006): 149–72. http://dx.doi.org/10.1145/1142020.1142024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Overmeire, Luk, Lode Nachtergaele, Fabio Verdicchio, Joeri Barbarien, and Peter Schelkens. "Constant quality video coding using video content analysis." Signal Processing: Image Communication 20, no. 4 (April 2005): 343–69. http://dx.doi.org/10.1016/j.image.2005.01.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pan, Peng, Changhua Yu, Tao Li, Xilei Zhou, Tingting Dai, Hanhan Tian, and Yaozu Xiong. "Xigua Video as a Source of Information on Breast Cancer: Content Analysis." Journal of Medical Internet Research 22, no. 9 (September 29, 2020): e19668. http://dx.doi.org/10.2196/19668.

Full text
Abstract:
Background: Seeking health information on the internet is a popular trend. Xigua Video, a short video platform in China, ranks among the most accessed websites in the country and hosts an increasing number of videos with medical information. However, the nature of these videos is frequently unscientific, misleading, or even harmful.
Objective: Little is known about Xigua Video as a source of information on breast cancer. Thus, the study aimed to investigate the contents, quality, and reliability of breast cancer–related content on Xigua Video.
Methods: On February 4, 2020, a Xigua Video search was performed using the keyword "breast cancer." Videos were categorized by 2 doctors based on whether the video content provided useful or misleading information. Furthermore, the reliability and quality of the videos were assessed using the 5-point DISCERN tool and 5-point global quality score criteria.
Results: Of the 170 videos selected for the study, 64 (37.6%) were classified as useful, whereas 106 (62.4%) provided misleading information. A total of 41.8% of the videos (71/170) were generated by individuals, compared to 19.4% (33/170) contributed by health care professionals. The topics mainly covered etiology, anatomy, symptoms, prevention, treatments, and prognosis; the top topic was "treatments" (119/170, 70%). The reliability scores and global quality scores of the videos in the useful information group were higher (P<.001). No differences were observed between the 2 groups in terms of video length, duration in months, and comments. The number of total views was higher for the misleading information group (819,478.5 vs 647,940) but did not reach statistical significance (P=.112). The uploading sources of the videos were mainly health care professionals, health information websites, medical advertisements, and individuals. Statistical differences were found between the uploading source groups in terms of reliability scores and global quality scores (P<.001). In terms of total views, video length, duration, and comments, no statistical differences were indicated among the said groups. However, a statistical difference was noted between the useful and misleading information video groups with respect to the uploading sources (P<.001).
Conclusions: A large number of Xigua videos pertaining to breast cancer contain misleading information. There is a need for accurate health information to be provided on Xigua Video and other social media; health care professionals should address this challenge.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Video content analysis"

1

Lidén, Jonas. "Distributed Video Content Analysis." Thesis, Umeå universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-99062.

Full text
Abstract:
Video Content Analysis (VCA) is usually computationally intense and time-consuming. In this thesis the efficiency of VCA is increased by implementing a distributed VCA architecture. Automatic speech recognition is used as a case study to evaluate how the efficiency of VCA can be increased by distributing the workload across several machines. The system is to be run on standard desktop computers and needs to support a variety of operating systems. The developed distributed system is compared to a serial system in use today. The results show increased performance at the cost of a small increase in error rate. Two types of load balancing algorithms, static and dynamic, are evaluated in order to increase system throughput. The dynamic algorithm outperforms the static algorithm when running on a heterogeneous set of machines, while the differences are negligible on a homogeneous set of machines.
APA, Harvard, Vancouver, ISO, and other styles
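The dynamic load-balancing idea the thesis evaluates — idle workers pulling the next job from a shared queue, so faster machines automatically process more jobs — can be sketched with threads standing in for machines. This is a simplified illustration, not the thesis implementation; `cost_fn` is a hypothetical stand-in for the speech-recognition task:

```python
import queue
import threading

def run_dynamic(jobs, n_workers, cost_fn):
    """Dynamic load balancing: each idle worker pulls the next job from a
    shared queue, so no static per-worker job assignment is needed."""
    work = queue.Queue()
    for job in jobs:
        work.put(job)
    done, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                job = work.get_nowait()
            except queue.Empty:
                return                      # queue drained: worker exits
            result = cost_fn(job)           # stand-in for the VCA workload
            with lock:
                done.append(result)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done

results = run_dynamic(range(8), n_workers=3, cost_fn=lambda j: j * j)
```

A static scheme would instead partition the eight jobs up front (e.g. jobs 0-2 to worker 1), which is exactly what penalizes heterogeneous machines.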
2

Chan, Stephen Chi Yee. "Video analysis for content-based applications." Thesis, University of Southampton, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fraz, Muhammad. "Video content analysis for intelligent forensics." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/18065.

Full text
Abstract:
The networks of surveillance cameras installed in public places and private territories continuously record video data with the aim of detecting and preventing unlawful activities. This enhances the importance of video content analysis applications, either for real time (i.e. analytic) or post-event (i.e. forensic) analysis. In this thesis, the primary focus is on four key aspects of video content analysis, namely; 1. Moving object detection and recognition, 2. Correction of colours in the video frames and recognition of colours of moving objects, 3. Make and model recognition of vehicles and identification of their type, 4. Detection and recognition of text information in outdoor scenes. To address the first issue, a framework is presented in the first part of the thesis that efficiently detects and recognizes moving objects in videos. The framework targets the problem of object detection in the presence of complex background. The object detection part of the framework relies on background modelling technique and a novel post processing step where the contours of the foreground regions (i.e. moving object) are refined by the classification of edge segments as belonging either to the background or to the foreground region. Further, a novel feature descriptor is devised for the classification of moving objects into humans, vehicles and background. The proposed feature descriptor captures the texture information present in the silhouette of foreground objects. To address the second issue, a framework for the correction and recognition of true colours of objects in videos is presented with novel noise reduction, colour enhancement and colour recognition stages. The colour recognition stage makes use of temporal information to reliably recognize the true colours of moving objects in multiple frames. 
The proposed framework is specifically designed to perform robustly on videos that have poor quality because of surrounding illumination, camera sensor imperfection and artefacts due to high compression. In the third part of the thesis, a framework for vehicle make and model recognition and type identification is presented. As a part of this work, a novel feature representation technique for distinctive representation of vehicle images has emerged. The feature representation technique uses dense feature description and mid-level feature encoding scheme to capture the texture in the frontal view of the vehicles. The proposed method is insensitive to minor in-plane rotation and skew within the image. The capability of the proposed framework can be enhanced to any number of vehicle classes without re-training. Another important contribution of this work is the publication of a comprehensive up to date dataset of vehicle images to support future research in this domain. The problem of text detection and recognition in images is addressed in the last part of the thesis. A novel technique is proposed that exploits the colour information in the image for the identification of text regions. Apart from detection, the colour information is also used to segment characters from the words. The recognition of identified characters is performed using shape features and supervised learning. Finally, a lexicon based alignment procedure is adopted to finalize the recognition of strings present in word images. Extensive experiments have been conducted on benchmark datasets to analyse the performance of proposed algorithms. The results show that the proposed moving object detection and recognition technique superseded well-know baseline techniques. The proposed framework for the correction and recognition of object colours in video frames achieved all the aforementioned goals. 
The performance analysis of the vehicle make and model recognition framework on multiple datasets has shown the strength and reliability of the technique when used within various scenarios. Finally, the experimental results for the text detection and recognition framework on benchmark datasets have revealed the potential of the proposed scheme for accurate detection and recognition of text in the wild.
APA, Harvard, Vancouver, ISO, and other styles
4

von Witting, Daniel. "Annotation and Indexing of Video Content Based on Sentiment Analysis." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-156387.

Full text
Abstract:
Due to scientific advances in mobility and connectivity, digital media can be distributed to multiple platforms by streams and video-on-demand services. The abundance of video productions poses a problem in terms of storage, organization and cataloguing. How movies or TV series should be sorted and retrieved is much dictated by user preferences, motivating proper indexing and annotation of video content. While movies tend to be described by keywords or genre, this thesis constitutes an attempt to automatically index videos based on their semantics. Representing a video by the sentiment it invokes would not only be more descriptive, but could also be used to compare movies directly based on the actual content. Since filmmaking is biased by human perception, this project looks to utilize these characteristics for machine learning. The video is modeled as a sequence of shots, attempting to capture the temporal nature of the information. Sentiment annotations of videos have been used as labels in a supervised learning algorithm, namely an SVM using a string kernel. Besides the specifics of learning, the work of this thesis involves other relevant fields such as feature extraction and video segmentation. The results show that there are patterns in video fit for learning; however, the performance of the method is inconclusive due to lack of data. It would therefore be interesting to evaluate the approach further, using more data along with minor modifications.
Thanks to technical advances in mobility and accessibility, media such as film can be distributed to numerous platforms through streaming and similar services. The enormous supply of TV series and films creates difficulties for how the material should be stored, sorted, and catalogued. Moreover, it is often the users who decide what is relevant in a search, which demonstrates the importance of suitable annotation and indexing. Today, video content is usually described with text, in the form of either genre or keywords. This work is an attempt to automatically index films and series according to their semantic content. Describing the video material by how it is perceived, and the emotions it evokes, gives a more characteristic portrayal, one that describes the actual content in a way better suited to comparisons between two video productions. Since filmmaking adapts to how people perceive video, this study exploits the rules and conventions of the craft to aid machine learning. How a film is perceived, or the emotions it evokes, forms the basis for the learning, as these are used to label the concepts to be classified. A video is represented as a sequence of shots, with the intent of capturing its temporal properties. The method used for this supervised learning is an SVM that can handle string data. Besides the technicalities required to understand the learning, the report covers other relevant areas, e.g. information extraction and video segmentation. The results show that there are patterns in video suitable for learning; due to insufficient data, it is not possible to determine how well the method performs. Further analysis with more data and minor modifications would therefore be of interest.
APA, Harvard, Vancouver, ISO, and other styles
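The thesis above does not specify which string kernel its SVM used; as an illustration of the general idea — measuring similarity between videos encoded as strings of shot labels — a common choice, the p-spectrum kernel, can be sketched. The shot labels 'a'/'b' are hypothetical:

```python
def spectrum_kernel(s, t, p=2):
    """p-spectrum string kernel: counts shared length-p substrings.
    Here each character encodes one shot, so a video is a string of
    shot labels and the kernel compares their local shot patterns."""
    def spectrum(x):
        counts = {}
        for i in range(len(x) - p + 1):
            sub = x[i:i + p]
            counts[sub] = counts.get(sub, 0) + 1
        return counts
    cs, ct = spectrum(s), spectrum(t)
    return sum(c * ct.get(sub, 0) for sub, c in cs.items())

# 'a' = dialogue shot, 'b' = action shot (hypothetical labels)
k = spectrum_kernel("aabab", "abab", p=2)  # → 5
```

Any kernel of this form can be plugged into a standard SVM, since the classifier only needs pairwise similarities, not vector features.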
5

Lindmark, Peter G. "A CONTENT ANALYSIS OF ADVERTISING IN POPULAR VIDEO GAMES." Cleveland State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=csu1326227481.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Horn, Johanna, and Daniel Severus. "Exploring the Trust Generating Factors of Video Tutorials." Thesis, Högskolan i Gävle, Företagsekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-23651.

Full text
Abstract:
New technologies have increased the possible ways in which humans interact and, as a result, require new as well as old ways to establish trust. The findings of this paper suggest that trust should be divided into three main categories of trust drivers: exchange factors, design factors and motivational factors. The results indicate that tutorials can, and should, include drivers that build these categories. While we found varying degrees of how well implemented these were, design factors were generally more prominent, and we found opportunities for tutorials to improve on the exchange side.
APA, Harvard, Vancouver, ISO, and other styles
7

Ren, Jinchang. "Semantic content analysis for effective video segmentation, summarisation and retrieval." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4251.

Full text
Abstract:
This thesis focuses on four main research themes, namely shot boundary detection, fast frame alignment, activity-driven video summarisation, and highlights-based video annotation and retrieval. A number of novel algorithms have been proposed to address these issues, which can be highlighted as follows. Firstly, accurate and robust shot boundary detection is achieved through modelling of cuts into sub-categories and appearance-based modelling of several gradual transitions, along with some novel features extracted from compressed video. Secondly, fast and robust frame alignment is achieved via the proposed subspace phase correlation (SPC) and an improved sub-pixel strategy. The SPC is proved to be insensitive to zero-mean noise, and its gradient-based extension is even robust to non-zero-mean noise and can be used to deal with non-overlapped regions for robust image registration. Thirdly, hierarchical modelling of rush videos using formal language techniques is proposed, which can guide the modelling and removal of several kinds of junk frames as well as adaptive clustering of retakes. With an extracted activity-level measurement, shots and sub-shots are detected for content-adaptive video summarisation. Fourthly, highlights-based video annotation and retrieval is achieved, in which statistical modelling of skin pixel colours, knowledge-based shot detection, and improved determination of camera motion patterns are employed. Within these proposed techniques, one important principle is to integrate various kinds of feature evidence and to incorporate prior knowledge in modelling the given problems. A high-level hierarchical representation is extracted from the original linear structure for effective management and content-based retrieval of video data. As most of the work is implemented in the compressed domain, one additional benefit is high efficiency, which will be useful for many online applications.
APA, Harvard, Vancouver, ISO, and other styles
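The thesis above models cuts and gradual transitions with several features; as a much-simplified illustration of the underlying idea behind cut detection, a histogram-difference detector can be sketched. This is not the thesis's algorithm: the threshold and the 2-bin frame histograms are toy assumptions.

```python
def detect_cuts(frame_histograms, threshold=0.5):
    """Flag a cut between consecutive frames when the L1 distance between
    their normalized intensity histograms exceeds `threshold`."""
    cuts = []
    for i in range(1, len(frame_histograms)):
        d = sum(abs(a - b) for a, b in
                zip(frame_histograms[i - 1], frame_histograms[i]))
        if d > threshold:
            cuts.append(i)  # cut occurs between frame i-1 and frame i
    return cuts

# Four frames with 2-bin histograms: a hard cut between frames 1 and 2
frames = [[0.9, 0.1], [0.85, 0.15], [0.1, 0.9], [0.15, 0.85]]
cuts = detect_cuts(frames, threshold=0.5)  # → [2]
```

Gradual transitions (fades, dissolves) defeat a single threshold like this, which is why the thesis models them separately with appearance-based features.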
8

Song, Yale. "Structured video content analysis : learning spatio-temporal and multimodal structures." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90003.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 141-154).
Video data exhibits a variety of structures: pixels exhibit spatial structure, e.g., objects of the same class share certain shapes and/or colors in images; sequences of frames exhibit temporal structure, e.g., dynamic events such as jumping and running have a certain chronological order of frame occurrence; and when combined with audio and text, there is multimodal structure, e.g., human behavioral data shows correlation between audio (speech) and visual information (gesture). Identifying, formulating, and learning these structured patterns is a fundamental task in video content analysis. This thesis tackles two challenging problems in video content analysis - human action recognition and behavior understanding - and presents novel algorithms to solve each: one algorithm performs sequence classification by learning the spatio-temporal structure of human action; the other performs data fusion by learning the multimodal structure of human behavior. The first algorithm, hierarchical sequence summarization, is a probabilistic graphical model that learns the spatio-temporal structure of human action in a fine-to-coarse manner. It constructs a hierarchical representation of video by iteratively summarizing the video sequence, and uses the representation to learn the spatio-temporal structure of human action, classifying sequences into action categories. We developed an efficient learning method to train our model, and show that its complexity grows only sublinearly with the depth of the hierarchy. The second algorithm focuses on data fusion - the task of combining information from multiple modalities in an effective way. Our approach is motivated by the observation that human behavioral data is modality-wise sparse, i.e., information from just a few modalities contains most of the information needed at any given time.
We perform data fusion using structured sparsity, representing a multimodal signal as a sparse combination of multimodal basis vectors embedded in a hierarchical tree structure, learned directly from the data. The key novelty is a mixed-norm formulation of regularized matrix factorization via structured sparsity. We show the effectiveness of our algorithms on two real-world application scenarios: recognizing aircraft handling signals used by the US Navy, and predicting people's impressions of the personality of public figures from their multimodal behavior. We describe the whole recognition pipeline, from signal acquisition and processing to the interpretation of the processed signals using our algorithms. Experimental results show that our algorithms outperform state-of-the-art methods on human action recognition and behavior understanding.
by Yale Song.
Ph. D.
9

Humienny, Raymond Tyler. "Content Analysis of Video Game Loot Boxes in the Media." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1546434312362585.

10

Wang, Feng. "Video content analysis and its applications for multimedia authoring of presentations /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20WANG.


Books on the topic "Video content analysis"

1

Li, Ying, and C. C. Jay Kuo. Video Content Analysis Using Multimodal Information. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4757-3712-7.

2

Content-based analysis of digital video. Boston, MA: Kluwer Academic Publishers, 2004.

3

Li, Ying. Video content analysis using multimodal information: For movie content extraction, indexing, and representation. Boston, MA: Kluwer Academic Publishers, 2003.

4

Li, Ying. Video Content Analysis Using Multimodal Information: For Movie Content Extraction, Indexing and Representation. Boston, MA: Springer US, 2003.

5

Jay, Kuo C. C., ed. Video content analysis using multimodal information: For movie content extraction, indexing, and representation. Boston, Mass: Kluwer Academic Publishers, 2003.

6

Kompatsiaris, Yiannis, Bernard Merialdo, and Shiguo Lian, eds. TV Content Analysis: Techniques and Applications. Boca Raton, FL: Taylor & Francis, 2012.

7

Coelho, Alessandra Martins. Multimedia Networking and Coding: State-of-the-Art Motion Estimation in the Context of 3D TV. Cyprus: INTECH, 2013.

8

(Korea), Kungnip Pangjae Yŏn'guso. Chinŭnghyŏng yŏngsang chŏngbo insik kisul ŭl iyong han chaenan kwalli kodohwa kibŏp kaebal =: Advancement of disaster management techniques for intelligent video contents analysis. Sŏul T'ŭkpyŏlsi: Kungnip Pangjae Kyoyugwŏn Yŏn'guwŏn, Pangjae Yŏn'guso, 2010.

9

Soriano, Cheryll Ruth, and Earvin Charles Cabalquinto. Philippine Digital Cultures. Amsterdam: Amsterdam University Press, 2022. http://dx.doi.org/10.5117/9789463722445.

Abstract:
Social media platforms have been pivotal in redefining the conduct of contemporary society. Amid the proliferation of a range of new and ubiquitous online platforms, YouTube, a video-based platform, remains a key driver in the democratisation of creative, playful, vernacular, intimate, as well as political expressions. As a critical node of contemporary communication and digital cultures, its steady uptake and appropriation in a social media-savvy nation such as the Philippines requires a critical examination of its role in the continued reconstruction of identities, communities, and broader social institutions. This book closely analyses the diverse content and practices of amateur Filipino YouTubers, exposing and problematising the dynamics of brokering the contested aspirational logics of beauty and selfhood, interracial relationships, world-class labour, and progressive governance in a digital sphere. Ultimately, Philippine Digital Cultures: Brokerage Dynamics on YouTube offers a fresh, compelling, and nuanced account of YouTube as an important site for the mediation of culture, economy, and politics in Philippine postcolonial modernity amid rapid economic globalisation and digitalisation.
10

Hanjalic, Alan. Content-Based Analysis of Digital Video. Springer, 2004.


Book chapters on the topic "Video content analysis"

1

Otsuka, Isao, Sam Shipman, and Ajay Divakaran. "A Video Browsing enabled Personal Video Recorder." In Multimedia Content Analysis, 1–12. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-76569-3_14.

2

Hua, Xian-Sheng, and Hong-Jiang Zhang. "Automatic Home Video Editing." In Multimedia Content Analysis, 1–35. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-76569-3_13.

3

Hauptmann, Alexander. "Video Content Analysis." In Encyclopedia of Database Systems, 3271–76. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_1018.

4

Hauptmann, Alexander. "Video Content Analysis." In Encyclopedia of Database Systems, 1–8. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_1018-2.

5

Hauptmann, Alexander. "Video Content Analysis." In Encyclopedia of Database Systems, 4381–88. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_1018.

6

Zhu, Guangyu, Changsheng Xu, and Qingming Huang. "Sports Video Analysis: From Semantics to Tactics." In Multimedia Content Analysis, 1–44. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-76569-3_11.

7

Wilson, Kevin W., and Ajay Divakaran. "Broadcast Video Content Segmentation by Supervised Learning." In Multimedia Content Analysis, 1–17. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-76569-3_3.

8

Anandathirtha, Paresh, K. R. Ramakrishnan, S. Kumar Raja, and Mohan S. Kankanhalli. "Experiential Sampling for Object Detection in Video." In Multimedia Content Analysis, 1–32. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-76569-3_7.

9

Albanese, Massimiliano, Pavan Turaga, Rama Chellappa, Andrea Pugliese, and V. S. Subrahmanian. "Semantic Video Content Analysis." In Video Search and Mining, 147–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12900-1_6.

10

Li, Ying, and C. C. Jay Kuo. "Video Content Pre-Processing." In Video Content Analysis Using Multimodal Information, 35–67. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4757-3712-7_3.


Conference papers on the topic "Video content analysis"

1

Moreira, Daniel, Siome Goldenstein, and Anderson Rocha. "Sensitive-Video Analysis." In XXX Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2017. http://dx.doi.org/10.5753/ctd.2017.3466.

Abstract:
Sensitive videos that may be inappropriate for some audiences (e.g., pornography and violence, with respect to underage viewers) are constantly being shared over the Internet. Employing humans to filter them is daunting. The huge amount of data and the tediousness of the task call for computer-aided sensitive video analysis, which we tackle in two ways. In the first one (sensitive-video classification), we explore efficient methods to decide whether or not a video contains sensitive material. In the second one (sensitive-content localization), we explore ways to find the moments at which a video starts and ceases to display sensitive content. Hypotheses are stated and validated, leading to contributions (papers, a dataset, and patents) in the fields of Digital Forensics and Computer Vision.
2

Oyucu, Saadin, and Huseyin Polat. "Online Video Content Analysis System." In 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT). IEEE, 2018. http://dx.doi.org/10.1109/ismsit.2018.8567320.

3

Xu, Min, Jesse S. Jin, and Suhuai Luo. "Personalized video adaptation based on video content analysis." In the 9th International Workshop. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1509212.1509216.

4

He, Yun, Cheng Du, and Tao Xie. "Video object analysis for content-based video coding." In Electronic Imaging '99, edited by Kiyoharu Aizawa, Robert L. Stevenson, and Ya-Qin Zhang. SPIE, 1998. http://dx.doi.org/10.1117/12.334694.

5

Abrams, David, and Steven McDowall. "Video Content Analysis with Effective Response." In 2007 IEEE Conference on Technologies for Homeland Security. IEEE, 2007. http://dx.doi.org/10.1109/ths.2007.370020.

6

Granmo, Ole-Christoffer. "Parallel hypothesis driven video content analysis." In the 2004 ACM symposium. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/967900.968035.

7

Sakarya, Ufuk, and Ziya Telatar. "Video content analysis using dominant sets." In 2009 IEEE 17th Signal Processing and Communications Applications Conference (SIU). IEEE, 2009. http://dx.doi.org/10.1109/siu.2009.5136549.

8

Alshuth, Peter, Thorsten Hermes, Lutz Voigt, and Otthein Herzog. "Video retrieval: content analysis by ImageMiner." In Photonics West '98 Electronic Imaging, edited by Ishwar K. Sethi and Ramesh C. Jain. SPIE, 1997. http://dx.doi.org/10.1117/12.298457.

9

Dimitrova, Nevenka, Thomas McGee, Lalitha Agnihotri, Serhan Dagtas, and Radu S. Jasinschi. "Selective video content analysis and filtering." In Electronic Imaging, edited by Minerva M. Yeung, Boon-Lock Yeo, and Charles A. Bouman. SPIE, 1999. http://dx.doi.org/10.1117/12.373567.

10

Li, Yongjie, Weiyi Li, and Houxiang Wang. "Dynamic video summarization with content analysis." In the Fifth International Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2499788.2499845.


Reports on the topic "Video content analysis"

1

Orchard, Michael, and Robert Joyce. Content Analysis of Video Sequences. Fort Belvoir, VA: Defense Technical Information Center, February 2002. http://dx.doi.org/10.21236/ada414069.

2

Brumby, Steven P. Video Analysis & Search Technology (VAST): Automated content-based labeling and searching for video and images. Office of Scientific and Technical Information (OSTI), May 2014. http://dx.doi.org/10.2172/1133765.

3

Baluk, Nadia, Natalia Basij, Larysa Buk, and Olha Vovchanska. VR/AR-TECHNOLOGIES – NEW CONTENT OF THE NEW MEDIA. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.

Abstract:
The article analyzes the peculiarities of how media content is shaped and transformed in the convergent dimension of cross-media, taking into account the possibilities of augmented reality. Guided by the principles of objectivity, complexity and reliability in scientific research, a number of general scientific and special methods are used: analysis, synthesis, generalization, monitoring, observation, and problem-thematic, typological and discursive methods. According to the form of information presentation, such types of media content as visual, audio, verbal and combined are defined and characterized. The most important type in journalism is verbal content, as it carries the main information load. The dynamic development of converged media leads to the dominance of image and video content, and text increasingly becomes secondary content. Given the market situation, an effective information product is combined content that pairs text with images, spreadsheets with video, animation with infographics, etc. An increasing number of new media are using applications and website platforms to interact with recipients. The article then determines the peculiarities of the new content of new media involving augmented reality. Examples of successful interactive communication between recipients, leading news agencies and commercial structures are provided. The conditions for the effective use of VR/AR technologies in the media content of new media, and for involving viewers in changing stories with augmented reality, are determined. The so-called immersive effect of VR/AR technologies involves the complete immersion of the interested audience in the essence of the event being relayed. This interaction can be achieved through different types of VR video interactivity.
One of the most important results of using VR content is the spatio-temporal and emotional immersion of viewers in the plot. The recipient turns from an external observer into an internal one, but sustained participation requires that user preferences be taken into account. Factors such as satisfaction, positive reinforcement, empathy, and value influence viewers' choice of VR/AR content.
4

Vlasenko, Kateryna V., Sergei V. Volkov, Daria A. Kovalenko, Iryna V. Sitak, Olena O. Chumak, and Alexander A. Kostikov. Web-based online course training higher school mathematics teachers. [б. в.], July 2020. http://dx.doi.org/10.31812/123456789/3894.

Abstract:
The article examines theoretical aspects of using Web 2.0 technology in higher education. It describes the answers of 87 respondents, which helped identify the most required types of educational content for integration into the pages of an online course training higher-school mathematics teachers. The authors carry out a theoretical analysis of research and resources concerned with the development of theoretical aspects of using web tools in higher education. The study presents the characteristics common to online courses and the principles for ensuring the functioning and physical placement of online systems in webspace. The paper discusses approaches to creating and using animated content in online systems. The authors describe methods of publishing video content in web systems, in particular the creation and use of video lectures, animation, and presentations. The paper also discusses several existing options for integrating presentations into web pages and methods of integrating mathematical expressions into web content. The authors conclude that it is expedient to promote online courses whose purpose is to acquaint mathematics teachers with the technical capabilities of creating educational content based on Web 2.0 technology.
5

Chorna, Olha V., Vita A. Hamaniuk, and Aleksandr D. Uchitel. Use of YouTube on lessons of practical course of German language as the first and second language at the pedagogical university. [б. в.], September 2019. http://dx.doi.org/10.31812/123456789/3253.

Abstract:
The integration of ICT significantly increases the possibilities of the educational process and extends the boundaries of the educational sphere as a whole. Publicly available resources, such as e-mail, blogs, forums, online applications, and video hosting sites, can serve as the basis for building open learning and education. Informational educational technologies for learning foreign languages are the focus of this study. The article presents the results of a theoretical analysis of content in terms of its personal and didactic orientation, as well as some aspects of the practical use of widely available YouTube video materials in teaching German as a first or second foreign language in higher education, namely at the pedagogical university. Drawing on practical experience with the materials of several relevant thematic YouTube channels with fairly wide regular audiences, a concise didactic analysis of their output is presented, and recommendations are offered on converting video content into methodological material within the framework of a practical German language course for future teachers. The suggested recommendations help solve the following tasks: enrichment of vocabulary; semantization of phraseological units, set figures of speech, and clichés; development of pronunciation skills; expansion of linguistic competence; improvement of listening and speaking skills; increased motivation to learn, etc.
6

Pikilnyak, Andrey V., Nadia M. Stetsenko, Volodymyr P. Stetsenko, Tetiana V. Bondarenko, and Halyna V. Tkachuk. Comparative analysis of online dictionaries in the context of the digital transformation of education. [б. в.], June 2021. http://dx.doi.org/10.31812/123456789/4431.

Abstract:
The article is devoted to a comparative analysis of popular online dictionaries and an overview of the main tools these resources offer for studying a language. The use of dictionaries in learning a foreign language is an important step toward understanding the language, and the effectiveness of this process increases with the use of online dictionaries, which provide many tools for improving the educational process. Based on the Alexa Internet resource, the most popular online dictionaries were identified: Cambridge Dictionary, Wordreference, Merriam–Webster, Wiktionary, TheFreeDictionary, Dictionary.com, Glosbe, Collins Dictionary, Longman Dictionary, and Oxford Dictionary. A detailed analysis of these online dictionaries showed that they share standard functions such as word explanations, transcription, audio pronunciation, semantic connections, and examples of use. In the examined dictionaries, we also identified additional tools for learning foreign languages (mostly English) that can be effective. In total, we describe sixteen functions of the online learning platforms that can be useful in learning a foreign language. We compiled a comparison table based on the following functions: machine translation, multilingualism, pronunciation video, word images, discussion, collaborative editing, word rank, hints, learning tools, thesaurus, paid services, content sharing, hyperlinks in definitions, registration, word lists, mobile version, etc. Based on the additional tools of online dictionaries, we created a diagram that shows the functionality of the analyzed platforms.
7

Guan, Haiying, Daniel Zhou, Jonathan Fiscus, John Garofolo, and James Horan. Evaluation infrastructure for the measurement of content-based video quality and video analytics performance. Gaithersburg, MD: National Institute of Standards and Technology, July 2017. http://dx.doi.org/10.6028/nist.ir.8187.

8

Frantseva, Anastasiya. The video lectures course "Elements of Mathematical Logic" for students enrolled in the Pedagogical education direction, profile Primary education. Frantseva Anastasiya Sergeevna, April 2021. http://dx.doi.org/10.12731/frantseva.0411.14042021.

Abstract:
The video lectures course is intended for full-time and part-time students enrolled in the "Pedagogical education" direction, profile "Primary education" or "Primary education - Additional education". The course consists of four lectures on the section "Elements of Mathematical Logic" of the discipline "Theoretical Foundations of the Elementary Course in Mathematics" for the profile "Primary Education". The main source of the lecture materials is a mathematics textbook for students of higher pedagogical educational institutions by L.P. Stoilova (Moscow: Academy, 2014, 464 p.). The content of the mathematics section considered is adapted to the professional needs of future primary school teachers and is accompanied by examples of practice exercises from elementary school mathematics textbooks. The course assumes productive learning activities that students should carry out while viewing. Studying logic contributes to the formation in students of this profile of such professional skills as "the ability to carry out pedagogical activities for the implementation of primary general education programs" and "the ability to develop methodological support for programs of primary general education." In addition, this section contributes to the formation of such universal and general professional skills as "the ability to perform search, critical analysis and synthesis of information, to apply a systematic approach to solving the assigned tasks" and "the ability to participate in the development of basic and additional educational programs, to design their individual components". The video lectures course was recorded at Irkutsk State University.
9

Rigotti, Christophe, and Mohand-Saïd Hacid. Representing and Reasoning on Conceptual Queries Over Image Databases. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.89.

Abstract:
The problem of content management of multimedia data types (e.g., image, video, graphics) is becoming increasingly important with the development of advanced multimedia applications. Traditional database management systems are inadequate for the handling of such data types. They require new techniques for query formulation, retrieval, evaluation, and navigation. In this paper we develop a knowledge-based framework for modeling and retrieving image data by content. To represent the various aspects of an image object's characteristics, we propose a model which consists of three layers: (1) Feature and Content Layer, intended to contain image visual features such as contours, shapes, etc.; (2) Object Layer, which provides the (conceptual) content dimension of images; and (3) Schema Layer, which contains the structured abstractions of images, i.e., a general schema about the classes of objects represented in the object layer. We propose two abstract languages on the basis of description logics: one for describing knowledge of the object and schema layers, and the other, more expressive, for making queries. Queries can refer to the form dimension (i.e., information of the Feature and Content Layer) or to the content dimension (i.e., information of the Object Layer). These languages employ a variable-free notation, and they are well suited for the design, verification and complexity analysis of algorithms. As the amount of information contained in the previous layers may be huge and operations performed at the Feature and Content Layer are time-consuming, resorting to the use of materialized views to process and optimize queries may be extremely useful. For that, we propose a formal framework for testing containment of a query in a view expressed in our query language. The algorithm we propose is sound and complete and relatively efficient.