Journal articles on the topic 'Video databases'

Consult the top 50 journal articles for your research on the topic 'Video databases.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Agarla, Mirko, Luigi Celona, and Raimondo Schettini. "An Efficient Method for No-Reference Video Quality Assessment." Journal of Imaging 7, no. 3 (2021): 55. http://dx.doi.org/10.3390/jimaging7030055.

Full text
Abstract:
Methods for No-Reference Video Quality Assessment (NR-VQA) of consumer-produced video content are largely investigated due to the spread of databases containing videos affected by natural distortions. In this work, we design an effective and efficient method for NR-VQA. The proposed method exploits a novel sampling module capable of selecting a predetermined number of frames from the whole video sequence on which to base the quality assessment. It encodes both the quality attributes and semantic content of video frames using two lightweight Convolutional Neural Networks (CNNs). Then, it estima
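
The sampling-then-encoding pipeline sketched above can be illustrated with a short PyTorch example. The layer sizes, the uniform sampling strategy, and the average-pooled regression head are illustrative assumptions, not the authors' architecture.

    # Minimal sketch of a frame-sampling NR-VQA pipeline: sample a fixed number of
    # frames, encode each with two lightweight CNNs (quality attributes and semantic
    # content), and regress a quality score from the pooled features.
    import torch
    import torch.nn as nn

    def sample_frames(video: torch.Tensor, num_frames: int = 8) -> torch.Tensor:
        """Uniformly sample num_frames frames from a (T, C, H, W) tensor."""
        idx = torch.linspace(0, video.shape[0] - 1, num_frames).long()
        return video[idx]

    class TinyCNN(nn.Module):
        """Lightweight per-frame encoder (a stand-in for the quality/semantic CNNs)."""
        def __init__(self, out_dim: int = 64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, out_dim)

        def forward(self, x):
            return self.fc(self.features(x).flatten(1))

    class NRVQASketch(nn.Module):
        def __init__(self):
            super().__init__()
            self.quality_cnn = TinyCNN()
            self.semantic_cnn = TinyCNN()
            self.head = nn.Linear(128, 1)        # pooled features -> quality score

        def forward(self, video):                # video: (T, 3, H, W)
            frames = sample_frames(video)
            feats = torch.cat([self.quality_cnn(frames),
                               self.semantic_cnn(frames)], dim=1)
            return self.head(feats.mean(dim=0))  # temporal average pooling

    print(NRVQASketch()(torch.rand(120, 3, 96, 96)).item())
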
2

Mozhaeva, Anastasia, Elizaveta Vashenko, Vladimir Selivanov, Alexei Potashnikov, Igor Vlasuyk, and Lee Streeter. "Analysis of current video databases for quality assessment." T-Comm 16, no. 2 (2022): 48–56. http://dx.doi.org/10.36724/2072-8735-2022-16-2-48-56.

Full text
Abstract:
The popularity of video streaming has grown significantly over the past few years. Video quality prediction metrics can be used to perform extensive video codec analysis and customize high-quality assurance. Video databases with subjective ratings form an important basis for training video quality metrics and codecs based on machine learning algorithms. More than three dozen subjective video databases are now available. In this article, modern video databases are presented, current databases are analyzed, and methods for improving them are identified. For the analysis, performance criteria are proposed based on s
3

Kamble, Shailesh D., Dilip Kumar Jang Bahadur Saini, Sachin Jain, Kapil Kumar, Sunil Kumar, and Dharmesh Dhabliya. "A novel approach of surveillance video indexing and retrieval using object detection and tracking." Journal of Interdisciplinary Mathematics 26, no. 3 (2023): 341–50. http://dx.doi.org/10.47974/jim-1665.

Full text
Abstract:
The problem of searching videos in large databases i.e. multimedia applications is a major challenge. Therefore, video indexing is used to search the location of the particular video in a large database quickly. Quickly locating the video in the large database is the good quality of indexing. Still, there is a scope of improvement in quickly searching a video in a large database in terms of assigning labels to video. In computer vision, real-time object detection and tracking is a gigantic, vibrant yet indecisive and intricate area. You only look once (YOLO) algorithm is used to detect the obj
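
As a rough illustration of detector-driven indexing of this kind, the sketch below builds an inverted index from object labels to (video, frame) positions; detect_labels is a hypothetical stand-in for a real YOLO detector, and the sampling stride is an arbitrary assumption.

    # Label-based video indexing: run a detector on sampled frames and record where
    # each object class appears, so queries such as "person" avoid a full scan.
    from collections import defaultdict

    def detect_labels(frame):
        """Stand-in for a YOLO-style detector; a real system would return the set
        of class labels detected in the frame.  Here it is a fixed stub."""
        return {"person"}

    class VideoIndex:
        def __init__(self):
            self.index = defaultdict(list)       # label -> [(video_id, frame_no), ...]

        def add_video(self, video_id, frames, stride=30):
            for frame_no, frame in enumerate(frames):
                if frame_no % stride:            # index only sampled frames
                    continue
                for label in detect_labels(frame):
                    self.index[label].append((video_id, frame_no))

        def query(self, label):
            return self.index.get(label, [])

    idx = VideoIndex()
    idx.add_video("cam01", frames=[None] * 120)  # 120 dummy frames
    print(idx.query("person"))                   # [('cam01', 0), ('cam01', 30), ...]
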
4

Jain, Ramesh, and Arun Hampapur. "Metadata in video databases." ACM SIGMOD Record 23, no. 4 (1994): 27–33. http://dx.doi.org/10.1145/190627.190638.

Full text
5

Patel, Nilesh V., and Ishwar K. Sethi. "Video shot detection and characterization for video databases." Pattern Recognition 30, no. 4 (1997): 583–92. http://dx.doi.org/10.1016/s0031-3203(96)00114-8.

Full text
6

Shearer, Kim, Svetha Venkatesh, and Dorota Kieronska. "Spatial Indexing for Video Databases." Journal of Visual Communication and Image Representation 7, no. 4 (1996): 325–35. http://dx.doi.org/10.1006/jvci.1996.0028.

Full text
7

Morand, Cl, J. Benois-Pineau, J. Ph Domenger, J. Zepeda, E. Kijak, and Ch Guillemot. "Scalable object-based video retrieval in HD video databases." Signal Processing: Image Communication 25, no. 6 (2010): 450–65. http://dx.doi.org/10.1016/j.image.2010.04.004.

Full text
8

Bertino, Elisa, Ahmed K. Elmagarmid, and Mohand-Saïd Hacid. "A database approach to quality of service specification in video databases." ACM SIGMOD Record 32, no. 1 (2003): 35–40. http://dx.doi.org/10.1145/640990.640995.

Full text
9

CH, Subrahmanyam, Venkata Rao D, and Usha Rani N. "Low bit Rate Video Quality Analysis Using NRDPF-VQA Algorithm." International Journal of Electrical and Computer Engineering (IJECE) 5, no. 1 (2015): 71. http://dx.doi.org/10.11591/ijece.v5i1.pp71-77.

Full text
Abstract:
In this work, we propose NRDPF-VQA (No Reference Distortion Patch Features Video Quality Assessment) model aims to use to measure the video quality assessment for H.264/AVC (Advanced Video Coding). The proposed method takes advantage of the contrast changes in the video quality by luminance changes. The proposed quality metric was tested by using LIVE video database. The experimental results show that the new index performance compared with the other NR-VQA models that require training on LIVE video databases, CSIQ video database, and VQEG HDTV video database. The values are compared
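
The contrast/luminance statistics this abstract alludes to can be approximated with simple per-frame features; the RMS-contrast definition and the pooling choices below are assumptions made for illustration, not the NRDPF-VQA algorithm itself.

    # Per-frame RMS contrast and its frame-to-frame change, pooled over the video.
    import numpy as np

    def rms_contrast(frame_gray):
        """RMS contrast = standard deviation of pixel luminance."""
        return float(frame_gray.std())

    def contrast_change_features(frames_gray):
        contrasts = np.array([rms_contrast(f) for f in frames_gray])
        deltas = np.abs(np.diff(contrasts))      # temporal contrast changes
        return {
            "mean_contrast": float(contrasts.mean()),
            "mean_contrast_change": float(deltas.mean()),
            "max_contrast_change": float(deltas.max()),
        }

    frames = [np.random.rand(72, 128) for _ in range(30)]   # synthetic luminance frames
    print(contrast_change_features(frames))
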
10

Chen, Shu-Ching, Na Zhao, and Mei-Ling Shyu. "Modeling Semantic Concepts and User Preferences in Content-Based Video Retrieval." International Journal of Semantic Computing 1, no. 3 (2007): 377–402. http://dx.doi.org/10.1142/s1793351x07000159.

Full text
Abstract:
In this paper, a user-centered framework is proposed for video database modeling and retrieval to provide appealing multimedia experiences on the content-based video queries. By incorporating the Hierarchical Markov Model Mediator (HMMM) mechanism, the source videos, segmented video shots, visual/audio features, semantic events, and high-level user perceptions are seamlessly integrated in a video database. With the hierarchical and stochastic design for video databases and semantic concept modeling, the proposed framework supports the retrieval for not only single events but also temporal sequ
11

Koprulu, Mesru, Nihan Kesim Cicekli, and Adnan Yazici. "Spatio-temporal querying in video databases." Information Sciences 160, no. 1-4 (2004): 131–52. http://dx.doi.org/10.1016/j.ins.2003.08.011.

Full text
12

Erozel, Guzen, Nihan Kesim Cicekli, and Ilyas Cicekli. "Natural language querying for video databases." Information Sciences 178, no. 12 (2008): 2534–52. http://dx.doi.org/10.1016/j.ins.2008.02.001.

Full text
13

Chan, Yung-Kuan, and Chin-Chen Chang. "Spatial Similarity Retrieval in Video Databases." Journal of Visual Communication and Image Representation 12, no. 2 (2001): 107–22. http://dx.doi.org/10.1006/jvci.2000.0460.

Full text
14

Varga, Domonkos. "No-Reference Video Quality Assessment Based on Benford’s Law and Perceptual Features." Electronics 10, no. 22 (2021): 2768. http://dx.doi.org/10.3390/electronics10222768.

Full text
Abstract:
No-reference video quality assessment (NR-VQA) has piqued the scientific community’s interest throughout the last few decades, owing to its importance in human-centered interfaces. The goal of NR-VQA is to predict the perceptual quality of digital videos without any information about their distortion-free counterparts. Over the past few decades, NR-VQA has become a very popular research topic due to the spread of multimedia content and video databases. For successful video quality evaluation, creating an effective video representation from the original video is a crucial step. In this paper, w
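
A minimal sketch of a Benford's-law feature of the kind mentioned above: the use of horizontal gradient magnitudes as the analysed quantity and the squared-deviation score are assumptions, not the paper's actual feature set.

    # First-significant-digit statistics compared against the ideal Benford distribution.
    import numpy as np

    BENFORD = np.log10(1.0 + 1.0 / np.arange(1, 10))          # P(d) for d = 1..9

    def first_digit_histogram(values):
        values = np.abs(values[values != 0])
        exponents = np.floor(np.log10(values))
        digits = (values / 10.0 ** exponents).astype(int)     # first significant digit
        counts = np.bincount(digits, minlength=10)[1:10]
        return counts / counts.sum()

    def benford_divergence(frame):
        grad = np.abs(np.diff(frame.astype(float), axis=1)).ravel()
        hist = first_digit_histogram(grad)
        return float(np.sum((hist - BENFORD) ** 2))           # simple squared deviation

    frame = np.random.rand(72, 128) * 255
    print(benford_divergence(frame))
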
15

Agarla, Mirko, Luigi Celona, and Raimondo Schettini. "No-Reference Quality Assessment of In-Capture Distorted Videos." Journal of Imaging 6, no. 8 (2020): 74. http://dx.doi.org/10.3390/jimaging6080074.

Full text
Abstract:
We introduce a no-reference method for the assessment of the quality of videos affected by in-capture distortions due to camera hardware and processing software. The proposed method encodes both quality attributes and semantic content of each video frame by using two Convolutional Neural Networks (CNNs) and then estimates the quality score of the whole video by using a Recurrent Neural Network (RNN), which models the temporal information. The extensive experiments conducted on four benchmark databases (CVD2014, KoNViD-1k, LIVE-Qualcomm, and LIVE-VQC) containing in-capture distortions demonstra
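
The per-frame CNN plus recurrent temporal model described above can be sketched as follows; the tiny encoder, the GRU size, and the use of the final hidden state are illustrative assumptions rather than the published architecture.

    # Frame features from a small CNN feed a GRU; the last hidden state gives the score.
    import torch
    import torch.nn as nn

    class FrameEncoder(nn.Module):
        def __init__(self, dim=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
            )

        def forward(self, frames):               # (T, 3, H, W) -> (T, dim)
            return self.net(frames)

    class TemporalQualityModel(nn.Module):
        def __init__(self, dim=32):
            super().__init__()
            self.encoder = FrameEncoder(dim)
            self.rnn = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)
            self.head = nn.Linear(dim, 1)

        def forward(self, video):                # (T, 3, H, W)
            feats = self.encoder(video).unsqueeze(0)     # (1, T, dim)
            _, h_n = self.rnn(feats)                     # h_n: (1, 1, dim)
            return self.head(h_n.squeeze(0)).squeeze()   # scalar quality score

    print(TemporalQualityModel()(torch.rand(60, 3, 64, 64)).item())
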
16

Aghbari, Z., K. Kaneko, and A. Makinouchi. "Content-trajectory approach for searching video databases." IEEE Transactions on Multimedia 5, no. 4 (2003): 466–81. http://dx.doi.org/10.1109/tmm.2003.819092.

Full text
17

Kuo, T. C. T., and A. L. P. Chen. "Content-based query processing for video databases." IEEE Transactions on Multimedia 2, no. 1 (2000): 1–13. http://dx.doi.org/10.1109/6046.825790.

Full text
18

Bertino, E., T. Catarci, A. K. Elmagarmid, and M. S. Hacid. "Quality of service specification in video databases." IEEE Multimedia 10, no. 4 (2003): 71–81. http://dx.doi.org/10.1109/mmul.2003.1237552.

Full text
19

Zhou, Xiangmin, Xiaofang Zhou, Lei Chen, and Athman Bouguettaya. "Efficient subsequence matching over large video databases." VLDB Journal 21, no. 4 (2011): 489–508. http://dx.doi.org/10.1007/s00778-011-0255-5.

Full text
20

Marti, Ni Wayan, and Luh Putu Tuti Ariani. "Pengembangan Konten Pembelajaran Mata Kuliah Basis Data Berbasis Micro-Learning di Program Studi S1 Ilmu Komputer-Undiksha" [Development of micro-learning-based learning content for the Database course in the Undiksha S1 Computer Science study program]. Jurnal Pendidikan Teknologi dan Kejuruan 20, no. 1 (2023): 1–12. http://dx.doi.org/10.23887/jptkundiksha.v20i1.54572.

Full text
Abstract:
This development research aims to develop learning content for the Database course by applying the micro-learning method in the Computer Science Study Program, Faculty of Engineering and Vocational, Undiksha. Micro-Learning is an evolutionary form of online learning and can be considered an innovative approach to 21st-century digital learning. Micro-Learning is a learning method in which learning content is presented in the form of short bite-sized pieces that focus on one learning topic. The development procedure used is the Lee & Owens development model with five stages, namely the analy
21

Teferi, Dereje, and Josef Bigun. "Damascening video databases for evaluation of face tracking and recognition – The DXM2VTS database." Pattern Recognition Letters 28, no. 15 (2007): 2143–56. http://dx.doi.org/10.1016/j.patrec.2007.06.007.

Full text
22

Ulomi, George S., Alex F. Mongi, and Mussa A. Dida. "Towards Efficient Video Codec for 360-degree Video Streaming over Broadband Network." Indian Journal of Science and Technology 18, no. 5 (2025): 357–65. https://doi.org/10.17485/ijst/v18i5.3964.

Full text
Abstract:
Objectives: This study presents the compression efficiency analysis of AV1, H.265/HEVC, and VVenc based on the Peak Signal-to-Noise Ratio (PSNR) and Video Multimethod Assessment Fusion (VMAF) objective quality metrics. Methods: The study utilizes video sets from publicly available databases and YouTube. The video sets were compressed using High Efficient Video Codec (HEVC/H.265) and Versatile Video Encoder (VVenc) based on Common Test Conditions (CTC) for fixed-quality encoding at different rates. For STV-AV1, we use quantization parameters which result in the rate nearly the same as that of C
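
PSNR, one of the two objective metrics used in the study, is straightforward to compute from decoded frames; the sketch below uses synthetic luma planes, and VMAF (which requires the separate libvmaf tool) is not reproduced here.

    # PSNR between a reference frame and a distorted frame.
    import numpy as np

    def psnr(reference, distorted, peak=255.0):
        mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    ref = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)            # luma plane
    dist = np.clip(ref + np.random.normal(0, 3, ref.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(ref, dist):.2f} dB")
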
23

Tripathi, Dr Rajeev. "Substantial Content Reclamation for Clustering." International Journal of Recent Technology and Engineering (IJRTE) 10, no. 3 (2021): 17–20. http://dx.doi.org/10.35940/ijrte.c6365.0910321.

Full text
Abstract:
The massive volume of data stored in computer files and databases is rapidly increasing. Users of these data, on the other hand, demand more complex information from databases. The video data have exponential growth towards accessing and storing. The vital problem associated to video data is efficient, qualitative and fast accessing. We talk about how video pictures are clustered. We presume video clips have been divided into shots, each of which is denoted by a collection of key frames. As a result, video clustering is limited to still key frame pictures. In amble database finding the qualifi
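
The key-frame clustering idea can be illustrated with colour-histogram features and k-means; both the histogram representation and the choice of k below are assumptions made only for the sketch.

    # Cluster key frames by colour histogram.
    import numpy as np
    from sklearn.cluster import KMeans

    def colour_histogram(frame_rgb, bins=8):
        """Concatenated per-channel histograms, normalised to sum to 1."""
        hists = [np.histogram(frame_rgb[..., c], bins=bins, range=(0, 256))[0]
                 for c in range(3)]
        h = np.concatenate(hists).astype(float)
        return h / h.sum()

    key_frames = [np.random.randint(0, 256, (72, 128, 3)) for _ in range(40)]
    features = np.stack([colour_histogram(f) for f in key_frames])
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
    print(labels)
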
24

Tripathi, Rajeev. "Substantial Content Reclamation for Clustering." International Journal of Recent Technology and Engineering (IJRTE) 10, no. 3 (2021): 17–20. https://doi.org/10.35940/ijrte.C6365.0910321.

Full text
Abstract:
The massive volume of data stored in computer files and databases is rapidly increasing. Users of these data, on the other hand, demand more complex information from databases. The video data have exponential growth towards accessing and storing. The vital problem associated to video data is efficient, qualitative and fast accessing. We talk about how video pictures are clustered. We presume video clips have been divided into shots, each of which is denoted by a collection of key frames. As a result, video clustering is limited to still key frame pictures. In amble database finding the qualifi
25

Hu, Wanrou, Gan Huang, Linling Li, Li Zhang, Zhiguo Zhang, and Zhen Liang. "Video‐triggered EEG‐emotion public databases and current methods: A survey." Brain Science Advances 6, no. 3 (2020): 255–87. http://dx.doi.org/10.26599/bsa.2020.9050026.

Full text
Abstract:
Emotions, formed in the process of perceiving external environment, directly affect human daily life, such as social interaction, work efficiency, physical wellness, and mental health. In recent decades, emotion recognition has become a promising research direction with significant application values. Taking the advantages of electroencephalogram (EEG) signals (i.e., high time resolution) and video‐based external emotion evoking (i.e., rich media information), video‐triggered emotion recognition with EEG signals has been proven as a useful tool to conduct emotion‐related studies in a laborator
26

Baloch, Abdul Rasheed, Ubaidullah Alias Kashif, Kashif Gul Chachar, and Maqsood Ali Solangi. "Video Copyright Detection Using High Level Objects in Video Clip." Sukkur IBA Journal of Computing and Mathematical Sciences 1, no. 2 (2017): 95. http://dx.doi.org/10.30537/sjcms.v1i2.33.

Full text
Abstract:
Latest advancements in online video databases have caused a huge violation of copyright material misuse. Usually a video clip having a proper copyright is available in online video databases like YouTube without permission of the owner. It remains available until the owner takes a notice and requests to the website manager to remove copyright material. The problem with this approach is that usually the copyright material is downloaded and watched illegally during the period of upload and subsequent removal on request of the owner. This study aims at presenting an automatic content based system
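
A content-based matching step of the kind such a system needs can be sketched with a simple average-hash fingerprint and Hamming-distance comparison; the hash, the 8x8 grid, and the decision threshold are assumptions, and production systems use far more robust fingerprints.

    # Average-hash frame fingerprints compared by Hamming distance.
    import numpy as np

    def average_hash(frame_gray, size=8):
        """Block-average the frame to size x size and threshold at the mean."""
        h, w = frame_gray.shape
        trimmed = frame_gray[: h - h % size, : w - w % size].astype(float)
        blocks = trimmed.reshape(size, trimmed.shape[0] // size,
                                 size, trimmed.shape[1] // size).mean(axis=(1, 3))
        return (blocks > blocks.mean()).astype(np.uint8).ravel()   # 64-bit fingerprint

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    reference = np.random.rand(72, 128)                 # frame from a protected clip
    candidate = np.clip(reference + np.random.normal(0, 0.02, reference.shape), 0, 1)
    d = hamming(average_hash(reference), average_hash(candidate))
    print("likely a copy" if d <= 10 else "different content", d)
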
27

Varga, Domonkos. "No-Reference Video Quality Assessment Using Multi-Pooled, Saliency Weighted Deep Features and Decision Fusion." Sensors 22, no. 6 (2022): 2209. http://dx.doi.org/10.3390/s22062209.

Full text
Abstract:
With the constantly growing popularity of video-based services and applications, no-reference video quality assessment (NR-VQA) has become a very hot research topic. Over the years, many different approaches have been introduced in the literature to evaluate the perceptual quality of digital videos. Due to the advent of large benchmark video quality assessment databases, deep learning has attracted a significant amount of attention in this field in recent years. This paper presents a novel, innovative deep learning-based approach for NR-VQA that relies on a set of in parallel pre-trained convo
28

Ghuge, C. A., V. Chandra Prakash, and Sachin D. Ruikar. "Weighed query-specific distance and hybrid NARX neural network for video object retrieval." Computer Journal 63, no. 11 (2019): 1738–55. http://dx.doi.org/10.1093/comjnl/bxz113.

Full text
Abstract:
Abstract The technical revolution in the field of video recording using the surveillance videos has increased the amount of the video databases that caused the need for an efficient video management system. This paper proposes a hybrid model using the nearest search algorithm (NSA) and the Levenberg–Marquardt (LM)-based non-linear autoregressive exogenous (NARX) neural network for performing the video object retrieval using the trajectories. Initially, the position of the objects in the video are retrieved using NSA and NARX individually, and they are averaged to determine the position of the
29

Hussain, Altaf, Mehtab Ahmad, Tariq Hussain, and Ijaz Ullah. "Efficient Content Based Video Retrieval System by Applying AlexNet on Key Frames." ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 11, no. 2 (2022): 207–35. http://dx.doi.org/10.14201/adcaij.27430.

Full text
Abstract:
The video retrieval system refers to the task of retrieving the most relevant video collection, given a user query. By applying some feature extraction models the contents of the video can be extracted. With the exponential increase in video data in online and offline databases as well as a huge implementation of multiple applications in health, military, social media, and art, the Content-Based Video Retrieval (CBVR) system has emerged. The CBVR system takes the inner contents of the video frame and analyses features of each frame, through which similar videos are retrieved from the database.
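
Key-frame retrieval with AlexNet features can be sketched with torchvision; the example leaves the weights at their random initialisation (a real system would load pretrained weights) and uses cosine similarity over flattened convolutional features as an assumed ranking function.

    # Embed key frames with AlexNet's convolutional trunk and rank by cosine similarity.
    import torch
    import torch.nn.functional as F
    import torchvision

    alexnet = torchvision.models.alexnet().eval()

    def embed(frame):
        """frame: (3, 224, 224) in [0, 1] -> flattened conv feature vector."""
        with torch.no_grad():
            return alexnet.features(frame.unsqueeze(0)).flatten(1).squeeze(0)

    database = {f"video_{i}": embed(torch.rand(3, 224, 224)) for i in range(5)}
    query = embed(torch.rand(3, 224, 224))
    ranking = sorted(database,
                     key=lambda vid: F.cosine_similarity(query, database[vid], dim=0).item(),
                     reverse=True)
    print(ranking)
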
30

Mirkovic, Milan, Petar Vrgovic, Dubravko Culibrk, Darko Stefanovic, and Andras Anderla. "Evaluating the Role of Content in Subjective Video Quality Assessment." Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/625219.

Full text
Abstract:
Video quality as perceived by human observers is the ground truth when Video Quality Assessment (VQA) is in question. It is dependent on many variables, one of them being the content of the video that is being evaluated. Despite the evidence that content has an impact on the quality score the sequence receives from human evaluators, currently available VQA databases mostly comprise of sequences which fail to take this into account. In this paper, we aim to identify and analyze differences between human cognitive, affective, and conative responses to a set of videos commonly used for VQA and a
31

Ulomi, George S., Alex F. Mongi, and Mussa A. Dida. "Towards Efficient Video Codec for 360-degree Video Streaming over Broadband Network." Indian Journal of Science and Technology 18, no. 5 (2025): 357–65. https://doi.org/10.17485/IJST/v18i5.3964.

Full text
Abstract:
Objectives: This study presents the compression efficiency analysis of AV1, H.265/HEVC, and VVenc based on the Peak Signal-to-Noise Ratio (PSNR) and Video Multimethod Assessment Fusion (VMAF) objective quality metrics. Methods: The study utilizes video sets from publicly available databases and YouTube. The video sets were compressed using High Efficient Video Codec (HEVC/H.265) and Versatile Video Encoder (VVenc) based on Common Test Conditions (CTC) for fixed-quality encoding at different rates. For STV-AV1, we use quantization parameters whic
32

Xu, Yanchao, Dongxiang Zhang, Shuhao Zhang, Sai Wu, Zexu Feng, and Gang Chen. "Predictive and Near-Optimal Sampling for View Materialization in Video Databases." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–27. http://dx.doi.org/10.1145/3639274.

Full text
Abstract:
Scalable video query optimization has re-emerged as an attractive research topic in recent years. The OTIF system, a video database with cutting-edge efficiency, has introduced a new paradigm of utilizing view materialization to facilitate online query processing. Specifically, it stores the results of multi-object tracking queries to answer common video queries with sub-second latency. However, the cost associated with view materialization in OTIF is prohibitively high for supporting large-scale video streams. In this paper, we study efficient MOT-based view materialization in video databases
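
The view-materialisation idea (run costly multi-object tracking once, store the tracks, answer later queries from the stored results) can be sketched with SQLite; run_tracker is a hypothetical stub and the single-table schema is an assumption.

    # Materialise tracker output once, then serve queries from the stored tracks.
    import sqlite3

    def run_tracker(video_path):
        """Stand-in for a real multi-object tracker; yields (frame_no, track_id, label)."""
        yield from [(0, 1, "car"), (1, 1, "car"), (1, 2, "person")]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tracks (video TEXT, frame INTEGER, track_id INTEGER, label TEXT)")

    def materialize(video_path):
        conn.executemany("INSERT INTO tracks VALUES (?, ?, ?, ?)",
                         [(video_path, f, t, lab) for f, t, lab in run_tracker(video_path)])

    def query_label(video_path, label):
        cur = conn.execute("SELECT DISTINCT track_id FROM tracks WHERE video=? AND label=?",
                           (video_path, label))
        return [row[0] for row in cur]

    materialize("cam01.mp4")
    print(query_label("cam01.mp4", "car"))       # -> [1]
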
33

Javanbakhti, Solmaz, Sveta Zinger, and Peter H. N. De With. "Fast scene analysis for surveillance & video databases." IEEE Transactions on Consumer Electronics 63, no. 3 (2017): 325–33. http://dx.doi.org/10.1109/tce.2017.014979.

Full text
34

Park, Sanghyun, and Ki-Ho Hyun. "Trie for similarity matching in large video databases." Information Systems 29, no. 8 (2004): 641–52. http://dx.doi.org/10.1016/s0306-4379(03)00046-2.

Full text
35

Dönderler, Mehmet Emin, Özgür Ulusoy, and Uğur Güdükbay. "Rule-based spatiotemporal query processing for video databases." The VLDB Journal 13, no. 1 (2004): 86–103. http://dx.doi.org/10.1007/s00778-003-0114-0.

Full text
36

MacKinnon, Gregory, and Conor Vibert. "Video databases: An emerging tool in business education." Education and Information Technologies 19, no. 1 (2012): 87–101. http://dx.doi.org/10.1007/s10639-012-9213-0.

Full text
37

Nasreen, Azra, and Shobha G. "Parallelizing Multi-featured Content Based Search and Retrieval of Videos through High Performance Computing." Indonesian Journal of Electrical Engineering and Computer Science 5, no. 1 (2017): 214. http://dx.doi.org/10.11591/ijeecs.v5.i1.pp214-219.

Full text
Abstract:
Video Retrieval is an important technology that helps to design video search engines and allow users to browse and retrieve videos of interest from huge databases. Though, there are many existing techniques to search and retrieve videos based on spatial and temporal features but are unable to perform well resulting in high ranking of irrelevant videos leading to poor user satisfaction. In this paper an efficient multi-featured method for matching and extraction is proposed in parallel paradigm to retrieve videos accurately and quickly from the collection. Proposed system is tested on
38

Xiao, Sihan. "More than Data: A Multivocal Inquiry into Video-Based Research on Learning and Teaching." ECNU Review of Education 1, no. 3 (2018): 23–35. http://dx.doi.org/10.30926/ecnuroe2018010302.

Full text
Abstract:
Purpose This commentary aims to echo Wilkinson, Bailey, and Maher's (this volume) arguments about the affordances of videos and video databases in studying learning and teaching. Design/Approach/Methods This article illustrates a multivocal approach to the videos from the Video Mosaic Collaborative (VMC). In particular, three mathematics teachers in Shanghai were invited to watch and discuss a set of VMC videos. Two recurring themes concerning mathematics learning and teaching were identified in this video-cued interview and discussed in relation to the VMC Analytics. Findings The VMC videos p
39

Thuraisingham, Bhavani. "Managing and Mining Multimedia Databases." International Journal on Artificial Intelligence Tools 13, no. 3 (2004): 739–59. http://dx.doi.org/10.1142/s0218213004001776.

Full text
Abstract:
Several advances have been made on managing multimedia databases as well as on data mining. Recently there is active research on mining multimedia databases. This paper provides an overview of managing multimedia databases and then describes issues on mining multimedia databases. In particular mining text, image, audio and video data are discussed.
40

Dong, X. C., and V. I. Ionin. "Using object-oriented databases in face recognition." «System analysis and applied information science», no. 2 (August 18, 2020): 54–60. http://dx.doi.org/10.21122/2309-4923-2020-2-54-60.

Full text
Abstract:
The aim of the work is to develop an algorithm functioning by a face recognition system using object-oriented databases. The system provides automatic identification of the desired object or identifies someone using a digital photo or video frame from a video source. The technology includes comparing pre-scanned face elements from the resulting image with prototypes of faces stored in the database. Modern packages of object-oriented databases give the user the opportunity to create a new class with the specified attributes and methods, obtain classes that inherit attributes and methods from su
41

Noetel, Michael, Shantell Griffith, Oscar Delaney, et al. "Video Improves Learning in Higher Education: A Systematic Review." Review of Educational Research 91, no. 2 (2021): 204–36. http://dx.doi.org/10.3102/0034654321990713.

Full text
Abstract:
Universities around the world are incorporating online learning, often relying on videos (asynchronous multimedia). We systematically reviewed the effects of video on learning in higher education. We searched five databases using 27 keywords to find randomized trials that measured the learning effects of video among college students. We conducted full-text screening, data extraction, and risk of bias in duplicate. We calculated pooled effect sizes using multilevel random-effects meta-analysis. Searches retrieved 9,677 unique records. After screening 329 full texts, 105 met inclusion criteria,
42

Moon, Nazmun Nessa, Imrus Salehin, Masuma Parvin, et al. "Natural language processing based advanced method of unnecessary video detection." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (2021): 5411. http://dx.doi.org/10.11591/ijece.v11i6.pp5411-5419.

Full text
Abstract:
In this study we have described the process of identifying unnecessary video using an advanced combined method of natural language processing and machine learning. The system also includes a framework that contains analytics databases and which helps to find statistical accuracy and can detect, accept or reject unnecessary and unethical video content. In our video detection system, we extract text data from video content in two steps, first from video to MPEG-1 audio layer 3 (MP3) and then from MP3 to WAV format. We have used the text part of natural language processing to analyze
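
The text-classification stage that follows transcription can be sketched with scikit-learn; the tiny training set, the keep/reject labels, and the choice of TF-IDF with Naive Bayes are assumptions made only for illustration.

    # Classify transcribed video text as worth keeping or not.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    transcripts = [
        "lecture on database indexing and query optimisation",
        "tutorial about convolutional neural networks",
        "clickbait prank compilation shocking you will not believe",
        "celebrity gossip drama exposed",
    ]
    labels = ["keep", "keep", "reject", "reject"]

    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(transcripts, labels)
    print(model.predict(["neural network lecture recording"]))   # expected: ['keep']
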
43

Moon, Nazmun Nessa, Imrus Salehin, Masuma Parvin, et al. "Natural language processing based advanced method of unnecessary video detection." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (2021): 5411–19. https://doi.org/10.11591/ijece.v11i6.pp5411-5419.

Full text
Abstract:
In this study we have described the process of identifying unnecessary video using an advanced combined method of natural language processing and machine learning. The system also includes a framework that contains analytics databases and which helps to find statistical accuracy and can detect, accept or reject unnecessary and unethical video content. In our video detection system, we extract text data from video content in two steps, first from video to MPEG-1 audio layer 3 (MP3) and then from MP3 to WAV format. We have used the text part of natural language processing to analyze and prepare
44

Witoonchart, Pareyaasiri, and Yun-Ju Huang. "Using Video Modeling in Enhance Social Skills to Children With Autism: A Literature Review." Ramathibodi Medical Journal 47, no. 2 (2024): 46–56. http://dx.doi.org/10.33165/rmj.2024.47.2.266424.

Full text
Abstract:
The objective of this research paper is to explore the advantages, limitations, and empirical evidence for the effectiveness of video modeling as an instructional approach for children with autism. Video modeling, which falls under assistive technology in therapeutic intervention strategies, utilizes videos to exhibit desired behaviors and competencies. A total of 28 research articles, carefully selected from 3 reputable publication resources (APA PsycNet, Springer, and Eric), were analyzed through content analysis. These articles were published in online databases between 2000 and 2024. The f
45

Thotakura, Vishnu Priya, and Purnachand Nalluri. "A novel hybrid feature extraction and ensemble C3D classification for anomaly detection in surveillance videos." Indonesian Journal of Electrical Engineering and Computer Science 30, no. 3 (2023): 1572. http://dx.doi.org/10.11591/ijeecs.v30.i3.pp1572-1585.

Full text
Abstract:
Anomaly detection in several deep learning frameworks are recently presented on real-time video databases as a challenging task. However, these frameworks have high false positive rate (FPR) and error rate due to various backgrounds, motion appearance and semantic high-level and low-level features for anomaly detection through action classification. Also, extraction of features and classification are the major problems in traditional convolution neural network (CNN) on real-time video databases. The proposed work is a novel action classification framework which is designed and implemented on l
46

Thotakura, Vishnu Priya, and Purnachand Nalluri. "A novel hybrid feature extraction and ensemble C3D classification for anomaly detection in surveillance videos." Indonesian Journal of Electrical Engineering and Computer Science 30, no. 3 (2023): 1572–85. https://doi.org/10.11591/ijeecs.v30.i3.pp1572-1585.

Full text
Abstract:
Anomaly detection in several deep learning frameworks are recently presented on real-time video databases as a challenging task. However, these frameworks have high false positive rate (FPR) and error rate due to various backgrounds, motion appearance and semantic high-level and low-level features for anomaly detection through action classification. Also, extraction of features and classification are the major problems in traditional convolution neural network (CNN) on real-time video databases. The proposed work is a novel action classification framework which is designed and implemented on l
47

Braczynski, Anne K., Bergita Ganse, Stephanie Ridwan, Christian Schlenstedt, Jörg B. Schulz, and Christoph Hoog Antink. "YouTube Videos on Parkinson’s Disease are a Relevant Source of Patient Information." Journal of Parkinson's Disease 11, no. 2 (2021): 833–42. http://dx.doi.org/10.3233/jpd-202513.

Full text
Abstract:
Background: Parkinson’s disease (PD) is the most frequent movement disorder. Patients access YouTube, one of the largest video databases in the world, to retrieve health-related information increasingly often. Objective: We aimed to identify high-quality publishers, so-called “channels” that can be recommended to patients. We hypothesized that the number of views and the number of uploaded videos were indicators for the quality of the information given by a video on PD. Methods: YouTube was searched for 8 combinations of search terms that included “Parkinson” in German. For each term, the firs
48

Pentland, Alex. "Content-based indexing of images and video." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 352, no. 1358 (1997): 1283–90. http://dx.doi.org/10.1098/rstb.1997.0111.

Full text
Abstract:
By representing image content using probabilistic models of an object's appearance we can obtain semantics–preserving compression of the image data. Such compact representations of an image's salient features allow rapid computer searches of even large image databases. Examples are shown for databases of face images, a video of American sign language (ASL), and a video of facial expressions.
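
The compact appearance representation described above can be approximated by projecting images onto a low-dimensional PCA basis and comparing coefficient vectors; the synthetic data and the 16-component basis are assumptions for the sketch.

    # PCA "eigen-image" codes compared by Euclidean distance.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    images = rng.random((100, 32 * 32))          # 100 flattened 32x32 images
    pca = PCA(n_components=16).fit(images)

    codes = pca.transform(images)                # compact 16-dimensional representations
    query = pca.transform(images[:1])
    nearest = int(np.argmin(np.linalg.norm(codes - query, axis=1)))
    print(nearest)                               # -> 0 (the query is its own nearest match)
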
49

Sioutas, Spyros, Kostas Tsichlas, Bill Vassiliadis, and Dimitrios Tsolis. "Efficient Access Methods for Temporal Interval Queries of Video Metadata." JUCS - Journal of Universal Computer Science 13, no. 10 (2007): 1411–33. https://doi.org/10.3217/jucs-013-10-1411.

Full text
Abstract:
Indexing video content is one of the most important problems in video databases. In this paper we present linear time and space algorithms for handling video metadata that represent objects or events present in various frames of the video sequence. To accomplish this, we make a straightforward reduction of this problem to the intersection problem in Computational Geometry. Our first result is an improvement over the one of V. S. Subrahmanian [Subramanian, 1998] by a logarithmic factor in storage. This is achieved by using different basic data structures. Then, we present two other interesting
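
The interval view of video metadata reduces such a query to interval intersection; the linear scan below shows only the semantics, whereas the paper's contribution is answering these queries with asymptotically better data structures.

    # All annotations whose frame interval overlaps the query interval.
    annotations = [
        ("goal scored", 120, 180),
        ("close-up",    150, 200),
        ("crowd shot",  400, 450),
    ]

    def overlapping(query_start, query_end):
        return [label for label, s, e in annotations
                if s <= query_end and query_start <= e]   # interval overlap test

    print(overlapping(170, 300))                 # -> ['goal scored', 'close-up']
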
50

Kosch, Harald, Ahmed Mostefaoui, László Böszörményi, and Lionel Brunie. "Heuristics for Optimizing Multi-Clip Queries in Video Databases." Multimedia Tools and Applications 22, no. 3 (2004): 235–62. http://dx.doi.org/10.1023/b:mtap.0000017030.45487.43.

Full text