
Journal articles on the topic 'Video methods'

Consult the top 50 journal articles for your research on the topic 'Video methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Wadia, Reena. "Live-video and video demonstration methods." British Dental Journal 228, no. 4 (2020): 253. http://dx.doi.org/10.1038/s41415-020-1309-0.

2

Gu, Chong, and Zhan Jun Si. "Applied Research of Assessment Methods on Video Quality." Applied Mechanics and Materials 262 (December 2012): 157–62. http://dx.doi.org/10.4028/www.scientific.net/amm.262.157.

Abstract:
With the rapid development of modern video technology, the range of video applications keeps increasing, including online video conferencing, online classrooms, online medicine, etc. However, because the quantity of video data is large, video has to be compressed and encoded appropriately, and the encoding process may introduce distortions into the video. Therefore, evaluating video quality efficiently and accurately is essential in the fields of video processing, video quality monitoring and multimedia video applications. In this article, subjective and comprehensive evaluation methods of video quality are introduced, and a video quality assessment system was built: four ITU-recommended videos were encoded in five different formats and evaluated with the Degradation Category Rating (DCR) and Structural Similarity (SSIM) methods, after which weighted comprehensive evaluations were applied. The results show that the data of all three evaluations have good consistency; H.264 is the best encoding method, followed by Xvid and WMV8; and the higher the encoding bit rate, the better the evaluations, although compared with 1000 kbps the subjective and objective evaluation scores at 1400 kbps do not improve noticeably. The process can also be used to evaluate new encoding methods, is applicable to high-definition video, and plays a significant role in promoting video quality evaluation and video encoding.
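The SSIM index used in the abstract above can be sketched in a few lines. This illustrative version (not the authors' code) computes the statistic once over a whole frame pair; the standard method averages the same statistic over local sliding windows.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified SSIM: one global statistic over the frame pair
    (the standard method averages this over local windows)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical frames score 1.0 and any distortion lowers the score, which is what makes SSIM usable as an objective quality measure alongside subjective DCR ratings.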
3

Palau, Roberta De Carvalho Nobre, Bianca Santos da Cunha Silveira, Robson André Domanski, et al. "Modern Video Coding: Methods, Challenges and Systems." Journal of Integrated Circuits and Systems 16, no. 2 (2021): 1–12. http://dx.doi.org/10.29292/jics.v16i2.503.

Abstract:
With the increasing demand for digital video applications in our daily lives, video coding and decoding become critical tasks that must be supported by several types of devices and systems. This paper presents a discussion of the main challenges in designing dedicated hardware architectures based on modern hybrid video coding formats, such as High Efficiency Video Coding (HEVC), AOMedia Video 1 (AV1) and Versatile Video Coding (VVC). The paper discusses each step of the hybrid video coding process, highlighting the main challenges for each codec and discussing the main hardware solutions published in the literature. The discussions presented in the paper show that there are still many challenges to be overcome and open research opportunities, especially for the AV1 and VVC codecs. Most of these challenges are related to the high throughput required for processing high and ultra-high resolution videos in real time and to the energy constraints of multimedia-capable devices.
4

Zhang, Decheng, and Jinxin Chen. "Visual Thinking Methods and Training in Video Production." International Journal for Innovation Education and Research 7, no. 12 (2019): 499–507. http://dx.doi.org/10.31686/ijier.vol7.iss12.2099.

Abstract:
"A picture is worth a thousand words." The Internet has brought people into the era of picture reading: pictures and videos are everywhere. Dynamic video, which combines sound and image with documentary qualities, has become a popular media form for the public. Mobile phone video shooting and production are convenient, so the popularization of video production and dissemination has become inevitable. However, the creation of artistic and innovative video works requires producers to master certain visual thinking methods in addition to film montage theories and techniques. This article briefly outlines the formation of the concept of visual thinking and proposes four methods of visual thinking: the intuitive method, the selection method, the discovery method and the inquiry method. Through a case, it analyzes several visual thinking methods used in video production, such as the visualization of textual information, the figuration of images, the logic of the concrete and the systematization of logic. We have also studied practical visual thinking training methods across the three stages of video production: script creation, shooting practice and video packaging.
5

Patel, Rahul S., Gajanan P. Khapre, and R. M. Mulajkr. "Video Retrieval Systems Methods, Techniques, Trends and Challenges." International Journal of Trend in Scientific Research and Development 2, no. 1 (2017): 72–81. http://dx.doi.org/10.31142/ijtsrd5862.

6

Puyda, V., and A. Stoian. "On Methods of Object Detection in Video Streams." Computer Systems and Network 2, no. 1 (2017): 80–87. http://dx.doi.org/10.23939/csn2020.01.080.

Abstract:
Detecting objects in a video stream is a typical problem in modern computer vision systems that are used in multiple areas. Object detection can be done both on static images and on frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities which can be treated as physical objects. Besides that, operations such as finding the coordinates, size and other characteristics of these non-uniformities can be executed and used to solve other computer vision problems, such as object identification. In this paper, we study three algorithms which can be used to detect objects of different nature and are based on different approaches: detection of color non-uniformities, frame difference and feature detection. As the input data, we use a video stream obtained from a video camera or from an mp4 video file. Simulations and testing of the algorithms were done on a universal computer based on open-source hardware, built on the Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC with a clock frequency of 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10 and on a universal computer running Linux (Raspbian Buster OS) for open-source hardware. In the paper, the methods under consideration are compared. The results of the paper can be used in the research and development of modern computer vision systems used for different purposes. Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
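The frame-difference approach described in this abstract can be sketched without OpenCV. The hypothetical `detect_motion` helper below (an illustration, not the paper's implementation) thresholds the absolute inter-frame difference and returns the bounding box of the changed pixels:

```python
import numpy as np

def detect_motion(prev_frame, frame, thresh=25):
    """Frame-difference detection: threshold the absolute difference of
    two grayscale frames and return the bounding box (x0, y0, x1, y1)
    of the changed pixels, or None when nothing moved."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```

In a real pipeline the same idea is usually expressed with `cv2.absdiff` plus `cv2.threshold`, with morphological cleanup before extracting object coordinates.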
7

Choe, Jaeryun, Haechul Choi, Heeji Han, and Daehyeok Gwon. "Novel video coding methods for versatile video coding." International Journal of Computational Vision and Robotics 11, no. 5 (2021): 526. http://dx.doi.org/10.1504/ijcvr.2021.10040489.

8

Han, Heeji, Daehyeok Gwon, Jaeryun Choe, and Haechul Choi. "Novel video coding methods for versatile video coding." International Journal of Computational Vision and Robotics 11, no. 5 (2021): 526. http://dx.doi.org/10.1504/ijcvr.2021.117582.

9

Anto Crescentia, A., and G. Sujatha. "An Overview of Digital Video Tampering Detection Using Passive Methods and D-Hash Algorithm." International Journal of Engineering & Technology 7, no. 4.6 (2018): 373. http://dx.doi.org/10.14419/ijet.v7i4.6.28444.

Abstract:
Video tampering can be defined as the alteration of the contents of a video to hide objects or an event, or to change the meaning conveyed by the sequence of images in the video. Modification of video content is growing rapidly due to the expansion of video acquisition devices and powerful video editing software tools, so the verification of video files is becoming very important. Video integrity verification aims to find traces of tampering and thereby assess the authenticity and integrity of the video. These strategies may be classified into active and passive techniques. Our concern in this paper is to present our views on different passive video tampering detection strategies and integrity checks. Passive video tampering detection methods are grouped into the following three categories depending on the type of forgery: detection of double or multiple compressed videos, region tampering detection, and video inter-frame forgery detection. To detect tampering, the video is split into frames and a hash is generated for a group of frames, referred to as a Group of Pictures. This hash value is verified by the receiver to detect tampering.
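As an illustration of the hashing step, here is a minimal difference hash (dHash) over a single grayscale frame, given as a list of pixel rows. The paper hashes whole Groups of Pictures; the function below is a generic sketch of dHash itself, not the authors' algorithm:

```python
def dhash(gray, hash_w=8, hash_h=8):
    """Difference hash: shrink the frame to (hash_w+1) x hash_h by
    sampling, then emit one bit per horizontally adjacent pixel pair
    (left pixel brighter => 1). Returns the bits packed into an int."""
    h, w = len(gray), len(gray[0])
    small = [[gray[r * h // hash_h][c * w // (hash_w + 1)]
              for c in range(hash_w + 1)]
             for r in range(hash_h)]
    bits = 0
    for row in small:
        for a, b in zip(row, row[1:]):
            bits = (bits << 1) | (a > b)
    return bits
```

Because the hash depends only on coarse brightness gradients, an untampered re-encoding keeps the Hamming distance between sender-side and receiver-side hashes small, while edited frames change many bits.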
10

Majumdar, Jharna, and B. Spoorthy. "Comparisons of Video Summarization Methods." IOSR Journal of Computer Engineering 16, no. 5 (2014): 52–56. http://dx.doi.org/10.9790/0661-16525256.

11

Rose, Emma, and Alison Cardinal. "Participatory video methods in UX." Communication Design Quarterly 6, no. 2 (2018): 9–20. http://dx.doi.org/10.1145/3282665.3282667.

12

Sokolova, A. I., and A. S. Konushin. "Methods of gait recognition in video." Proceedings of the Institute for System Programming of the RAS 31, no. 1 (2019): 69–82. http://dx.doi.org/10.15514/ispras-2018-31(1)-5.

13

Sokolova, A. I., and A. S. Konushin. "Methods of gait recognition in video." Proceedings of the Institute for System Programming of the RAS 31, no. 1 (2019): 69–82. http://dx.doi.org/10.15514/ispras-2019-31(1)-5.

14

Cowan, Crispin, Shanwei Cen, Jonathan Walpole, and Calton Pu. "Adaptive methods for distributed video presentation." ACM Computing Surveys 27, no. 4 (1995): 580–83. http://dx.doi.org/10.1145/234782.234794.

15

Sokolova, A., and A. Konushin. "Methods of Gait Recognition in Video." Programming and Computer Software 45, no. 4 (2019): 213–20. http://dx.doi.org/10.1134/s0361768819040091.

16

Wang, Meiqing, and Choi-Hong Lai. "Grey video compression methods using fractals." International Journal of Computer Mathematics 84, no. 11 (2007): 1567–90. http://dx.doi.org/10.1080/00207160601178299.

17

Draréni, Jamil, Sébastien Roy, and Peter Sturm. "Methods for geometrical video projector calibration." Machine Vision and Applications 23, no. 1 (2011): 79–89. http://dx.doi.org/10.1007/s00138-011-0322-3.

18

Delforouzi, Ahmad, Bhargav Pamarthi, and Marcin Grzegorzek. "Training-Based Methods for Comparison of Object Detection Methods for Visual Object Tracking." Sensors 18, no. 11 (2018): 3994. http://dx.doi.org/10.3390/s18113994.

Abstract:
Object tracking in challenging videos is a hot topic in machine vision. Recently, novel training-based detectors, especially using the powerful deep learning schemes, have been proposed to detect objects in still images. However, there is still a semantic gap between the object detectors and higher-level applications like object tracking in videos. This paper presents a comparative study of outstanding learning-based object detectors such as ACF, Region-Based Convolutional Neural Network (RCNN), FastRCNN, FasterRCNN and You Only Look Once (YOLO) for object tracking. We use an online and an offline training method for tracking. The online tracker trains the detectors with a synthetic set of images generated from the object of interest in the first frame. Then, the detectors detect the objects of interest in the next frames. The detector is updated online by using the detected objects from the last frames of the video. The offline tracker uses the detector for object detection in still images, and then a tracker based on a Kalman filter associates the objects among video frames. Our research is performed on the TLD dataset, which contains challenging situations for tracking. Source code and implementation details for the trackers are published to enable both reproduction of the results reported in this paper and re-use and further development of the trackers by other researchers. The results demonstrate that the ACF and YOLO trackers show more stability than the other trackers.
19

Jeon, Myunghoon, and Byoung-Dai Lee. "Toward Content-Aware Video Partitioning Methods for Distributed HEVC Video Encoding." International Journal of Electrical and Computer Engineering (IJECE) 5, no. 3 (2015): 569. http://dx.doi.org/10.11591/ijece.v5i3.pp569-578.

Abstract:
Recently, cloud computing has emerged as a potential platform for distributed video encoding due to its advantages in terms of cost as well as performance. For distributed video encoding, the input video must be partitioned into several segments, each of which is processed over distributed resources. This paper describes the effect of different video partitioning schemes on overall encoding performance in the distributed encoding of High-Efficiency Video Coding (HEVC). In addition, we explored the performance of video partitioning schemes on the basis of the type of content to be encoded.
20

LeBaron, Curtis, Paula Jarzabkowski, Michael G. Pratt, and Greg Fetzer. "An Introduction to Video Methods in Organizational Research." Organizational Research Methods 21, no. 2 (2017): 239–60. http://dx.doi.org/10.1177/1094428117745649.

Abstract:
Video has become a methodological tool of choice for many researchers in social science, but video methods are relatively new to the field of organization studies. This article is an introduction to video methods. First, we situate video methods relative to other kinds of research, suggesting that video recordings and analyses can be used to replace or supplement other approaches, not only observational studies but also retrospective methods such as interviews and surveys. Second, we describe and discuss various features of video data in relation to ontological assumptions that researchers may bring to their research design. Video involves both opportunities and pitfalls for researchers, who ought to use video methods in ways that are consistent with their assumptions about the world and human activity. Third, we take a critical look at video methods by reporting progress that has been made while acknowledging gaps and work that remains to be done. Our critical considerations point repeatedly at articles in this special issue, which represent recent and important advances in video methods.
21

Chen, Hanqing, Chunyan Hu, Feifei Lee, et al. "A Supervised Video Hashing Method Based on a Deep 3D Convolutional Neural Network for Large-Scale Video Retrieval." Sensors 21, no. 9 (2021): 3094. http://dx.doi.org/10.3390/s21093094.

Abstract:
Recently, with the popularization of camera tools such as mobile phones and the rise of various short video platforms, many videos are being uploaded to the Internet at all times, for which a video retrieval system with fast retrieval speed and high precision is very necessary. Therefore, content-based video retrieval (CBVR) has aroused the interest of many researchers. A typical CBVR system mainly contains the following two essential parts: video feature extraction and similarity comparison. Feature extraction from video is very challenging; previous video retrieval methods are mostly based on extracting features from single video frames, resulting in the loss of temporal information in the videos. Hashing methods are extensively used in multimedia information retrieval due to their retrieval efficiency, but most of them are currently applied only to image retrieval. In order to solve these problems in video retrieval, we build an end-to-end framework called deep supervised video hashing (DSVH), which employs a 3D convolutional neural network (CNN) to obtain the spatial-temporal features of videos, then trains a set of hash functions by supervised hashing to transfer the video features into binary space and obtain compact binary codes of the videos. Finally, we use triplet loss for network training. We conduct extensive experiments on three public video datasets, UCF-101, JHMDB and HMDB-51, and the results show that the proposed method has advantages over many state-of-the-art video retrieval methods. Compared with the DVH method, the mAP value on the UCF-101 dataset is improved by 9.3%, and the minimum improvement, on the JHMDB dataset, is 0.3%. At the same time, we also demonstrate the stability of the algorithm on the HMDB-51 dataset.
22

Gavic, Lidia, Martina Marcelja, Kristina Gorseta, and Antonija Tadin. "Comparison of Different Methods of Education in the Adoption of Oral Health Care Knowledge." Dentistry Journal 9, no. 10 (2021): 111. http://dx.doi.org/10.3390/dj9100111.

Abstract:
Aim: The scope of this study was to determine if there is a critical distinction in the usage of lectures, videos, and pamphlets as educational material utilized in the adoption of oral health care knowledge. Materials and methods: Three hundred and thirty children from ages 11 to 13 from the city of Split, Croatia completed the questionnaire on oral health care knowledge. Consequently, they were educated by randomly using a method: lecture, pamphlet, or video. Finally, after education, their knowledge was tested again. Results: Different statistical tests were used for comparison of different sets of data. The Wilcoxon signed-rank test showed a statistically significant difference (p < 0.001) compared to the results before and after education. The Kruskal–Wallis test comparing knowledge outcomes after three different types of education: video, lecture, and pamphlet, showed a statistically significant difference in the final knowledge between groups (p < 0.05). A pairwise comparison between different types of education showed a significant statistical difference between education conducted by pamphlet and video material (p = 0.003) and pamphlet and lecture (p = 0.006). No difference was observed between the level of knowledge acquired through video material education and lectures (p = 0.928). Conclusion: Videos and lectures as means of education showed equal effectiveness in the adoption of oral health care knowledge, while the pamphlet was a method that proved to be less effective.
23

Agarla, Mirko, Luigi Celona, and Raimondo Schettini. "An Efficient Method for No-Reference Video Quality Assessment." Journal of Imaging 7, no. 3 (2021): 55. http://dx.doi.org/10.3390/jimaging7030055.

Abstract:
Methods for No-Reference Video Quality Assessment (NR-VQA) of consumer-produced video content are largely investigated due to the spread of databases containing videos affected by natural distortions. In this work, we design an effective and efficient method for NR-VQA. The proposed method exploits a novel sampling module capable of selecting a predetermined number of frames from the whole video sequence on which to base the quality assessment. It encodes both the quality attributes and semantic content of video frames using two lightweight Convolutional Neural Networks (CNNs). Then, it estimates the quality score of the entire video using a Support Vector Regressor (SVR). We compare the proposed method against several relevant state-of-the-art methods using four benchmark databases containing user generated videos (CVD2014, KoNViD-1k, LIVE-Qualcomm, and LIVE-VQC). The results show that the proposed method at a substantially lower computational cost predicts subjective video quality in line with the state of the art methods on individual databases and generalizes better than existing methods in cross-database setup.
24

Yin, Xiao-lei, Dong-xue Liang, Lu Wang, et al. "Analysis of Coronary Angiography Video Interpolation Methods to Reduce X-ray Exposure Frequency Based on Deep Learning." Cardiovascular Innovations and Applications 6, no. 1 (2021): 17–24. http://dx.doi.org/10.15212/cvia.2021.0011.

Abstract:
Cardiac coronary angiography is a major technique that assists physicians during interventional heart surgery. Under X-ray irradiation, the physician injects a contrast agent through a catheter and determines the coronary arteries’ state in real time. However, to obtain a more accurate state of the coronary arteries, physicians need to increase the frequency and intensity of X-ray exposure, which will inevitably increase the potential for harm to both the patient and the surgeon. In the work reported here, we use advanced deep learning algorithms to find a method of frame interpolation for coronary angiography videos that reduces the frequency of X-ray exposure by reducing the frame rate of the coronary angiography video, thereby reducing X-ray-induced damage to physicians. We established a new coronary angiography image group dataset containing 95,039 groups of images extracted from 31 videos. Each group includes three consecutive images, which are used to train the video interpolation network model. We apply six popular frame interpolation methods to this dataset to confirm that the video frame interpolation technology can reduce the video frame rate and reduce exposure of physicians to X-rays.
25

Mercaldo-Allen, R., P. Clark, Y. Liu, et al. "Exploring video and eDNA metabarcoding methods to assess oyster aquaculture cages as fish habitat." Aquaculture Environment Interactions 13 (August 12, 2021): 277–94. http://dx.doi.org/10.3354/aei00408.

Abstract:
Multi-tiered oyster aquaculture cages may provide habitat for fish assemblages similar to natural structured seafloor. Methods were developed to assess fish assemblages associated with aquaculture gear and boulder habitat using underwater video census combined with environmental DNA (eDNA) metabarcoding. Action cameras were mounted on 3 aquaculture cages at a commercial eastern oyster Crassostrea virginica farm (‘cage’) and among 3 boulders on a natural rock reef (‘boulder’) from June to August 2017 in Long Island Sound, USA. Interval and continuous video recording strategies were tested. During interval recording, cameras collected 8 min video segments hourly from 07:00 to 19:00 h on cages only. Continuous video was also collected for 2-3 h on oyster cages and boulders. Data loggers recorded light intensity and current speed. Seawater was collected for eDNA metabarcoding on the reef and farm. MaxN measurements of fish abundance were calculated in video, and 7 fish species were observed. Black sea bass Centropristis striata, cunner Tautogolabrus adspersus, scup Stenotomus chrysops, and tautog Tautoga onitis were the most abundant species observed in both oyster cage and boulder videos. In continuous video, black sea bass, scup, and tautog were observed more frequently and at higher abundance on the cage farm, while cunner were observed more frequently and at higher abundance on boulders within the rock reef. eDNA metabarcoding detected 42 fish species at the farm and reef. Six species were detected using both methods. Applied in tandem, video recording and eDNA provided a comprehensive approach for describing fish assemblages in difficult to sample structured oyster aquaculture and boulder habitats.
26

Umar, Rusydi, Abdu Fadlil, and Alfiansyah Imanda Putra. "Analisis Forensics Untuk Mendeteksi Pemalsuan Video." J-SAKTI (Jurnal Sains Komputer dan Informatika) 3, no. 2 (2019): 193. http://dx.doi.org/10.30645/j-sakti.v3i2.140.

Abstract:
Current technology shows how easily crimes can be committed using computer science in the field of video editing: video editing software is becoming more abundant and easier to use over time, but it is widely misused by video creators to produce manipulated hoax videos that cause disputes, and many of the videos in circulation therefore cannot be trusted by the public. Counterfeiting is the act of modifying documents, products, images or videos, among other media. Video forensics is a scientific research method that aims to obtain evidence and facts in determining the authenticity of a video, and it forms the basis of this research on detecting video falsification. This study uses analysis with two forensic tools, Forevid and VideoCleaner. The result of this study is the detection of differences in the metadata, hashes and contrast of original videos and manipulated videos.
27

Gorucu-Coskuner, Hande, Ezgi Atik, and Tulin Taner. "Comparison of Live-Video and Video Demonstration Methods in Clinical Orthodontics Education." Journal of Dental Education 84, no. 1 (2020): 44–50. http://dx.doi.org/10.21815/jde.019.161.

28

Prasath, V. B. Surya. "Video denoising with adaptive temporal averaging." Engineering review 39, no. 3 (2019): 243–47. http://dx.doi.org/10.30765/er.39.3.05.

Abstract:
Recently, the proliferation of digital videos has increased exponentially due to the availability of consumer cameras. Despite improvements in sensor technologies, one of the fundamental problems is noise affecting the video scenes. Adaptive, pixel-wise temporal averaging methods have recently been advocated for denoising videos. In this work, we use the edge maps of frames within temporal averaging to guide the denoising away from the edges. This allows the filtering to remove noise in intermediate flat regions while better respecting object boundaries. The experimental results indicate improved video denoising in comparison with other filtering methods.
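A minimal sketch of the edge-guided temporal averaging idea, assuming the caller supplies a per-pixel edge map normalized to [0, 1] (for example a scaled gradient magnitude); the paper's exact weighting scheme is not reproduced here:

```python
import numpy as np

def temporal_denoise(frames, edge_strength):
    """Adaptive temporal averaging: blend the middle frame with the plain
    temporal mean of the frame window, pulling back toward the original
    frame where the edge map is strong (1 = strong edge), so that moving
    object boundaries are not smeared while flat regions are averaged."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    mean = stack.mean(axis=0)          # plain temporal average
    center = stack[len(frames) // 2]   # the frame being denoised
    return edge_strength * center + (1.0 - edge_strength) * mean
```

With `edge_strength` set to zero everywhere this reduces to plain temporal averaging; setting it to one at edge pixels leaves those pixels untouched.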
29

Li, Jiafeng, Chenhao Li, Jihong Liu, Jing Zhang, Li Zhuo, and Meng Wang. "Personalized Mobile Video Recommendation Based on User Preference Modeling by Deep Features and Social Tags." Applied Sciences 9, no. 18 (2019): 3858. http://dx.doi.org/10.3390/app9183858.

Abstract:
With the explosive growth of mobile videos, helping users quickly and effectively find mobile videos of interest and further provide personalized recommendation services are the developing trends of mobile video applications. Mobile videos are characterized by their wide variety, single content, and short duration, and thus traditional personalized video recommendation methods cannot produce effective recommendation performance. Therefore, a personalized mobile video recommendation method is proposed based on user preference modeling by deep features and social tags. The main contribution of our work is three-fold: (1) deep features of mobile videos are extracted by an improved exponential linear units-3D convolutional neural network (ELU-3DCNN) for representing video content; (2) user preference is modeled by combining user preference for deep features with user preference for social tags that are respectively modeled by maximum likelihood estimation and exponential moving average method; (3) a personalized mobile video recommendation system based on user preference modeling is built after detecting key frames with a differential evolution optimization algorithm. Experiments on YouTube-8M dataset have shown that our method outperforms state-of-the-art methods in terms of both precision and recall of personalized mobile video recommendation.
30

Chen, Xiaojuan, and Huiwen Deng. "Research on Personalized Recommendation Methods for Online Video Learning Resources." Applied Sciences 11, no. 2 (2021): 804. http://dx.doi.org/10.3390/app11020804.

Abstract:
It is not easy to quickly find learning materials of interest in the vast amount of online learning material. The purpose of this study is to identify students' interests from their learning behaviors on the network and to recommend related video learning materials. For students who have not left an evaluation record in the learning platform, the association rule algorithm from data mining is used to find and recommend the videos that students are interested in. For students who have evaluation records in the platform, we use the item-based collaborative filtering algorithm from machine learning, with the Pearson correlation coefficient method to find highly similar video materials, and then recommend the learning materials they are interested in. The two methods are used in different situations, so all students in the learning platform can receive recommendations. In application, our methods can reduce data search time, improve the stickiness of the platform, alleviate information overload, and meet the personalized needs of learners.
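The item-based similarity step can be illustrated with a plain Pearson correlation over the users who rated both videos; the `pearson` helper below is a generic sketch, not the study's implementation:

```python
from math import sqrt

def pearson(ratings_a, ratings_b):
    """Pearson correlation between two videos' rating vectors,
    computed over the users who rated both (dict: user -> rating).
    Returns 0.0 when too few co-raters or a degenerate vector."""
    common = ratings_a.keys() & ratings_b.keys()
    n = len(common)
    if n < 2:
        return 0.0
    xs = [ratings_a[u] for u in common]
    ys = [ratings_b[u] for u in common]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)
```

An item-based recommender then ranks the videos most correlated with those a student has already rated highly.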
31

Shieh, Chin-Shiuh, Yong-Shixa Jhan, Yuan-Li Liu, Mong-Fong Horng, and Tsair-Fwu Lee. "Video Object Tracking with Heuristic Optimization Methods." Journal of Image and Graphics 6, no. 2 (2018): 95–99. http://dx.doi.org/10.18178/joig.6.2.95-99.

32

Thanga Ramya and Rangarajan. "Knowledge Based Methods for Video Data Retrieval." International Journal of Computer Science and Information Technology 3, no. 5 (2011): 165–72. http://dx.doi.org/10.5121/ijcsit.2011.3514.

33

Johnson, Brenda K. "Model What You Teach: Science Methods Video." School Science and Mathematics 88, no. 6 (1988): 476–79. http://dx.doi.org/10.1111/j.1949-8594.1988.tb11840.x.

34

Sethulekshmi, U. S., R. S. Remya, and Mili Rosline Mathews. "A Survey on Digital Video Authentication Methods." International Journal of Computer Trends and Technology 22, no. 1 (2015): 35–40. http://dx.doi.org/10.14445/22312803/ijctt-v22p108.

35

Gunturk, B. K., Y. Altunbasak, and R. M. Mersereau. "Multiframe resolution-enhancement methods for compressed video." IEEE Signal Processing Letters 9, no. 6 (2002): 170–74. http://dx.doi.org/10.1109/lsp.2002.800503.

36

Forero, A., L. Barrero, J. Quiroga, F. Calderón, and L. Quintana. "Video based methods for observing pedestrians behaviour." Injury Prevention 18, Suppl 1 (2012): A228.2—A228. http://dx.doi.org/10.1136/injuryprev-2012-040590w.13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

DeCuir-Gunby, Jessica T., Patricia L. Marshall, and Allison W. McCulloch. "Using Mixed Methods to Analyze Video Data." Journal of Mixed Methods Research 6, no. 3 (2011): 199–216. http://dx.doi.org/10.1177/1558689811421174.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ackfeldt, Anders. "Methods for Studying Video Games and Religion." CyberOrient 14, no. 2 (2020): 107–9. http://dx.doi.org/10.1002/j.1804-3194.2020.tb00007.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Pilka, Filip, and Miloš Oravec. "Prediction Methods for MPEG-4 and H.264 Video Transmission." Journal of Electrical Engineering 62, no. 2 (2011): 57–64. http://dx.doi.org/10.2478/v10187-011-0010-6.

Full text
Abstract:
Video services have become a large part of Internet network traffic; therefore, an understanding of video coding standards and of video traffic sources, such as video trace files, is highly important. In this paper we concentrate on the basic characteristics of the MPEG-4 and H.264 video coding standards. We describe the concept of the I, P and B frames in these standards, since they are the main feature of every video trace file. We then describe the content of the video trace files, since trace files are important for researchers investigating network performance and understanding network features. These are important issues for assuring quality of service (QoS) in multimedia applications spread across the Internet. Traffic prediction and bandwidth allocation are crucial parts of QoS, and artificial neural networks are widely used in this kind of application. We therefore present the results of neural networks for video traffic prediction using both MPEG-4 and H.264 trace files.
APA, Harvard, Vancouver, ISO, and other styles
40

Ramezani, Mohsen, and Farzin Yaghmaee. "Retrieving Human Action by Fusing the Motion Information of Interest Points." International Journal on Artificial Intelligence Tools 27, no. 03 (2018): 1850008. http://dx.doi.org/10.1142/s0218213018500082.

Full text
Abstract:
In response to the fast propagation of videos on the Internet, Content-Based Video Retrieval (CBVR) was introduced to help users find their desired items. Since most videos concern humans, human action retrieval was introduced as a new topic in CBVR. Most human action retrieval methods represent an action by extracting and describing its local features as more reliable than global ones; however, these methods are complex and not very accurate. In this paper, a low-complexity representation method that more accurately describes extracted local features is proposed. In this method, each video is represented independently from other videos. To this end, the motion information of each extracted feature is described by the directions and sizes of its movements. In this system, the correspondence between the directions and sizes of the movements is used to compare videos. Finally, videos that correspond best with the query video are delivered to the user. Experimental results illustrate that this method can outperform state-of-the-art methods.
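The idea of describing each interest point's motion by the directions and sizes of its movements can be illustrated with a simple direction-binned histogram weighted by movement size. The bin count, weighting, and histogram-intersection comparison below are illustrative choices for a sketch, not the paper's actual descriptor.

```python
import math
from collections import Counter

def motion_histogram(vectors, n_dirs=8):
    """Quantize (dx, dy) motion vectors of interest points into direction bins,
    accumulating movement size per bin, then normalize to sum to 1."""
    hist = Counter()
    for dx, dy in vectors:
        size = math.hypot(dx, dy)
        if size == 0:
            continue
        angle = math.atan2(dy, dx) % (2 * math.pi)
        bin_idx = int(angle / (2 * math.pi / n_dirs)) % n_dirs
        hist[bin_idx] += size
    total = sum(hist.values()) or 1.0
    return [hist[i] / total for i in range(n_dirs)]

def similarity(h1, h2):
    """Histogram intersection: higher means better correspondence."""
    return sum(min(a, b) for a, b in zip(h1, h2))

query = motion_histogram([(1, 0), (2, 0), (1, 1)])
candidate = motion_histogram([(3, 0), (1, 1)])
print(similarity(query, candidate))
```

Videos whose histograms intersect most with the query's would be returned first, mirroring the correspondence-based comparison the abstract describes.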
APA, Harvard, Vancouver, ISO, and other styles
41

Park, Eunhee, and Yu-Ping Chang. "Using Digital Media to Empower Adolescents in Smoking Prevention: Mixed Methods Study." JMIR Pediatrics and Parenting 3, no. 1 (2020): e13031. http://dx.doi.org/10.2196/13031.

Full text
Abstract:
Background There is a critical need for effective health education methods for adolescent smoking prevention. The coproduction of antismoking videos shows promising results for adolescent health education. Objective This study explored the feasibility of a smoking prevention program using the coproduction of antismoking videos in order to empower adolescents in smoking prevention and tobacco control. A smoking prevention program based on coproduction of antismoking videos over eight sessions was implemented in a low-income neighborhood. Methods A mixed methods design with a concurrent embedded approach was used. In total, 23 adolescents participated in the program. During the prevention program, small groups of participants used video cameras and laptops to produce video clips containing antismoking messages. Quantitative data were analyzed using the Wilcoxon signed-rank test to examine changes in participants’ psychological empowerment levels between pre- and postintervention; qualitative interview data were analyzed using content analysis. Results Pre- and postcomparison data revealed that participants’ psychological empowerment levels were significantly enhanced for all three domains—intrapersonal, interactional, and behavioral—of psychological empowerment (P<.05). Interviews confirmed that the coproduction of antismoking videos is feasible in empowering participants, by supporting nonsmoking behaviors and providing them with an opportunity to help build a smoke-free community. Conclusions Both quantitative and qualitative data supported the feasibility of the coproduction of antismoking videos in empowering adolescents in smoking prevention. Coproduction of antismoking videos with adolescents was a beneficial health education method.
APA, Harvard, Vancouver, ISO, and other styles
42

Ivanov, D. I. "METHODS OF IMAGE RECOGNITION IN A VIDEO STREAM." Applied Mathematics and Fundamental Informatics 8, no. 1 (2021): 042–49. http://dx.doi.org/10.25206/2311-4908-2021-8-1-42-49.

Full text
Abstract:
The article examines the problem of automatic object recognition in a video stream treated as a sequence of digital images. Algorithms for recognizing and tracking objects in the video stream are considered, methods used in video processing are analyzed, and the use of machine learning tools in working with video is described. The two main approaches to recognizing moving objects in a video stream are investigated: the detection-based approach and the tracking-based approach. Arguments are made in favour of the tracking-based approach, and modern methods of tracking objects in a video stream are considered, in particular: the Online Boosting Tracker, one of the first object tracking algorithms with high tracking accuracy; the MIL Tracker (Multiple Instance Learning Tracker), which develops the idea of supervised learning and the Online Boosting algorithm; and the KCF Tracker (Kernelized Correlation Filters Tracker), a method that uses the mathematical properties of overlapping regions of positive examples. Finally, the advantages and disadvantages of the considered recognition and tracking methods are highlighted for various applications.
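The trackers surveyed in this article (Boosting, MIL, KCF) are far more sophisticated, but the core idea of tracking-by-search can be sketched with a naive baseline: keep a template of the object and, in each new frame, exhaustively search a small window around the previous position for the patch with the lowest sum of squared differences. The frames and template below are toy data.

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equally sized patches."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def extract(frame, x, y, w, h):
    """Crop a w-by-h patch whose top-left corner is at (x, y)."""
    return [row[x:x + w] for row in frame[y:y + h]]

def track(frame, template, prev_x, prev_y, search=2):
    """Search a window around the previous position; return the best match."""
    h, w = len(template), len(template[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = prev_x + dx, prev_y + dy
            if x < 0 or y < 0 or y + h > len(frame) or x + w > len(frame[0]):
                continue
            cost = ssd(extract(frame, x, y, w, h), template)
            if best is None or cost < best[0]:
                best = (cost, x, y)
    return best[1], best[2]

# A bright 2x2 blob moves one pixel to the right between frames.
frame1 = [[0, 0, 0, 0, 0],
          [0, 9, 9, 0, 0],
          [0, 9, 9, 0, 0],
          [0, 0, 0, 0, 0]]
frame2 = [[0, 0, 0, 0, 0],
          [0, 0, 9, 9, 0],
          [0, 0, 9, 9, 0],
          [0, 0, 0, 0, 0]]
template = extract(frame1, 1, 1, 2, 2)
print(track(frame2, template, 1, 1))  # → (2, 1)
```

KCF replaces this brute-force search with a correlation filter evaluated efficiently in the Fourier domain, which is what makes it practical at video rates.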
APA, Harvard, Vancouver, ISO, and other styles
43

Wilhoit, Elizabeth D. "Photo and Video Methods in Organizational and Managerial Communication Research." Management Communication Quarterly 31, no. 3 (2017): 447–66. http://dx.doi.org/10.1177/0893318917704511.

Full text
Abstract:
In this article, I introduce photo and video methods (PVM) to organizational communication. PVM have rarely been used in organizational communication research but offer advantages through providing a shared anchor around which researchers and participants can communicate, adding meaning through the framing and act of taking pictures or videos, and incorporating more senses. These additions to the research process offer new ways for participants and researchers to communicate. I detail two specific methods (photo-elicitation interviews and participant viewpoint ethnography) to illustrate some of the advantages of PVM relative to other methods. Through these examples, my goal is to inspire other scholars to see where PVM might be applicable to their research, adding differently supported theorizing to organizational communication.
APA, Harvard, Vancouver, ISO, and other styles
44

Spoto, Cheryl G., and A. J. G. Babu. "Highlighting in Alphanumeric Displays: The Efficacy of Monochrome Methods." Proceedings of the Human Factors Society Annual Meeting 33, no. 5 (1989): 370–74. http://dx.doi.org/10.1177/154193128903300530.

Full text
Abstract:
Highlighting is used to attract attention to displayed information. Prior work has called into question the efficacy of reverse video as a highlighting method in alphanumeric displays. Brightness is highly recommended in guideline documents, but no empirical study of its efficacy in alphanumeric displays has been published. An experiment was conducted to investigate the efficacy of these methods in monochromatic, alphanumeric displays. Search time was significantly faster for reverse video than for high intensity highlighting. Reverse video may attract attention better than high intensity video, and heavy use of reverse video may aid in the systematic search of unhighlighted items. The results are analyzed in terms of a mathematical model.
APA, Harvard, Vancouver, ISO, and other styles
45

Seam, Nitin, Jeremy B. Richards, Patricia A. Kritek, et al. "Design and Implementation of a Peer-Reviewed Medical Education Video Competition: The Best of American Thoracic Society Video Lecture Series." Journal of Graduate Medical Education 11, no. 5 (2019): 592–96. http://dx.doi.org/10.4300/jgme-d-19-00071.1.

Full text
Abstract:
ABSTRACT Background Video is an increasingly popular medium for consuming online content, and video-based education is effective for knowledge acquisition and development of technical skills. Despite the increased interest in and use of video in medical education, there remains a need to develop accurate and trusted collections of peer-reviewed videos for medical learners. Objective We developed the first professional society-based, open-access library of crowd-sourced and peer-reviewed educational videos for medical learners and health care providers. Methods A comprehensive peer-review process of medical education videos was designed, implemented, reviewed, and modified using a plan-do-study-act approach to ensure optimal accuracy and effective pedagogy, while emphasizing modern teaching methods and brevity. The number of submissions and views were tracked as metrics of interest and engagement of medical learners and educators. Results The Best of American Thoracic Society Video Lecture Series (BAVLS) was launched in 2016. Total video submissions for 2016, 2017, and 2018 were 26, 55, and 52, respectively. Revisions to the video peer-review process were made after each submission cycle. By 2017, the total views of BAVLS videos on www.thoracic.org and YouTube were 9100 and 17 499, respectively. By 2018, total views were 77 720 and 152 941, respectively. BAVLS has achieved global reach, with views from 89 countries. Conclusions The growth in submissions, content diversity, and viewership of BAVLS is a result of an intentional and evolving review process that emphasizes creativity and innovation in video-based pedagogy. BAVLS can serve as an example for developing institutional or society-based video platforms.
APA, Harvard, Vancouver, ISO, and other styles
46

Rajasekhar, H., and B. Prabhakara Rao. "An Efficient Video Compression Technique Using Watershed Algorithm and JPEG-LS Encoding." Journal of Computational and Theoretical Nanoscience 13, no. 10 (2016): 6671–79. http://dx.doi.org/10.1166/jctn.2016.5613.

Full text
Abstract:
In a previous video compression method, videos were segmented using a novel motion estimation algorithm aided by the watershed method, but the compression ratio (CR) achieved with that algorithm was not adequate, and its performance in the encoding and decoding processes needed improvement. Most video compression methods use encoding techniques such as JPEG, run-length, Huffman and LSK encoding, and improving the encoding technique improves the compression result. To overcome these drawbacks, we propose a new video compression method with a well-known encoding technique. In the proposed method, the motion vectors of the input video frames are estimated by applying watershed and ARS-ST (Adaptive Rood Search with Spatio-Temporal) algorithms. The vector blocks with high difference values are then encoded with the JPEG-LS encoder. JPEG-LS has excellent coding and computational efficiency, outperforming JPEG2000 and many other image compression methods; the algorithm has relatively low complexity and storage requirements, and its compression capability is efficient. To obtain the compressed video, the encoded blocks are subsequently decoded by JPEG-LS. The implementation results show the effectiveness of the proposed method in compressing a large number of videos. Its performance is evaluated by comparing it with existing video compression techniques; the comparison shows that the proposed method achieves a higher compression ratio and PSNR on the test videos than the existing techniques.
APA, Harvard, Vancouver, ISO, and other styles
47

Špilka, Radim. "Learner-Content Interaction in Flipped Classroom Model." International Journal of Information and Communication Technologies in Education 4, no. 3 (2015): 53–61. http://dx.doi.org/10.1515/ijicte-2015-0014.

Full text
Abstract:
The article deals with the interaction of elementary school students with online educational videos. A half-year survey was conducted in eighth-grade mathematics lessons. The experimental teaching used the flipped classroom model, in which students watch an instructional video before the lesson and the teacher then uses activating teaching methods during class that build on the video's content. It turned out that there is a correlation between the average time students spent watching a video and the video's length: students watched a video for about three times its length. The number of playbacks of each educational video was also monitored; it shows a slightly declining and fluctuating trend. For some videos, especially towards the end of the experiment, the playback counts are low even though the measured correlation is preserved. This suggests that, by the end of the experiment, some students had stopped watching the educational videos or had accelerated video playback.
APA, Harvard, Vancouver, ISO, and other styles
48

Han, Zhisong, Yaling Liang, Zengqun Chen, and Zhiheng Zhou. "A two-stream network with joint spatial-temporal distance for video-based person re-identification." Journal of Intelligent & Fuzzy Systems 39, no. 3 (2020): 3769–81. http://dx.doi.org/10.3233/jifs-192067.

Full text
Abstract:
Video-based person re-identification aims to match videos of pedestrians captured by non-overlapping cameras. Video provides spatial information and temporal information. However, most existing methods do not combine these two types of information well and ignore that they are of different importance in most cases. To address the above issues, we propose a two-stream network with a joint distance metric for measuring the similarity of two videos. The proposed two-stream network has several appealing properties. First, the spatial stream focuses on multiple parts of a person and outputs robust local spatial features. Second, a lightweight and effective temporal information extraction block is introduced in video-based person re-identification. In the inference stage, the distance of two videos is measured by the weighted sum of spatial distance and temporal distance. We conduct extensive experiments on four public datasets, i.e., MARS, PRID2011, iLIDS-VID and DukeMTMC-VideoReID to show that our proposed approach outperforms existing methods in video-based person re-ID.
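The inference-stage fusion this abstract describes, measuring the distance between two videos as a weighted sum of a spatial distance and a temporal distance, reduces to a one-line combination. The weight and the gallery distances below are invented for illustration; the paper determines its own weighting.

```python
def fused_distance(spatial_d, temporal_d, alpha=0.7):
    """Weighted sum of spatial and temporal distances between two videos.
    `alpha` is an illustrative weight reflecting that spatial appearance
    usually matters more; it is not the paper's value."""
    return alpha * spatial_d + (1 - alpha) * temporal_d

# Hypothetical (spatial, temporal) distances from a query to gallery videos.
gallery = {"video_1": (0.20, 0.60), "video_2": (0.45, 0.10), "video_3": (0.80, 0.90)}
ranked = sorted(gallery, key=lambda v: fused_distance(*gallery[v]))
print(ranked[0])  # → video_1
```

Ranking the gallery by the fused distance is exactly how a re-identification system would return its best match for the query pedestrian.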
APA, Harvard, Vancouver, ISO, and other styles
49

Megala, G., et al. "State-of-the-Art in Video Processing: Compression, Optimization and Retrieval." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (2021): 1256–72. http://dx.doi.org/10.17762/turcomat.v12i5.1793.

Full text
Abstract:
Video compression plays a vital role in modern social media networking, with its plethora of multimedia applications. It enables the transmission medium to transfer videos competently and resources to store video efficiently. Nowadays, high-resolution video data are transferred through communication channels at high bit rates in order to send multiple compressed videos. There have been many advances in the transmission capability and efficient storage of compressed video, and compression is the primary task involved in multimedia services. This paper summarizes the compression standards and describes the main concepts involved in video coding. Video compression converts a large raw video sequence into a small, compact one, achieving a high compression ratio with good perceptual video quality; removing redundant information is the main task in compressing a video sequence. A survey of various block matching algorithms, quantization and entropy coding is presented. It is found that many of the methods have high computational complexity and need improvement through optimization.
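Block matching, one of the techniques this survey covers, estimates a motion vector for each block by finding the best-matching block in a reference frame. A minimal full-search (exhaustive) version using the sum of absolute differences is sketched below on toy frames; real coders use faster search patterns such as the adaptive rood search.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block(frame, x, y, n):
    """Crop an n-by-n block whose top-left corner is at (x, y)."""
    return [row[x:x + n] for row in frame[y:y + n]]

def full_search(ref, cur, x, y, n=2, p=1):
    """Exhaustive block matching: the motion vector (dx, dy), within search
    range p, that minimizes SAD for the current block at (x, y)."""
    target = block(cur, x, y, n)
    best = (float("inf"), 0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            rx, ry = x + dx, y + dy
            if rx < 0 or ry < 0 or ry + n > len(ref) or rx + n > len(ref[0]):
                continue
            cost = sad(block(ref, rx, ry, n), target)
            if cost < best[0]:
                best = (cost, dx, dy)
    return best[1], best[2]

# A bright 2x2 blob sits at (1, 1) in the reference and (2, 1) in the current frame.
ref = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
cur = [[0, 0, 0, 0],
       [0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 0, 0]]
print(full_search(ref, cur, 2, 1))  # → (-1, 0)
```

The encoder then transmits only the motion vector and the (here zero) residual instead of the raw block, which is where the compression gain comes from.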
APA, Harvard, Vancouver, ISO, and other styles
50

Fagel, Sascha. "Merging methods of speech visualization." ZAS Papers in Linguistics 40 (January 1, 2005): 19–32. http://dx.doi.org/10.21248/zaspil.40.2005.255.

Full text
Abstract:
The author presents MASSY, the MODULAR AUDIOVISUAL SPEECH SYNTHESIZER. The system combines two approaches of visual speech synthesis. Two control models are implemented: a (data based) di-viseme model and a (rule based) dominance model where both produce control commands in a parameterized articulation space. Analogously two visualization methods are implemented: an image based (video-realistic) face model and a 3D synthetic head. Both face models can be driven by both the data based and the rule based articulation model.
 
The high-level visual speech synthesis generates a sequence of control commands for the visible articulation. For every virtual articulator (articulation parameter) the 3D synthetic face model defines a set of displacement vectors for the vertices of the 3D objects of the head. The vertices of the 3D synthetic head are then moved by linear combinations of these displacement vectors to visualize articulation movements. For the image based video synthesis a single reference image is deformed to fit the facial properties derived from the control commands. Facial feature points and facial displacements have to be defined for the reference image. The algorithm can also use an image database with appropriately annotated facial properties; an example database was built automatically from video recordings. Both the 3D synthetic face and the image based face generate visual speech that is capable of increasing the intelligibility of audible speech.
 
 Other well known image based audiovisual speech synthesis systems like MIKETALK and VIDEO REWRITE concatenate pre-recorded single images or video sequences, respectively. Parametric talking heads like BALDI control a parametric face with a parametric articulation model. The presented system demonstrates the compatibility of parametric and data based visual speech synthesis approaches.
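The linear combination of per-parameter displacement vectors described above is essentially a blendshape scheme, and can be sketched on a toy two-vertex "mesh". The parameter names, displacement fields, and weights below are invented for illustration, not MASSY's actual articulation space.

```python
# Neutral vertex positions and one displacement field per articulation parameter
# (each field gives a per-vertex (dx, dy, dz) displacement; hypothetical values).
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
jaw_open = [(0.0, -0.5, 0.0), (0.0, -0.3, 0.0)]
lip_round = [(0.1, 0.0, 0.2), (-0.1, 0.0, 0.2)]

def articulate(neutral, params):
    """Move each vertex by the weighted linear combination of the displacement
    fields, where `params` is a list of (weight, field) pairs."""
    out = []
    for i, (x, y, z) in enumerate(neutral):
        for w, field in params:
            dx, dy, dz = field[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        out.append((x, y, z))
    return out

# A control command: jaw 80% open, lips 50% rounded.
posed = articulate(neutral, [(0.8, jaw_open), (0.5, lip_round)])
print(posed)
```

A control-command sequence from the high-level synthesis would simply vary the weights over time, producing continuous articulation movements from a fixed set of displacement fields.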
APA, Harvard, Vancouver, ISO, and other styles