Academic literature on the topic 'Consecutive frames'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Consecutive frames.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Consecutive frames"

1

Шевяков, Ю. І., В. В. Ларін, Є. Л. Казаков, and Ахмед Абдалла. "The video processing features research in computer systems and special purpose networks." Системи озброєння і військова техніка, no. 4(64) (December 17, 2020): 126–32. http://dx.doi.org/10.30748/soivt.2020.64.16.

Abstract:
For a typical low-complexity video sequence, the weight of each P-frame in the stream is approximately one third of the I-frame weight. However, taking into account the number of P-frames in the group, they make the main contribution to the total video data amount. Therefore, the possibility of upgrading coding methods for P-frames is considered, based on preliminary identification of block types with the subsequent formation of block code structures. As the correlation coefficient between adjacent frames increases, the compression ratio of the differential-represented frame's binary mask increases …
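As a rough illustration of the two quantities this abstract relates, here is a minimal NumPy sketch (not the authors' code; the function names and the zero change-tolerance are illustrative assumptions) that computes the correlation coefficient between two consecutive frames and the binary mask of their difference:

```python
import numpy as np

def interframe_correlation(prev_frame: np.ndarray, curr_frame: np.ndarray) -> float:
    """Pearson correlation coefficient between two consecutive grayscale frames."""
    return float(np.corrcoef(prev_frame.ravel(), curr_frame.ravel())[0, 1])

def differential_binary_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                             tol: int = 0) -> np.ndarray:
    """1 where a pixel changed between the two frames, 0 elsewhere."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > tol).astype(np.uint8)

# Intuition matching the abstract: the higher the inter-frame correlation,
# the sparser this mask is and the better it compresses.
```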
2

El-Khamy, S. E., E. M. Saad, M. M. Hadhoud, M. I. Dessouky, A. M. Abbas, and F. E. Abd El-Samie. "A Modified Wiener Filter for Multi-Frame Restoration of Blurred and Noisy Images." International Journal of Information Acquisition 2, no. 2 (2005): 123–35. http://dx.doi.org/10.1142/s0219878905000490.

Abstract:
This paper proposes the use of a modified Wiener digital restoration technique for multi-frame image sequences that are degraded by both blur and noise. The proposed multi-channel Wiener restoration filter accounts for both intra-frame (spatial) and inter-frame (temporal) correlation. A modified cross-correlation formula between consecutive frames, which directly utilizes the motion vectors in the calculation of correlation among frames, is derived and implemented in a multi-frame Wiener filter. Our modification estimates the motion vectors (horizontal and vertical) between consecutive frames …
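For orientation only, the classical single-frame frequency-domain Wiener restoration filter that multi-frame schemes of this kind generalize (the paper's modified multi-channel filter additionally folds motion-compensated inter-frame correlation into the cross-correlation terms, which is not reproduced here):

```latex
W(u,v) \;=\; \frac{H^{*}(u,v)}{\lvert H(u,v)\rvert^{2} \;+\; S_{\eta}(u,v)/S_{f}(u,v)}
```

Here H is the blur transfer function, and S_eta and S_f are the noise and original-image power spectra.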
3

Etoh, T. G., D. Poggemann, G. Kreider, et al. "An image sensor which captures 100 consecutive frames at 1 000 000 frames/s." IEEE Transactions on Electron Devices 50, no. 1 (2003): 144–51. http://dx.doi.org/10.1109/ted.2002.806474.

4

Yan, Bo, Chuming Lin, and Weimin Tan. "Frame and Feature-Context Video Super-Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5597–604. http://dx.doi.org/10.1609/aaai.v33i01.33015597.

Abstract:
For video super-resolution, current state-of-the-art approaches either process multiple low-resolution (LR) frames to produce each output high-resolution (HR) frame separately in a sliding window fashion or recurrently exploit the previously estimated HR frames to super-resolve the following frame. The main weaknesses of these approaches are: 1) separately generating each output frame may obtain high-quality HR estimates while resulting in unsatisfactory flickering artifacts, and 2) combining previously generated HR frames can produce temporally consistent results in the case of short information …
5

Mehmetcik, Erdal, and Tolga Ciloglu. "Speech enhancement by maintaining phase continuity between consecutive analysis frames." Journal of the Acoustical Society of America 132, no. 3 (2012): 1972. http://dx.doi.org/10.1121/1.4755273.

6

Liu, Yu-Lun, Yi-Tung Liao, Yen-Yu Lin, and Yung-Yu Chuang. "Deep Video Frame Interpolation Using Cyclic Frame Generation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8794–802. http://dx.doi.org/10.1609/aaai.v33i01.33018794.

Abstract:
Video frame interpolation algorithms predict intermediate frames to produce videos with higher frame rates and smooth view transitions given two consecutive frames as inputs. We propose that synthesized frames are more reliable if they can be used to reconstruct the input frames with high quality. Based on this idea, we introduce a new loss term, the cycle consistency loss. The cycle consistency loss can better utilize the training data to not only enhance the interpolation results, but also maintain the performance better with less training data. It can be integrated into any frame interpolation …
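A schematic PyTorch-style sketch of a cycle consistency term of the kind described; the interpolation network `net`, the particular cycle, and the L1 reconstruction metric are illustrative assumptions rather than the authors' implementation:

```python
import torch.nn.functional as F

def cycle_consistency_loss(net, frame0, frame1, frame2):
    """net(a, b) synthesizes the frame midway between frames a and b."""
    mid_01 = net(frame0, frame1)          # synthesized frame between 0 and 1
    mid_12 = net(frame1, frame2)          # synthesized frame between 1 and 2
    frame1_rec = net(mid_01, mid_12)      # should reconstruct input frame 1
    return F.l1_loss(frame1_rec, frame1)  # small loss => reliable synthesis
```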
7

Ren, Zhuli, Liguan Wang, and Lin Bi. "Robust GICP-Based 3D LiDAR SLAM for Underground Mining Environment." Sensors 19, no. 13 (2019): 2915. http://dx.doi.org/10.3390/s19132915.

Abstract:
Unmanned mining is one of the most effective ways to address mine safety and low efficiency; however, accurate localization and mapping in underground mining environments remain a key challenge. A novel graph simultaneous localization and mapping (SLAM) optimization method is proposed, which is based on Generalized Iterative Closest Point (GICP) three-dimensional (3D) point cloud registration between consecutive frames, between consecutive key frames, and between loop frames, and is constrained by the roadway plane and loop closure. GICP-based 3D point cloud registration between consecutive frames and consecutive key frames …
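A minimal Open3D sketch of pairwise registration between consecutive LiDAR frames, the building block that such graph-SLAM pipelines chain into frame-to-frame, keyframe, and loop-closure constraints. Point-to-plane ICP is used here as a stand-in for GICP, and the voxel size and distance threshold are illustrative assumptions:

```python
import numpy as np
import open3d as o3d

def register_consecutive_frames(prev_cloud, curr_cloud, voxel=0.2, max_dist=1.0):
    """Estimate the rigid transform aligning curr_cloud onto prev_cloud."""
    src = curr_cloud.voxel_down_sample(voxel)
    tgt = prev_cloud.voxel_down_sample(voxel)
    for pc in (src, tgt):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # 4x4 pose increment between the two frames
```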
8

Fan, Kun, Chungin Joung, and Seungjun Baek. "Sequence-to-Sequence Video Prediction by Learning Hierarchical Representations." Applied Sciences 10, no. 22 (2020): 8288. http://dx.doi.org/10.3390/app10228288.

Abstract:
Video prediction, which maps a sequence of past video frames into realistic future video frames, is a challenging task because it is difficult to generate realistic frames and model the coherent relationship between consecutive video frames. In this paper, we propose a hierarchical sequence-to-sequence prediction approach to address this challenge. We present an end-to-end trainable architecture in which the frame generator automatically encodes input frames into different levels of latent Convolutional Neural Network (CNN) features, and then recursively generates future frames conditioned on the …
9

Xing, Jinbo, Wenbo Hu, Yuechen Zhang, and Tien-Tsin Wong. "Flow-aware synthesis: A generic motion model for video frame interpolation." Computational Visual Media 7, no. 3 (2021): 393–405. http://dx.doi.org/10.1007/s41095-021-0208-x.

Abstract:
A popular and challenging task in video research, frame interpolation aims to increase the frame rate of video. Most existing methods employ a fixed motion model, e.g., linear, quadratic, or cubic, to estimate the intermediate warping field. However, such fixed motion models cannot well represent the complicated non-linear motions in the real world or rendered animations. Instead, we present an adaptive flow prediction module to better approximate the complex motions in video. Furthermore, interpolating just one intermediate frame between consecutive input frames may be insufficient for …
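For context, the fixed motion models the abstract contrasts with its adaptive flow prediction, written as the displacement d(t) of a pixel at intermediate time t in [0, 1] between two consecutive input frames (v and a denote the per-pixel velocity and acceleration estimated from neighbouring frames):

```latex
\text{linear:}\quad \mathbf{d}(t) = t\,\mathbf{v}
\qquad\qquad
\text{quadratic:}\quad \mathbf{d}(t) = t\,\mathbf{v} + \tfrac{1}{2}\,t^{2}\,\mathbf{a}
```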
10

Borse, Janhavi H., Dipti D. Patil, and Vinod Kumar. "Tracking Keypoints from Consecutive Video Frames Using CNN Features for Space Applications." Tehnički glasnik 15, no. 1 (2021): 11–17. http://dx.doi.org/10.31803/tg-20210204161210.

Abstract:
Hard time constraints in space missions bring in the problem of fast video processing for numerous autonomous tasks. Video processing involves the separation of distinct image frames, fetching image descriptors, applying different machine learning algorithms for object detection, obstacle avoidance, and many more tasks involved in the automatic maneuvering of a spacecraft. These tasks require the most informative descriptions of an image within the time constraints. Tracking these informative points from consecutive image frames is needed in flow estimation applications. Classical algorithms …
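A compact OpenCV sketch of the kind of classical baseline such CNN-feature trackers are compared against: corners detected in one frame and tracked into the next with pyramidal Lucas-Kanade optical flow (all parameter values are illustrative assumptions):

```python
import cv2
import numpy as np

def track_keypoints(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Track corner keypoints from one grayscale frame into the next."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:                       # no trackable corners found
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None,
                                                 winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1              # keep only successfully tracked points
    return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
```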

Dissertations / Theses on the topic "Consecutive frames"

1

Reza-Alikhani, Hamid-Reza. "Motion compensation for image compression : pel-recursive motion estimation algorithm." Thesis, Loughborough University, 2002. https://dspace.lboro.ac.uk/2134/33721.

Abstract:
In motion pictures there is a certain amount of redundancy between consecutive frames. These redundancies can be exploited by using interframe prediction techniques. To further enhance the efficiency of interframe prediction, motion estimation and compensation, various motion compensation techniques can be used. There are two distinct techniques for motion estimation: block matching and pel-recursive. Block matching has been widely used, as it produces a better signal-to-noise ratio, or a lower bit rate for transmission, than the pel-recursive method. In this thesis, various pel-recursive motion estimation …
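As background for the thesis topic, the classical pel-recursive displacement update in its Netravali-Robbins form: the estimate at each pel is corrected by a gradient step on the squared displaced frame difference (DFD), with update gain epsilon. This is the textbook formulation, not necessarily the exact variant studied in the thesis:

```latex
\mathrm{DFD}(\mathbf{x},\mathbf{d}) = I(\mathbf{x},t) - I(\mathbf{x}-\mathbf{d},\,t-1),
\qquad
\hat{\mathbf{d}}^{\,i+1} = \hat{\mathbf{d}}^{\,i}
 - \varepsilon\,\mathrm{DFD}\!\bigl(\mathbf{x},\hat{\mathbf{d}}^{\,i}\bigr)\,
   \nabla I\!\bigl(\mathbf{x}-\hat{\mathbf{d}}^{\,i},\,t-1\bigr)
```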
2

He, Xiaochen. "Feature extraction from two consecutive traffic images for 3D wire frame reconstruction of vehicle." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B3786791X.

3

He, Xiaochen, and 何小晨. "Feature extraction from two consecutive traffic images for 3D wire frame reconstruction of vehicle." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B3786791X.


Book chapters on the topic "Consecutive frames"

1

M, Naveenkumar, Sriharsha K. V., and Vadivel A. "Moving Object Detection and Tracking Based on the Contour Extraction and Centroid Representation." In Advanced Methodologies and Technologies in Artificial Intelligence, Computer Simulation, and Human-Computer Interaction. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7368-5.ch012.

Abstract:
This chapter presents a novel approach for moving object detection and tracking based on contour extraction and centroid representation (CECR). Firstly, two consecutive frames are read from the video, and they are converted into grayscale. Next, the absolute difference is calculated between them and the result frame is converted into binary by applying gray threshold technique. The binary frame is segmented using contour extraction algorithm. The centroid representation is used for motion tracking. In the second stage of experiment, initially object is detected by using CECR and motion of each track is estimated by Kalman filter. Experimental results show that the proposed method can robustly detect and track the moving object.
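A minimal OpenCV sketch of the pipeline this chapter describes: grayscale conversion, absolute difference of two consecutive frames, thresholding to binary, contour extraction, and centroid computation. Otsu thresholding and the minimum-area filter are illustrative assumptions standing in for the chapter's gray-threshold step:

```python
import cv2

def detect_moving_objects(frame_a, frame_b, min_area=100):
    """Return centroids of regions that changed between two consecutive frames."""
    g1 = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)                       # absolute frame difference
    _, binary = cv2.threshold(diff, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > min_area:                      # ignore tiny blobs
            centroids.append((m["m10"] / m["m00"],   # centroid x
                              m["m01"] / m["m00"]))  # centroid y
    return centroids
```

The returned centroids can then be associated across frames (e.g. with a Kalman filter, as in the chapter's second stage) to form motion tracks.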
2

M, Naveenkumar, Sriharsha K. V., and Vadivel A. "Moving Object Detection and Tracking Based on the Contour Extraction and Centroid Representation." In Encyclopedia of Information Science and Technology, Fourth Edition. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2255-3.ch019.

Abstract:
This chapter presents a novel approach for moving object detection and tracking based on the Contour Extraction and Centroid Representation (CECR). Firstly, two consecutive frames are read from the video and they are converted into gray scale. Next, the absolute difference is calculated between them and the result frame is converted into binary by applying gray threshold technique. The binary frame is segmented using contour extraction algorithm. The centroid representation is used for motion tracking. In the second stage of experiment, initially object is detected by using CECR and motion of each track is estimated by kalman filter. Experimental results show that the proposed method can robustly detect and track the moving object.
3

Khoramshahi, Ehsan, Juha Hietaoja, Anna Valros, Jinhyeon Yun, and Matti Pastell. "Image Quality Assessment and Outliers Filtering in an Image-Based Animal Supervision System." In Biometrics. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-0983-7.ch049.

Abstract:
This paper presents a probabilistic framework for the image quality assessment (QA), and filtering of outliers, in an image-based animal supervision system (asup). The proposed framework recognizes asup's imperfect frames in two stages. The first stage deals with the similarity analysis of the same-class distributions. The objective of this stage is to maximize the separability measures by defining a set of similarity indicators (SI) under the condition that the number of permissible values for them is restricted to be relatively low. The second stage, namely faulty frame recognition (FFR), deals with asup's QA training and real-time quality assessment (RTQS). In RTQS, decisions are made based on a real-time quality assessment mechanism such that the majority of the defected frames are removed from the consecutive sub routines that calculate the movements. The underlying approach consists of a set of SI indexes employed in a simple Bayesian inference model. The results confirm that a significant amount of defected frames can be efficiently classified by this approach. The performance of the proposed technique is demonstrated by the classification on a cross-validation set of mixed high and low quality frames. The classification shows a true positive rate of 88.6% while the false negative rate is only about 2.5%.
4

Khoramshahi, Ehsan, Juha Hietaoja, Anna Valros, Jinhyeon Yun, and Matti Pastell. "Image Quality Assessment and Outliers Filtering in an Image-Based Animal Supervision System." In Veterinary Science. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5640-4.ch003.

Abstract:
This paper presents a probabilistic framework for the image quality assessment (QA), and filtering of outliers, in an image-based animal supervision system (asup). The proposed framework recognizes asup's imperfect frames in two stages. The first stage deals with the similarity analysis of the same-class distributions. The objective of this stage is to maximize the separability measures by defining a set of similarity indicators (SI) under the condition that the number of permissible values for them is restricted to be relatively low. The second stage, namely faulty frame recognition (FFR), deals with asup's QA training and real-time quality assessment (RTQS). In RTQS, decisions are made based on a real-time quality assessment mechanism such that the majority of the defected frames are removed from the consecutive sub routines that calculate the movements. The underlying approach consists of a set of SI indexes employed in a simple Bayesian inference model. The results confirm that a significant amount of defected frames can be efficiently classified by this approach. The performance of the proposed technique is demonstrated by the classification on a cross-validation set of mixed high and low quality frames. The classification shows a true positive rate of 88.6% while the false negative rate is only about 2.5%.
5

Edvardsen, Thor, Lars Gunnar Klaeboe, Ewa Szymczyk, and Jarosław D. Kasprzak. "Assessment of myocardial function by speckle-tracking echocardiography." In The ESC Textbook of Cardiovascular Imaging, edited by José Luis Zamorano, Jeroen J. Bax, Juhani Knuuti, et al. Oxford University Press, 2021. http://dx.doi.org/10.1093/med/9780198849353.003.0007.

Abstract:
Myocardial deformation or strain is the universal property of contracting cardiac muscle. Deformation is defined in physics as relative change of length (and is therefore unitless and usually given as percentage) and in cardiac imaging it is thus algebraically negative for shortening or positive for thickening. There are several definitions of strain—Lagrangian strain refers to a fixed baseline distance and Eulerian (or natural) strain—to a dynamically changing reference length, representing a time integral of strain rate (which can be obtained by tissue Doppler). Measurements of strains are usually obtained by greyscale image quantification modality—speckle-tracking echocardiography (STE) which analyses myocardial motion by tracking and matching naturally occurring markers of myocardial texture, described as speckles. Echocardiographic speckles represent interference pattern of subtle myocardial scatters and can be followed from frame to frame by dedicated software to define the displacement of the myocardium within the interval between consecutive frames (inverse of frame rate).
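The two strain definitions the chapter distinguishes, for a myocardial segment of reference (end-diastolic) length L0 and instantaneous length L(t), together with their link to the strain rate SR obtainable from tissue Doppler:

```latex
\varepsilon_{\text{Lagrangian}}(t) = \frac{L(t)-L_{0}}{L_{0}},
\qquad
\varepsilon_{\text{natural}}(t) = \int_{L_{0}}^{L(t)}\frac{dL}{L} = \ln\frac{L(t)}{L_{0}}
 = \int_{t_{0}}^{t}\mathrm{SR}(\tau)\,d\tau
```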
6

Korikov, Anatoly, and Oleg Krivtsov. "Development of Model and Software for Tracking Head Avatars in E-Learning Systems." In Handbook of Research on Estimation and Control Techniques in E-Learning Systems. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9489-7.ch031.

Abstract:
In this chapter, we focus on the theoretical basis of the method of tracking a person's head, based on the construction of its geometric texture patterns, and finding the parameters of its movement between pairs of consecutive video frames. The task of tracking the position of the geometric head model is formulated as the problem of determining the parameters of the model (translation and rotation) so that the projection of a 3D model of the head onto the video frame coincides with the real image of the head in that frame. To solve this problem, we use an efficient algorithm for infrared imaging. The derived expressions are applied to the movement of the head as a three-dimensional body with six degrees of freedom, using perspective projection, and to avatar modeling following Prof. Vardan Mkrttchian's recent publications with IGI Global (2011-2015).
7

Bhaumik, Hrishikesh, Manideepa Chakraborty, Siddhartha Bhattacharyya, and Susanta Chakraborty. "Detection of Gradual Transition in Videos." In Intelligent Analysis of Multimedia Information. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-0498-6.ch011.

Abstract:
During video editing, the shots composing the video are coalesced together by different types of transition effects. These editing effects are classified into abrupt and gradual changes, based on the inherent nature of these transitions. In abrupt transitions, there is an instantaneous change in the visual content of two consecutive frames. Gradual transitions are characterized by a slow and continuous change in the visual contents occurring between two shots. In this chapter, the challenges faced in this field along with an overview of the different approaches are presented. Also, a novel method for detection of dissolve transitions using a two-phased approach is enumerated. The first phase deals with detection of candidate dissolves by identifying parabolic patterns in the mean fuzzy entropy of the frames. In the second phase, an ensemble of four parameters is used to design a filter which eliminates candidates based on thresholds set for each of the four stages of filtration. The experimental results show a marked improvement over other existing methods.
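A brief note on why dissolve detectors look for parabolic patterns: modelling a dissolve of duration T as a linear cross-fade between a frame g of the outgoing shot and a frame h of the incoming shot, second-order frame statistics become quadratic in time; the frame variance (for approximately uncorrelated shots) is the classic example, and the chapter looks for the analogous parabolic pattern in the mean fuzzy entropy of the frames:

```latex
f_{t} = (1-\alpha_{t})\,g + \alpha_{t}\,h,\quad \alpha_{t}=t/T
\;\;\Longrightarrow\;\;
\sigma^{2}(f_{t}) \approx (1-\alpha_{t})^{2}\,\sigma^{2}(g) + \alpha_{t}^{2}\,\sigma^{2}(h)
```

This traces a convex parabola in t with its minimum inside the transition.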
8

Gu, Irene Yu-Hua, and Vasile Gui. "Joint Space-Time-Range Mean Shift-Based Image and Video Segmentation." In Advances in Image and Video Segmentation. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-753-9.ch006.

Abstract:
This chapter addresses image and video segmentation by using mean shift-based filtering and segmentation. Mean shift is an effective and elegant method to directly seek the local modes (or, local maxima) of the probability density function without the requirement of actually estimating it. Mean shift is proportional to the normalized density gradient estimate, and is pointing to the local stationary point (or, local mode) of the density estimate at which it converges. A mean shift filter can be related to a domain filter, a range filter or a bilateral filter depending on the variable setting in the kernel, and also has its own strength due to its flexibility and statistical basis. In this chapter a variety of mean shift filtering approaches are described for image/video segmentation and nonlinear edge-preserving image smoothing. A joint space-time-range domain mean shift-based video segmentation approach is presented. Segmentation of moving/static objects/background is obtained through inter-frame mode-matching in consecutive frames and motion vector mode estimation. Newly appearing objects/regions in the current frame due to new foreground objects or uncovered background regions are segmented by intra-frame mode estimation. Examples of image/video segmentation are included to demonstrate the effectiveness and robustness of these methods. Pseudo codes of the algorithms are also included.
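The mean shift vector the abstract refers to, evaluated at a point x over samples x_i with kernel-profile weight g and bandwidth h; it is proportional to the normalized density gradient estimate and points toward the nearest local mode:

```latex
\mathbf{m}_{h}(\mathbf{x}) =
\frac{\sum_{i}\mathbf{x}_{i}\,g\!\left(\bigl\lVert\tfrac{\mathbf{x}-\mathbf{x}_{i}}{h}\bigr\rVert^{2}\right)}
     {\sum_{i} g\!\left(\bigl\lVert\tfrac{\mathbf{x}-\mathbf{x}_{i}}{h}\bigr\rVert^{2}\right)}
\;-\;\mathbf{x}
```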
9

Shazman, Shula. "Selecting Intermittent Fasting Type to Improve Health in Type 2 Diabetes: A Machine Learning Approach." In Type 2 Diabetes [Working Title]. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.95336.

Abstract:
Intermittent fasting (IF) is the cycling between periods of eating and fasting. The two most popular forms of IER are: the 5: 2 diet characterized by two consecutive or non-consecutive “fast” days and the alternate-day energy restriction, commonly called alternate-day fasting (ADF). The second form is time-restricted feeding (TRF), eating within specific time frames such as the most prevalent 16: 8 diet, with 16 hours of fasting and 8 hours for eating. It is already known that IF can bring about changes in metabolic parameters related with type 2 diabetes (T2D). Furthermore, IF can be effective in improving health by reducing metabolic disorders and age-related diseases. However, it is not clear yet whether the age at which fasting begins, gender and severity of T2D influence on the effectiveness of the different types of IF in reducing metabolic disorders. In this chapter I will present the risk factors of T2D, the different types of IF interventions and the research-based knowledge regarding the effect of IF on T2D. Furthermore, I will describe several machine learning approaches to provide a recommendation system which reveals a set of rules that can assist selecting a successful IF intervention for a personal case. Finally, I will discuss the question: Can we predict the optimal IF intervention for a prediabetes patient?
10

Eishita, Farjana Z., Ashfaqur Rahman, Salahuddin A. Azad, and Akhlaqur Rahman. "Occlusion Handling in Object Detection." In Multidisciplinary Computational Intelligence Techniques. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1830-5.ch005.

Abstract:
Object tracking is a process that follows an object through consecutive frames of images to determine the object’s movement relative other objects of those frames. In other words, tracking is the problem of estimating the trajectory of an object in the image plane as it moves around a scene. This chapter presents research that deals with the problem of tracking objects when they are occluded. An object can be partially or fully occluded. Depending on the tracking domain, a tracker can deal with partial and full object occlusions using features such as colour and texture. But sometimes it fails to detect the objects after occlusion. The shape feature of an individual object can provide additional information while combined with colour and texture features. It has been observed that with the same colour and texture if two object’s shape information is taken then these two objects can be detected after the occlusion has occurred. From this observation, a new and a very simple algorithm is presented in this chapter, which is able to track objects after occlusion even if the colour and textures are the same. Some experimental results are shown along with several case studies to compare the effectiveness of the shape features against colour and texture features.

Conference papers on the topic "Consecutive frames"

1

Yin, Xin, Delong Yang, Dongyun Lin, and Xiafu Peng. "Fast Matching for Consecutive Frames of UAV Video." In 2020 Chinese Control And Decision Conference (CCDC). IEEE, 2020. http://dx.doi.org/10.1109/ccdc49329.2020.9164566.

2

Hung, Kuo-Lung, and Shih-Che Lai. "Exemplar-based video inpainting approach using temporal relationship of consecutive frames." In 2017 IEEE 8th International Conference on Awareness Science and Technology (iCAST). IEEE, 2017. http://dx.doi.org/10.1109/icawst.2017.8256482.

3

Noda, Hisayori, and Akinori Nishihara. "Relationship between consecutive frames in generalized harmonics analysis for predictive coding." In 2009 IEEE International Symposium on Circuits and Systems - ISCAS 2009. IEEE, 2009. http://dx.doi.org/10.1109/iscas.2009.5118145.

4

Jeong, Woo Jin, Jin Wook Park, Dong-Seok Lee, Wonju Choi, and Young Shik Moon. "Weighted linear motion deblurring with blur kernel estimation using consecutive frames." In 2014 International Symposium on Consumer Electronics (ICSE). IEEE, 2014. http://dx.doi.org/10.1109/isce.2014.6884387.

5

Ren, Yong, Xudong Xie, Jianming Hu, and Zhiheng Li. "A stereo visual odometry based on SURF feature and three consecutive frames." In 2015 IEEE First International Smart Cities Conference (ISC2). IEEE, 2015. http://dx.doi.org/10.1109/isc2.2015.7366224.

6

Chen, Linkai, Pinwei Zhu, and Guangping Zhu. "Moving objects detection based on background subtraction combined with consecutive frames subtraction." In 2010 International Conference on Future Information Technology and Management Engineering (FITME). IEEE, 2010. http://dx.doi.org/10.1109/fitme.2010.5656702.

7

Li, Chu-Tak, Wan-Chi Siu, and Daniel P. K. Lun. "Semi-Supervised Deep Vision-Based Localization Using Temporal Correlation Between Consecutive Frames." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803131.

8

Etoh, Takeharu G., Yuya Hatsuki, Tomoo Okinaka, et al. "An image sensor of 1,000,000 fps, 300,000 pixels, and 144 consecutive frames." In 26th International Congress on High-Speed Photography and Photonics, edited by Dennis L. Paisley, Stuart Kleinfelder, Donald R. Snyder, and Brian J. Thompson. SPIE, 2005. http://dx.doi.org/10.1117/12.566156.

9

Li, Chu-Tak, Wan-Chi Siu, and Daniel P. K. Lun. "Vision-based Place Recognition Using ConvNet Features and Temporal Correlation Between Consecutive Frames." In 2019 IEEE Intelligent Transportation Systems Conference - ITSC. IEEE, 2019. http://dx.doi.org/10.1109/itsc.2019.8917364.

10

Shi, Lei, Fangfei Shi, Teng Wang, Leping Bu, and Xinguo Hou. "A new fire detection method based on the centroid variety of consecutive frames." In 2017 2nd International Conference on Image, Vision and Computing (ICIVC). IEEE, 2017. http://dx.doi.org/10.1109/icivc.2017.7984594.
