Academic literature on the topic 'Video methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Video methods"

1

Wadia, Reena. "Live-video and video demonstration methods." British Dental Journal 228, no. 4 (2020): 253. http://dx.doi.org/10.1038/s41415-020-1309-0.

2

Gu, Chong, and Zhan Jun Si. "Applied Research of Assessment Methods on Video Quality." Applied Mechanics and Materials 262 (December 2012): 157–62. http://dx.doi.org/10.4028/www.scientific.net/amm.262.157.

Abstract:
With the rapid development of modern video technology, the range of video applications keeps growing: online video conferencing, online classrooms, online medical services, and so on. However, because video data is voluminous, video must be compressed and encoded appropriately, and the encoding process may introduce distortions in video quality. How to evaluate video quality efficiently and accurately is therefore essential in video processing, video quality monitoring, and multimedia video applications. This article introduces subjective and comprehensive methods of video quality evaluation. A video quality assessment system was built, and four ITU-recommended videos were encoded in five different formats and evaluated with the Degradation Category Rating (DCR) and Structural Similarity (SSIM) methods; weighted comprehensive evaluations were then applied. Results show that the data from all three evaluations are in good agreement; H.264 is the best encoding method, followed by Xvid and WMV8; and the higher the encoding bit rate, the better the evaluations, although the subjective and objective scores at 1400 kbps did not improve noticeably over 1000 kbps. The whole process can also be used to evaluate new encoding methods, is applicable to high-definition video, and can play a significant role in advancing video quality evaluation and video encoding.
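As a rough illustration of the objective metrics this abstract compares, here is a minimal numpy sketch of PSNR and a single-window (global) SSIM. Note that standard SSIM implementations use an 11×11 sliding Gaussian window and average the local scores; this simplified global variant is for intuition only, and the test frames are synthetic.

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    err = ref.astype(np.float64) - dist.astype(np.float64)
    mse = np.mean(err ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(ref, dist, peak=255.0):
    """SSIM computed over the whole frame at once (no sliding window)."""
    x = ref.astype(np.float64)
    y = dist.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = frame + rng.normal(0, 5, size=frame.shape)  # mild coding noise

print(round(psnr(frame, noisy), 1))       # roughly 34 dB for sigma = 5
print(round(global_ssim(frame, noisy), 4))
```

Identical frames give SSIM 1 and infinite PSNR; increasing the noise lowers both, which is the monotone behaviour a comparison study like this one relies on.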
3

Palau, Roberta De Carvalho Nobre, Bianca Santos da Cunha Silveira, Robson André Domanski, et al. "Modern Video Coding: Methods, Challenges and Systems." Journal of Integrated Circuits and Systems 16, no. 2 (2021): 1–12. http://dx.doi.org/10.29292/jics.v16i2.503.

Abstract:
With the increasing demand for digital video applications in our daily lives, video coding and decoding become critical tasks that must be supported by several types of devices and systems. This paper presents a discussion of the main challenges in designing dedicated hardware architectures based on modern hybrid video coding formats, such as High Efficiency Video Coding (HEVC), AOMedia Video 1 (AV1) and Versatile Video Coding (VVC). The paper discusses each step of the hybrid video coding process, highlighting the main challenges for each codec and discussing the main hardware solutions published in the literature. The discussions presented in the paper show that there are still many challenges to be overcome and open research opportunities, especially for the AV1 and VVC codecs. Most of these challenges are related to the high throughput required for processing high- and ultra-high-resolution videos in real time and to the energy constraints of multimedia-capable devices.
4

Zhang, Decheng, and Jinxin Chen. "Visual Thinking Methods and Training in Video Production." International Journal for Innovation Education and Research 7, no. 12 (2019): 499–507. http://dx.doi.org/10.31686/ijier.vol7.iss12.2099.

Abstract:
"A picture is worth a thousand words". Internet plus has brought people into the era of picture reading. Pictures and videos are everywhere. And dynamic video has the characteristics of sound, sound and documentary. It has become a popular media form for the public. Therefore, mobile phone video shooting and production are convenient, and the popularization of video production and dissemination has become inevitable. However, the creation of artistic and innovative video works requires producers to master certain visual thinking methods in addition to film montage theories and techniques. The article briefly outlines the forming process of the concept of visual thinking, and proposes advanced methods of visual thinking: intuitive method, selection method, discovery method, and inquiry method. In the process of video production, some methods of visual thinking are analyzed through a case, such as the visualization of textual information, the figuration of image, the logic of concreteness, and the systematization of logic. We have studied practical visual thinking training methods, from the three stages of video production: script creation, shooting practice, and video packaging,
5

Patel, Rahul S., Gajanan P. Khapre, and R. M. Mulajkr. "Video Retrieval Systems Methods, Techniques, Trends and Challenges." International Journal of Trend in Scientific Research and Development 2, no. 1 (2017): 72–81. http://dx.doi.org/10.31142/ijtsrd5862.

6

Puyda, V., and A. Stoian. "On Methods of Object Detection in Video Streams." Computer Systems and Network 2, no. 1 (2020): 80–87. http://dx.doi.org/10.23939/csn2020.01.080.

Abstract:
Detecting objects in a video stream is a typical problem in modern computer vision systems used in many areas. Object detection can be performed on static images or on the frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities that can be treated as physical objects. Besides that, the coordinates, size, and other characteristics of these non-uniformities can be computed and used to solve other computer vision problems such as object identification. In this paper, we study three algorithms, based on different approaches, that can be used to detect objects of different natures: detection of color non-uniformities, frame difference, and feature detection. As the input data, we use a video stream obtained from a video camera or from an MP4 video file. Simulations and testing of the algorithms were done on a universal computer based on open-source hardware built around the Broadcom BCM2711, a quad-core Cortex-A72 (ARM v8) 64-bit SoC running at 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10 and on a universal computer running Linux (Raspbian Buster OS) on the open-source hardware. The paper compares the methods under consideration. The results of the paper can be used in the research and development of modern computer vision systems for different purposes. Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
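Of the three approaches this abstract names, frame difference is the simplest to sketch. The following is a minimal numpy-only illustration (the paper itself uses OpenCV); the frames, threshold, and object are synthetic assumptions for the example.

```python
import numpy as np

def motion_mask(prev_frame, cur_frame, threshold=25):
    """Binary mask of pixels whose intensity changed by more than `threshold`."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Synthetic frames: a bright 8x8 square moves 10 pixels to the right.
prev_frame = np.zeros((64, 64), dtype=np.uint8)
cur_frame = np.zeros((64, 64), dtype=np.uint8)
prev_frame[20:28, 10:18] = 200
cur_frame[20:28, 20:28] = 200

mask = motion_mask(prev_frame, cur_frame)
ys, xs = np.nonzero(mask)
print(mask.sum())          # 128 changed pixels: old position plus new position
print(xs.min(), xs.max())  # 10 .. 27: spans both object positions
```

The changed-pixel mask marks both where the object was and where it is now; a real detector would follow this with denoising and connected-component analysis to extract object coordinates and size, as the abstract describes.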
7

Choe, Jaeryun, Haechul Choi, Heeji Han, and Daehyeok Gwon. "Novel video coding methods for versatile video coding." International Journal of Computational Vision and Robotics 11, no. 5 (2021): 526. http://dx.doi.org/10.1504/ijcvr.2021.10040489.

8

Han, Heeji, Daehyeok Gwon, Jaeryun Choe, and Haechul Choi. "Novel video coding methods for versatile video coding." International Journal of Computational Vision and Robotics 11, no. 5 (2021): 526. http://dx.doi.org/10.1504/ijcvr.2021.117582.

9

Anto Crescentia, A., and G. Sujatha. "An Overview of Digital Video Tampering Detection Using Passive Methods and D-Hash Algorithm." International Journal of Engineering & Technology 7, no. 4.6 (2018): 373. http://dx.doi.org/10.14419/ijet.v7i4.6.28444.

Abstract:
Video tampering can be defined as the alteration of the contents of a video to hide objects or events, or to change the meaning conveyed by the video's sequence of images. Modification of video content is growing rapidly due to the proliferation of video acquisition devices and powerful video editing software tools. Consequently, the authentication of video files is becoming vital. Video integrity verification aims to find the traces of tampering and thereby assess the authenticity and integrity of the video. These strategies may be classified into active and passive techniques. Our concern in this paper is therefore to present our views on different passive video tampering detection strategies and integrity checks. Passive video tampering detection strategies are grouped into the following three classes depending on the type of forgery: detection of double or multiple compressed videos, region tampering detection, and video inter-frame forgery detection. To detect tampering, the video is split into frames and a hash is generated for each group of frames, referred to as a Group of Pictures. This hash value is verified by the receiver to detect tampering.
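To make the hash step concrete, here is a hedged numpy sketch of a difference hash (dHash) applied to one frame. Real dHash implementations resize with proper interpolation; the crude block-average resize here assumes a fixed 64×72 frame purely to keep the example dependency-free, and the tampered region is a synthetic example.

```python
import numpy as np

def dhash(frame):
    """64-bit difference hash: shrink to 9x8, compare horizontal neighbours."""
    # Crude block-average resize; assumes the frame is 64 rows x 72 columns.
    small = frame.astype(np.float64).reshape(8, 8, 9, 8).mean(axis=(1, 3))
    bits = (small[:, 1:] > small[:, :-1]).flatten()  # 8x8 grid of booleans
    return sum(int(b) << i for i, b in enumerate(bits))

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 72)).astype(np.uint8)
tampered = frame.copy()
tampered[8:40, 18:54] = 0  # simulated paste-over forgery wipes a region

print(hamming(dhash(frame), dhash(frame)))          # 0: untampered frames match
print(hamming(dhash(frame), dhash(tampered)) > 0)   # True: tampering flagged
```

In the scheme the abstract describes, a hash like this would be computed per Group of Pictures at the sender and re-verified at the receiver; any nonzero Hamming distance signals that the content changed in transit.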
10

Majumdar, Jharna, and B. Spoorthy. "Comparisons of Video Summarization Methods." IOSR Journal of Computer Engineering 16, no. 5 (2014): 52–56. http://dx.doi.org/10.9790/0661-16525256.


Dissertations / Theses on the topic "Video methods"

1

Whiteman, Don, and Greg Glen. "Compression Methods for Instrumentation Video." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/611516.

Abstract:
International Telemetering Conference Proceedings / October 30–November 02, 1995 / Riviera Hotel, Las Vegas, Nevada. Video compression is typically required to solve the bandwidth problems related to the transmission of instrumentation video. The use of color systems typically results in bandwidth requirements beyond the capabilities of current receiving and recording equipment. The HORACE specification, IRIG-210, was introduced as an attempt to provide standardization between government test ranges. The specification provides for video compression in order to alleviate the bandwidth problems associated with instrumentation video, and is intended to assure the compatibility, data quality, and performance of instrumentation video systems. This paper provides an overview of compression methods available for instrumentation video and summarizes the benefits and problems of each method when utilized for instrumentation video. The effects of increased data-link bit error rates are also discussed for each compression method. The paper also includes a synopsis of the current HORACE specification, a proposed Vector HORACE specification for color images, and hardware being developed to meet both specifications.
2

Jung, Agata. "Comparison of Video Quality Assessment Methods." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15062.

Abstract:
Context: The newest standard in video coding, High Efficiency Video Coding (HEVC), should have an appropriate coder to fully use its potential. There are many video quality assessment methods, which are necessary to establish the quality of a video. Objectives: This thesis compares video quality assessment methods; the objective is to find out which objective method is the most similar to the subjective method. The videos used in the tests are encoded in the H.265/HEVC standard. Methods: The MSE, PSNR, and SSIM methods were tested with special software created in MATLAB; the VQM method was tested with downloaded software. Results and conclusions: For videos watched on a mobile device, PSNR is the most similar to the subjective metric; however, for videos watched on a television screen, VQM is the most similar to the subjective metric. Keywords: Video Quality Assessment, Video Quality Prediction, Video Compression, Video Quality Metrics
3

Toivonen, Tuukka. "Efficient methods for video coding and processing." Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514286957.

Abstract:
This thesis presents several novel improvements to video coding algorithms, including block-based motion estimation, quantization selection, and video filtering. Most of the presented improvements are fully compatible with the standards in general use, including MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and H.264. For quantization selection, new methods are developed based on rate-distortion theory. The first method obtains a locally optimal frame-level quantization parameter considering frame-wise dependencies. The method is applicable to generic optimization problems, including motion estimation. The second method, aimed at real-time performance, heuristically modulates the quantization parameter in sequential frames, significantly improving rate-distortion performance. It also utilizes multiple reference frames when available, as in H.264. Finally, coding efficiency is improved by introducing a new matching criterion for motion estimation which estimates the bit rate after transform coding more accurately, leading to better motion vectors. For fast motion estimation, several improvements on prior methods are proposed. First, fast matching, based on filtering and subsampling, is combined with a state-of-the-art search strategy to create a very quick and high-quality motion estimation method. The successive elimination algorithm (SEA) is also applied to the method, and its performance is improved by deriving a new, tighter lower bound and increasing it by a small constant, which eliminates a larger share of the candidate motion vectors while degrading quality only insignificantly. As an alternative, the multilevel SEA (MSEA) is applied to H.264-compatible motion estimation, efficiently utilizing the various block sizes available in the standard. Then, a new method is developed for refining the motion vector obtained from any fast and suboptimal motion estimation method.
The resulting algorithm can be easily adjusted to have the desired tradeoff between computational complexity and rate-distortion performance. For refining integer motion vectors into half-pixel resolution, a new very quick but accurate method is developed based on the mathematical properties of bilinear interpolation. Finally, novel number theoretic transforms are developed which are best suited for two-dimensional image filtering, including image restoration and enhancement, but methods are developed with a view to the use of the transforms also for very reliable motion estimation.
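The SEA idea the abstract mentions rests on a simple bound: for any candidate block, |sum(candidate) − sum(block)| ≤ SAD(candidate, block), so candidates whose sum difference already exceeds the best SAD found so far can be skipped without computing their SAD. A minimal sketch under synthetic data follows; the thesis's tightened bound and multilevel variant are not reproduced here.

```python
import numpy as np

def sea_search(ref, cur_block, top, left, radius):
    """Full search sped up by the SEA bound |sum(cand) - sum(block)| <= SAD."""
    block = cur_block.astype(np.int64)
    block_sum = block.sum()
    h, w = block.shape
    best_mv, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            cand = ref[y:y + h, x:x + w].astype(np.int64)
            if abs(cand.sum() - block_sum) >= best_sad:
                continue  # SEA: the lower bound already beats the current best
            sad = np.abs(cand - block).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))  # current frame shifted by (3, -2)

mv, sad = sea_search(ref, cur[16:32, 16:32], top=16, left=16, radius=7)
print(mv, sad)  # (-3, 2) with SAD 0: the block is found at its old position
```

In practice the candidate sums are precomputed with an integral image so the bound costs O(1) per candidate; the sketch keeps it naive for clarity.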
4

Begaint, Jean. "Towards novel inter-prediction methods for image and video compression." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S038/document.

Abstract:
Due to the wide availability of video capture devices and new social media practices, as well as the emergence of cloud services, images and videos constitute today a significant share of the total data transmitted over the internet. Video streaming applications account for more than 70% of the world's internet bandwidth, while billions of images are already stored in the cloud and millions are uploaded every day. The ever-growing streaming and storage requirements of these media call for the constant improvement of image and video coding tools. This thesis aims at exploring novel approaches for improving current inter-prediction methods. Such methods leverage redundancies between similar frames, and were originally developed in the context of video compression. In a first approach, novel global and local inter-prediction tools are combined to improve the efficiency of image-set compression schemes based on video codecs. By coupling a global geometric and photometric compensation with a locally linear prediction, significant improvements can be obtained. A second approach is then proposed which introduces a region-based inter-prediction scheme. The proposed method improves coding performance compared to existing solutions by estimating and compensating geometric and photometric distortions at a semi-local level. This approach is then adapted and validated in the context of video compression. Bit-rate improvements are obtained, especially for sequences displaying complex real-world motions such as zooms and rotations. The last part of the thesis focuses on deep learning approaches for inter-prediction. Deep neural networks have shown striking results for a large number of computer vision tasks over the last years. Deep-learning-based methods originally proposed for frame interpolation are studied here in the context of video compression. Coding performance improvements over traditional motion estimation and compensation methods highlight the strong potential of these deep architectures in the field of video compression.
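The global photometric compensation mentioned in this abstract can be illustrated with its simplest form: fitting a gain and offset between a reference frame and the current frame by least squares. This is a hedged sketch of the general idea on synthetic data, not the thesis's actual model (which also handles geometric distortion).

```python
import numpy as np

def fit_gain_offset(ref, cur):
    """Least-squares gain a and offset b such that a*ref + b approximates cur."""
    x = ref.astype(np.float64).ravel()
    y = cur.astype(np.float64).ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

rng = np.random.default_rng(3)
ref = rng.integers(0, 200, size=(32, 32)).astype(np.float64)
cur = 1.2 * ref + 10.0  # simulated global exposure/illumination change

a, b = fit_gain_offset(ref, cur)
print(round(a, 3), round(b, 3))  # recovers gain 1.2 and offset 10.0
compensated = a * ref + b        # photometrically aligned reference
```

After compensation, the residual between `compensated` and `cur` is what the encoder actually has to code, which is why removing a global photometric mismatch before prediction saves bits.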
5

Grundmann, Matthias. "Computational video: post-processing methods for stabilization, retargeting and segmentation." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47596.

Abstract:
In this thesis, we address a variety of challenges in the analysis and enhancement of Computational Video. We present novel post-processing methods to bridge the difference between professional and casually shot videos mostly seen on online sites. Our research presents solutions to three well-defined problems: (1) video stabilization and rolling shutter removal in casually shot, uncalibrated videos; (2) content-aware video retargeting; and (3) spatio-temporal video segmentation to enable efficient video annotation. We showcase several real-world applications building on these techniques. We start by proposing a novel algorithm for video stabilization that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear, and parabolic segments, mimicking the camera motions employed by professional cinematographers. To achieve this, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. We propose a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method relies neither on a priori knowledge of the readout time nor on prior camera calibration. Our novel video stabilization and calibration-free rolling shutter removal have been deployed on YouTube, where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer. 
We address the challenge of changing the aspect ratio of videos by proposing algorithms that retarget videos to fit the form factor of a given device without stretching or letter-boxing. Our approaches use all of the screen's pixels, while striving to deliver as much of the original video content as possible. First, we introduce a new algorithm that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. Second, we present a technique that builds on the above-mentioned video stabilization approach. We effectively automate classical pan-and-scan techniques by smoothly guiding a virtual crop window via saliency constraints. Finally, we introduce an efficient and scalable technique for the spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a "region graph" over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high-quality segmentations and allows subsequent applications to choose from varying levels of granularity. We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video.
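The abstract contrasts L1-optimal camera paths with conventional filtering that only suppresses high-frequency jitter. Here is a hedged numpy sketch of that conventional baseline, not of the thesis's linear-programming method: smooth a 1-D camera path with a moving average and derive the per-frame correction that a stabilizer would warp each frame by. The path and jitter are synthetic.

```python
import numpy as np

def smooth_path(raw_path, window=5):
    """Moving-average smoothing of a 1-D camera path (the low-pass filtering
    baseline that L1-optimal path planning improves upon)."""
    kernel = np.ones(window) / window
    padded = np.pad(raw_path, window // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

rng = np.random.default_rng(4)
pan = np.linspace(0, 30, 60)               # intended smooth pan (pixels)
raw = pan + rng.normal(0, 1.5, size=60)    # hand-held jitter on top
smoothed = smooth_path(raw)
correction = smoothed - raw                # per-frame warp to apply

print(np.std(np.diff(raw)) > np.std(np.diff(smoothed)))  # True: less jitter
```

The L1 approach in the thesis replaces the moving average with an optimization whose solution is piecewise constant, linear, or parabolic, which is why its output looks like deliberate cinematographic motion rather than merely damped shake.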
6

Coria Mendoza, Lino Evgueni. "Low-complexity methods for image and video watermarking." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/571.

Abstract:
For digital media, the risk of piracy is aggravated by the ease to copy and distribute the content. Watermarking has become the technology of choice for discouraging people from creating illegal copies of digital content. Watermarking is the practice of imperceptibly altering the media content by embedding a message, which can be used to identify the owner of that content. A watermark message can also be a set of instructions for the display equipment, providing information about the content’s usage restrictions. Several applications are considered and three watermarking solutions are provided. First, applications such as owner identification, proof of ownership, and digital fingerprinting are considered and a fast content-dependent image watermarking method is proposed. The scheme offers a high degree of robustness against distortions, mainly additive noise, scaling, low-pass filtering, and lossy compression. This method also requires a small amount of computations. The method generates a set of evenly distributed codewords that are constructed via an iterative algorithm. Every message bit is represented by one of these codewords and is then embedded in one of the image’s 8 × 8 pixel blocks. The information in that particular block is used in the embedding so as to ensure robustness and image fidelity. Two watermarking schemes designed to prevent theatre camcorder piracy are also presented. In these methods, the video is watermarked so that its display is not permitted if a compliant video player detects the watermark. A watermark that is robust to geometric distortions (rotation, scaling, cropping) and lossy compression is required in order to block access to media content that has been recorded with a camera inside a movie theatre. The proposed algorithms take advantage of the properties of the dual-tree complex wavelet transform (DT CWT). 
This transform offers the advantages of both the regular and the complex wavelets (perfect reconstruction, approximate shift invariance and good directional selectivity). Our methods use these characteristics to create watermarks that are robust to geometric distortions and lossy compression. The proposed schemes are simple to implement and outperform comparable methods when tested against geometric distortions.
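To illustrate the embedding-and-detection principle behind watermarking schemes like these, here is a hedged sketch of generic additive spread-spectrum watermarking with a correlation detector. The thesis's actual methods embed codewords in 8×8 blocks and use the dual-tree complex wavelet transform; this spatial-domain toy shows only the shared underlying idea, on synthetic data.

```python
import numpy as np

def embed(image, key, alpha=5.0):
    """Embed an additive spread-spectrum watermark keyed by `key`."""
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=image.shape)
    return image.astype(np.float64) + alpha * w

def detect(image, key):
    """Correlation detector: statistic is near alpha when the mark is present,
    near zero otherwise (host content is uncorrelated with the key pattern)."""
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=image.shape)
    return float(np.mean((image - image.mean()) * w))

rng = np.random.default_rng(5)
host = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
marked = embed(host, key=42)

print(detect(marked, key=42))  # close to alpha = 5.0: watermark present
print(detect(host, key=42))    # close to 0: watermark absent
```

A compliant player would threshold the detection statistic to decide whether display is permitted; the robustness work in the thesis is about keeping that statistic high after rotation, scaling, cropping, and lossy compression.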
7

Maucho, Geoffrey Sunday. "Weighted distortion methods for error resilient video coding." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110392.

Abstract:
Wireless and Internet video applications are hampered by bit errors and packet errors, respectively. In addition, packet losses in best-effort Internet applications limit video communication applications. Because video compression uses temporal prediction, compressed video is especially susceptible to transmission errors in one frame propagating into subsequent frames. It is therefore necessary to develop methods to improve the performance of compressed video in the face of channel impairments. Recent work in this area has focused on estimating the end-to-end distortion, which is shown to be useful in building an error-resilient encoder. However, these techniques require an accurate estimate of the channel conditions, which is not always available for some applications. Recent video compression standards have adopted a Rate-Distortion Optimization (RDO) framework to determine coding options that address the trade-off between rate and distortion. In this dissertation, error robustness is added to the RDO framework as a design consideration. This dissertation studies the behavior of motion-compensated prediction (MCP) in a hybrid video coder, and presents techniques for improving its performance in an error-prone environment. An analysis of the motion trajectory gives us insight into how to improve MCP without explicit knowledge of the channel conditions. Information from the motion trajectory analysis is used in a novel way to bias the distortion used in RDO, resulting in an encoded bitstream that is both error resilient and bitrate efficient. We also present two low-complexity solutions that exploit past inter-frame dependencies. In order to avoid error propagation, regions of a frame are classified according to their potential for containing propagated errors. Using this method, we can then steer the MCP engine towards areas that are considered "safe" for prediction. Considering the impact error propagation may have in an RDO framework, our work enhances the overall perceived quality of compressed video while maintaining high coding efficiency. Comparisons with other error-resilient video coding techniques show the advantages offered by the weighted distortion techniques we present in this dissertation.
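The RDO framework this abstract builds on picks, for each block, the coding mode minimizing the Lagrangian cost J = D + λR. A minimal sketch follows; the mode names and the distortion/rate numbers are hypothetical illustrations, and a weighted-distortion scheme like the dissertation's would scale D per region before this comparison.

```python
def rd_mode_decision(modes, lam):
    """Pick the coding mode minimizing the Lagrangian cost J = D + lambda * R."""
    best_mode, best_cost = None, float("inf")
    for name, (distortion, rate_bits) in modes.items():
        cost = distortion + lam * rate_bits
        if cost < best_cost:
            best_mode, best_cost = name, cost
    return best_mode, best_cost

# Hypothetical (D, R) measurements for one macroblock.
modes = {
    "SKIP":        (900.0, 1),   # very cheap to signal, but high distortion
    "INTER_16x16": (220.0, 24),
    "INTER_8x8":   (140.0, 61),
    "INTRA_4x4":   (120.0, 96),  # lowest distortion, most expensive
}

print(rd_mode_decision(modes, lam=1.0))   # low lambda favours low distortion
print(rd_mode_decision(modes, lam=20.0))  # high lambda favours cheap modes
```

Sweeping λ traces out the encoder's rate-distortion trade-off; error-resilient variants additionally inflate the distortion term for modes that would propagate channel errors.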
8

Naghdinezhad, Amir. "Error resilient methods in scalable video coding (SVC)." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121379.

Full text
Abstract:
With the rapid development of multimedia technology, video transmission over unreliable channels such as the Internet and wireless networks is widely used. Channel errors can result in a mismatch between the encoder and the decoder, and because of the predictive structures used in video coding, the errors will propagate both temporally and spatially. Consequently, the quality of the received video at the decoder may degrade significantly. In order to improve the quality of the received video, several error-resilient methods have been proposed. Furthermore, in addition to compression efficiency and error robustness, flexibility has become a new requirement in advanced multimedia applications. In applications such as video conferencing and video streaming, compressed video is transmitted over heterogeneous networks to a broad range of clients with different requirements and capabilities in terms of power, bandwidth and display resolution, simultaneously accessing the same coded video. The scalable video coding concept was proposed to address the flexibility issue by generating a single bit stream that meets the requirements of all these users. This dissertation is concerned with novel contributions in the area of error resilience for the scalable extension of H.264/AVC. The first part of the dissertation focuses on modifying the conventional prediction structure in order to reduce the propagation of errors to succeeding frames. We propose two new prediction structures that can be used in the temporal and spatial scalability of SVC. The proposed techniques improve on previous methods by efficiently exploiting the Intra macroblocks (MBs) in the reference frames and the exponential decay of error propagation caused by the introduced leaky prediction. In order to satisfy both coding efficiency and error resilience over error-prone channels, we combine an error-resilient mode decision technique with the proposed prediction structures. The end-to-end distortion of the proposed prediction structure is estimated and used instead of the source coding distortion in the rate-distortion optimization. Furthermore, accurately analysing the utility of each video packet in unequal error protection techniques is a critical and usually very complex process. We present an accurate, low-complexity utility estimation technique that estimates the utility of each network abstraction layer (NAL) unit by considering the error propagation to future frames. A low-delay version of this technique, suitable for delay-constrained applications, is also presented.
APA, Harvard, Vancouver, ISO, and other styles
9

Isgro, Francesco. "Geometric methods for video sequence analysis and applications." Thesis, Heriot-Watt University, 2001. http://hdl.handle.net/10399/495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kies, Jonathan K. "Empirical Methods for Evaluating Video-Mediated Collaborative Work." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30537.

Full text
Abstract:
Advancements in computer technology are making video conferencing a viable communication medium for desktop computers. These same advancements are changing the structure and means by which information workers conduct business. From a human factors perspective, however, the study of new communication technologies and their relationships with end users presents a challenging research domain. This study employed two diverse research approaches to the problem of reduced video frame rate in desktop video conferencing. In the first study, a psychophysical method was used to evaluate video image quality as a function of frame rate for a series of different scenes. Scenes varied in terms of level of detail, velocity of panning, and content. Results indicate that for most scenes, differences in frame rate become less detectable above approximately 10 frames per second (fps), suggesting a curvilinear relationship between image quality and frame rate. For a traditional conferencing scene, however, a linear increase in frame rate produced a linear improvement in perceived image quality. High detail scenes were perceived to be of lower quality than the low detail scenes, while panning velocity had no effect. In the second study, a collection of research methods known as ethnography was used to examine long-term use of desktop video by collaborators in a real work situation. Participants from a graduate course met each week for seven weeks and worked on a class project under one of four communication conditions: face-to-face, 1 fps, 10 fps, and 25 fps. Dependent measures included interviews, questionnaires, interaction analysis measures, and ethnomethodology. Recommendations are made regarding the utility and expense of each method with respect to uncovering human factors issues in video-mediated collaboration. It is believed that this research has filled a significant gap in the human factors literature of advanced telecommunications and research methodology.<br>Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Video methods"

1

Comaniciu, Dorin, Rudolf Mester, Kenichi Kanatani, and David Suter, eds. Statistical Methods in Video Processing. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/b104157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Video interaction analysis: Methods and methodology. Peter Lang, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Akramullah, Shahriar. Digital Video Concepts, Methods, and Metrics. Apress, 2014. http://dx.doi.org/10.1007/978-1-4302-6713-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kyte, Thomas, and Darl Kuhn. Digital Video Concepts, Methods, and Metrics. Apress, 2014. http://dx.doi.org/10.1007/978-1-4842-0760-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Larsen-Freeman, Diane. Language teaching methods: Teacher's handbook for the video series. Office of English Language Programs, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Statistical Methods in Video Processing Workshop (2002 Copenhagen, Denmark). Proceedings of the Statistical Methods in Video Processing Workshop. Monash University - Dept. Electrical and Computer Systems Engineering, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Aesthetic plastic surgery video atlas. Elsevier Saunders, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

1947-, Trollip Stanley R., and Alessi Stephen M. 1951-, eds. Multimedia for learning: Methods and development. 3rd ed. Allyn and Bacon, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Stern, John M., and Joseph I. Sirven. Atlas of video-EEG monitoring. McGraw-Hill, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Frantzides, Constantine T., and Mark A. Carlson. Video atlas of advanced minimally invasive surgery. Saunders/Elsevier, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Video methods"

1

Lucas, Laurent, Céline Loscos, and Yannick Remion. "Coding Methods for Depth Videos." In 3D Video. John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118761915.ch12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Töppe, Eno, Martin R. Oswald, Daniel Cremers, and Carsten Rother. "Silhouette-Based Variational Methods for Single View Reconstruction." In Video Processing and Computational Video. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24870-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dawson, Catherine. "Video analysis." In A–Z of Digital Research Methods. Routledge, 2019. http://dx.doi.org/10.4324/9781351044677-56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Salomon, David. "Video Compression." In A Guide to Data Compression Methods. Springer New York, 2002. http://dx.doi.org/10.1007/978-0-387-21708-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lenette, Caroline. "Participatory Video." In Arts-Based Methods in Refugee Research. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8008-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Akramullah, Shahriar. "Video Coding Standards." In Digital Video Concepts, Methods, and Metrics. Apress, 2014. http://dx.doi.org/10.1007/978-1-4302-6713-3_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Akramullah, Shahriar. "Video Quality Metrics." In Digital Video Concepts, Methods, and Metrics. Apress, 2014. http://dx.doi.org/10.1007/978-1-4302-6713-3_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Akramullah, Shahriar. "Video Coding Performance." In Digital Video Concepts, Methods, and Metrics. Apress, 2014. http://dx.doi.org/10.1007/978-1-4302-6713-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dandashi, Amal, and Jihad Mohamad Alja’am. "Video Classification Methods: Multimodal Techniques." In Recent Trends in Computer Applications. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-89914-5_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ratcliff, Donald. "Video methods in qualitative research." In Qualitative research in psychology: Expanding perspectives in methodology and design. American Psychological Association, 2003. http://dx.doi.org/10.1037/10595-007.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Video methods"

1

Yu, Feiwu, Xinxiao Wu, Yuchao Sun, and Lixin Duan. "Exploiting Images for Video Recognition with Hierarchical Generative Adversarial Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/154.

Full text
Abstract:
Existing deep learning methods for video recognition usually require a large number of labeled videos for training. But for a new task, videos are often unlabeled, and it is time-consuming and labor-intensive to annotate them. Instead of human annotation, we try to make use of existing fully labeled images to help recognize those videos. However, due to domain shift and heterogeneous feature representations, the performance of classifiers trained on images may be dramatically degraded for video recognition tasks. In this paper, we propose a novel method, called Hierarchical Generative Adversarial Networks (HiGAN), to enhance recognition in videos (i.e., the target domain) by transferring knowledge from images (i.e., the source domain). The HiGAN model consists of a low-level conditional GAN and a high-level conditional GAN. By taking advantage of this two-level adversarial learning, our method is capable of learning a domain-invariant feature representation of source images and target videos. Comprehensive experiments on two challenging video recognition datasets (i.e., UCF101 and HMDB51) demonstrate the effectiveness of the proposed method compared with existing state-of-the-art domain adaptation methods.
APA, Harvard, Vancouver, ISO, and other styles
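The HiGAN abstract above hinges on learning a domain-invariant feature representation shared by source images and target videos. Purely as a hedged illustration of that goal, the sketch below uses CORAL (correlation alignment), a much simpler, closed-form technique than the paper's two-level adversarial learning: it aligns the second-order statistics of toy "image" features to toy "video" features. All names and data here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """Align the covariance of source features to the target domain
    (CORAL): whiten the source, then re-color it with the target
    covariance. A closed-form stand-in for adversarial alignment."""
    def sqrt_psd(M):
        # symmetric square root of a positive semi-definite matrix
        w, V = np.linalg.eigh(M)
        return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    d = source.shape[1]
    Xs = source - source.mean(axis=0)
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)
    Ct = np.cov(target - target.mean(axis=0), rowvar=False) + eps * np.eye(d)
    whitened = Xs @ np.linalg.inv(sqrt_psd(Cs))
    return whitened @ sqrt_psd(Ct) + target.mean(axis=0)

rng = np.random.default_rng(0)
src = rng.standard_normal((500, 4)) * np.array([1.0, 2.0, 3.0, 4.0])  # toy "image" features
tgt = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 4))     # toy "video" features
aligned = coral_align(src, tgt)
# after alignment the source covariance matches the target covariance
print(np.allclose(np.cov(aligned, rowvar=False), np.cov(tgt, rowvar=False), atol=1e-3))
```

A classifier trained on `aligned` features would then face a smaller distribution gap at video-recognition time, which is the same motivation the adversarial approach pursues with far more modeling power.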
2

Cika, Petr, Dominik Kovac, and Jan Bilek. "Objective video quality assessment methods: Video encoders comparison." In 2015 7th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT). IEEE, 2015. http://dx.doi.org/10.1109/icumt.2015.7382453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zheng, Jiping, and Ganfeng Lu. "k-SDPP: Fixed-Size Video Summarization via Sequential Determinantal Point Processes." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/108.

Full text
Abstract:
With the explosive growth of video data, video summarization, which converts long videos to key-frame sequences, has become an important task in information retrieval and machine learning. Determinantal point processes (DPPs), which are elegant probabilistic models, have been successfully applied to video summarization. However, existing DPP-based video summarization methods suffer from poor efficiency when outputting a summary of a specified size, or neglect the inherent sequential nature of videos. In this paper, we propose a new model in the DPP lineage named k-SDPP, in the vein of sequential determinantal point processes but with a fixed, user-specified size k. Our k-SDPP partitions the sampled frames of a video into segments, each containing a constant number of video frames. Moreover, an efficient branch-and-bound (BB) method that accounts for the sequential nature of the frames is provided to optimally select the k frames constituting the summary from the divided segments. Experimental results show that our proposed BB method outperforms not only k-DPP and sequential DPP (seqDPP) but also partition- and Markovian-assumption-based methods.
APA, Harvard, Vancouver, ISO, and other styles
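The exact branch-and-bound selection described in the abstract above is beyond a short sketch, but the core idea, choosing a fixed number k of mutually diverse frames under a DPP-style kernel, can be illustrated with the standard greedy MAP approximation. This is a hedged toy (greedy, not the authors' BB method, with made-up 3-D "frame features"):

```python
import numpy as np

def greedy_kdpp(features, k):
    """Greedy MAP approximation for fixed-size DPP selection: repeatedly
    add the frame that most increases log-det of the selected submatrix
    of the similarity kernel L = F F^T, i.e. the most diverse frame."""
    L = features @ features.T
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            # small jitter keeps slogdet finite for near-singular subsets
            sub = L[np.ix_(idx, idx)] + 1e-9 * np.eye(len(idx))
            gain = np.linalg.slogdet(sub)[1]
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return sorted(selected)

# toy "frame features": frames 0 and 1 are near-duplicates,
# frame 4 is a blend of frames 0 and 2
frames = np.array([[1.0, 0.0, 0.0],
                   [0.99, 0.01, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.7, 0.7, 0.0]])
print(greedy_kdpp(frames, 3))  # → [0, 2, 3]: duplicates and blends are skipped
```

The determinant rewards subsets whose feature vectors span a large volume, which is why the near-duplicate frame 1 and the redundant blend frame 4 are passed over.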
4

Ardebilian Fard, Mohsen, Xiaowei Tu, and Liming Chen. "Improvement of shot detection methods based on dynamic threshold selection." In Voice, Video, and Data Communications, edited by C. C. Jay Kuo, Shih-Fu Chang, and Venkat N. Gudivada. SPIE, 1997. http://dx.doi.org/10.1117/12.290342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Barbieri, Mauro, Lalitha Agnihotri, and Nevenka Dimitrova. "Video summarization: methods and landscape." In ITCom 2003, edited by John R. Smith, Sethuraman Panchanathan, and Tong Zhang. SPIE, 2003. http://dx.doi.org/10.1117/12.515733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Svensson, Torbjorn K., Lars Sundeman, and Erland Sundberg. "Methods and studies on fiber reliability at Swedish Telecom." In Video Communications and Fiber Optic Networks, edited by Vincent J. Tekippe and John P. Varachi, Jr. SPIE, 1993. http://dx.doi.org/10.1117/12.163773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wu, Gengshen, Li Liu, Yuchen Guo, et al. "Unsupervised Deep Video Hashing with Balanced Rotation." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/429.

Full text
Abstract:
Recently, hashing video content for fast retrieval has received increasing attention due to the enormous growth of online video. As an extension of image hashing techniques, traditional video hashing methods mainly focus on seeking appropriate video features but pay little attention to how video-specific features can be leveraged to achieve optimal binarization. In this paper, an end-to-end hashing framework, namely Unsupervised Deep Video Hashing (UDVH), is proposed, in which feature extraction, balanced code learning and hash function learning are integrated and optimized in a self-taught manner. Distinguished from previous work, our framework enjoys two novelties: 1) an unsupervised hashing method that integrates feature clustering and feature binarization, enabling the neighborhood structure to be preserved in the binary space; 2) a smart rotation applied to the video-specific features, which are widely spread in the low-dimensional space, so that the variance of the dimensions can be balanced, thus generating more effective hash codes. Extensive experiments have been performed on two real-world datasets, and the results demonstrate its superiority compared to state-of-the-art video hashing methods. To bootstrap further developments, the source code will be made publicly available.
APA, Harvard, Vancouver, ISO, and other styles
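The "smart rotation" in the UDVH abstract above is learned jointly with the codes. As a hedged, much simpler illustration of why rotating before binarization helps balance bits, the sketch below applies a random orthogonal rotation to zero-centered features and then thresholds each dimension at its median (an ITQ-flavoured toy, not the paper's learned rotation; all data are made up):

```python
import numpy as np

def balanced_binary_codes(features, seed=0):
    """Toy binarization with balancing: rotate zero-centered features
    with a random orthogonal matrix to spread variance across
    dimensions, then threshold each dimension at its median so each
    bit fires on roughly half of the samples."""
    rng = np.random.default_rng(seed)
    X = features - features.mean(axis=0)
    # QR of a Gaussian matrix yields a random orthogonal rotation
    Q, _ = np.linalg.qr(rng.standard_normal((X.shape[1], X.shape[1])))
    Z = X @ Q
    return (Z > np.median(Z, axis=0)).astype(np.uint8)

feats = np.random.default_rng(1).standard_normal((100, 8))  # stand-in video features
codes = balanced_binary_codes(feats)
print(codes.shape, codes.mean(axis=0))  # every bit is set for ~50% of samples
```

Balanced bits carry maximal entropy per bit, which is the property the paper's variance-balancing rotation is after; the learned version additionally preserves neighborhood structure, which this toy does not attempt.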
8

Deng, Kangle, Tianyi Fei, Xin Huang, and Yuxin Peng. "IRC-GAN: Introspective Recurrent Convolutional GAN for Text-to-video Generation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/307.

Full text
Abstract:
Automatically generating videos according to given text is a highly challenging task, where visual quality and semantic consistency with the captions are two critical issues. In existing methods, when generating a specific frame, the information in the frames generated before it is not fully exploited, and an effective way to measure the semantic accordance between videos and captions remains to be established. To address these issues, we present a novel Introspective Recurrent Convolutional GAN (IRC-GAN) approach. First, we propose a recurrent transconvolutional generator, where LSTM cells are integrated with 2D transconvolutional layers. As 2D transconvolutional layers put more emphasis on the details of each frame than 3D ones, our generator takes both the definition of each video frame and the temporal coherence across the whole video into consideration, and thus can generate videos with better visual quality. Second, we propose mutual information introspection to semantically align the generated videos to the text. Unlike other methods that simply judge whether the video and the text match or not, we take mutual information to concretely measure the semantic consistency. In this way, our model is able to introspect the semantic distance between the generated video and the corresponding text and tries to minimize it to boost the semantic consistency. We conduct experiments on 3 datasets and compare with state-of-the-art methods. Experimental results demonstrate the effectiveness of our IRC-GAN in generating plausible videos from given text.
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Ruixin, Zhenyu Weng, Yuesheng Zhu, and Bairong Li. "Temporal Adaptive Alignment Network for Deep Video Inpainting." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/129.

Full text
Abstract:
Video inpainting aims to synthesize visually pleasant and temporally consistent content in the missing regions of a video. Due to the variety of motion across frames, it is highly challenging to exploit temporal information effectively when recovering videos. Existing deep learning based methods usually estimate optical flow to align frames and thereby exploit useful information between frames. However, these methods tend to generate artifacts once the estimated optical flow is inaccurate. To alleviate the above problem, we propose a novel end-to-end Temporal Adaptive Alignment Network (TAAN) for video inpainting. The TAAN aligns reference frames with the target frame via implicit motion estimation at the feature level and then reconstructs the target frame by taking the aggregated aligned reference-frame features as input. In the proposed network, a Temporal Adaptive Alignment (TAA) module based on deformable convolutions is designed to perform temporal alignment in a local, dense and adaptive manner. Both quantitative and qualitative evaluation results show that our method significantly outperforms existing deep learning based methods.
APA, Harvard, Vancouver, ISO, and other styles
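TAAN's feature-level alignment is learned with deformable convolutions; purely to illustrate the align-then-aggregate idea it rests on, here is a hedged 1-D toy that estimates an integer (circular) shift per reference signal by cross-correlation, undoes it, and averages the aligned references. Everything here is a made-up simplification, not the paper's network:

```python
import numpy as np

def align_and_aggregate(target, refs):
    """Align each reference signal to the target by the circular shift
    that maximizes cross-correlation, then average the aligned refs."""
    aligned = []
    for r in refs:
        scores = [np.dot(target, np.roll(r, s)) for s in range(len(r))]
        aligned.append(np.roll(r, int(np.argmax(scores))))
    return np.mean(aligned, axis=0)

target = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0])
refs = [np.roll(target, 2), np.roll(target, -3)]  # shifted "reference frames"
out = align_and_aggregate(target, refs)
print(np.allclose(out, target))  # both references snap back onto the target
```

In the real network the "shift" is a dense, learned, per-location offset field and the averaging is replaced by learned aggregation, but the failure mode the paper targets is the same: if alignment is wrong, the aggregate inherits artifacts.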
10

Moreira, Daniel, Siome Goldenstein, and Anderson Rocha. "Sensitive-Video Analysis." In XXX Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2017. http://dx.doi.org/10.5753/ctd.2017.3466.

Full text
Abstract:
Sensitive videos that may be inadequate for some audiences (e.g., pornography and violence, with respect to underage viewers) are constantly being shared over the Internet. Employing humans to filter them is daunting. The huge amount of data and the tediousness of the task call for computer-aided sensitive-video analysis, which we tackle in two ways. In the first (sensitive-video classification), we explore efficient methods to decide whether or not a video contains sensitive material. In the second (sensitive-content localization), we explore ways to find the moments at which a video starts and ceases to display sensitive content. Hypotheses are stated and validated, leading to contributions (papers, a dataset, and patents) in the fields of Digital Forensics and Computer Vision.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Video methods"

1

Baluk, Nadia, Natalia Basij, Larysa Buk, and Olha Vovchanska. VR/AR-TECHNOLOGIES – NEW CONTENT OF THE NEW MEDIA. Ivan Franko National University of Lviv, 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.

Full text
Abstract:
The article analyzes the peculiarities of media content shaping and transformation in the convergent dimension of cross-media, taking into account the possibilities of augmented reality. Guided by the principles of objectivity, complexity and reliability in scientific research, a number of general scientific and special methods are used: analysis, synthesis and generalization, as well as monitoring, observation, and problem-thematic, typological and discursive methods. According to the form of information presentation, such types of media content as visual, audio, verbal and combined are defined and characterized. The most important in journalism is verbal content, as it carries the main information load. The dynamic development of converged media leads to the dominance of image and video content, and the likelihood that text becomes secondary content increases. Given the market situation, an effective information product is combined content that pairs text with images, spreadsheets with video, animation with infographics, and so on. An increasing number of new media are using applications and website platforms to interact with recipients. On this basis, the peculiarities of the new content of new media involving augmented reality are determined. Examples of successful interactive communication between recipients, leading news agencies and commercial structures are provided. The conditions for effective use of VR/AR technologies in the media content of new media, and for involving viewers in changing stories with augmented reality, are determined. The so-called immersive effect achieved with VR/AR technologies involves the complete immersion of the interested audience in the essence of the event being relayed. This interaction can be achieved through different types of VR video interactivity.
One of the most important results of using VR content is the spatio-temporal and emotional immersion of viewers in the plot. The recipient turns from an external observer into an internal one; but his constant participation requires that the user preferences are taken into account. Factors such as satisfaction, positive reinforcement, empathy, and value influence the choice of VR / AR content by viewers.
APA, Harvard, Vancouver, ISO, and other styles
2

Baral, Aniruddha, Jeffery Roesler, and Junryu Fu. Early-age Properties of High-volume Fly Ash Concrete Mixes for Pavement: Volume 2. Illinois Center for Transportation, 2021. http://dx.doi.org/10.36501/0197-9191/21-031.

Full text
Abstract:
High-volume fly ash concrete (HVFAC) is more cost-efficient, sustainable, and durable than conventional concrete. This report presents a state-of-the-art review of HVFAC properties and different fly ash characterization methods. The main challenges identified for HVFAC for pavements are its early-age properties such as air entrainment, setting time, and strength gain, which are the focus of this research. Five fly ash sources in Illinois have been repeatedly characterized through x-ray diffraction, x-ray fluorescence, and laser diffraction over time. The fly ash oxide compositions from the same source but different quarterly samples were overall consistent with most variations observed in SO3 and MgO content. The minerals present in various fly ash sources were similar over multiple quarters, with the mineral content varying. The types of carbon present in the fly ash were also characterized through x-ray photoelectron spectroscopy, loss on ignition, and foam index tests. A new computer vision–based digital foam index test was developed to automatically capture and quantify a video of the foam layer for better operator and laboratory reliability. The heat of hydration and setting times of HVFAC mixes for different cement and fly ash sources as well as chemical admixtures were investigated using an isothermal calorimeter. Class C HVFAC mixes had a higher sulfate imbalance than Class F mixes. The addition of chemical admixtures (both PCE- and lignosulfonate-based) delayed the hydration, with the delay higher for the PCE-based admixture. Both micro- and nano-limestone replacement were successful in accelerating the setting times, with nano-limestone being more effective than micro-limestone. A field test section constructed of HVFAC showed the feasibility and importance of using the noncontact ultrasound device to measure the final setting time as well as determine the saw-cutting time. 
Moreover, field implementation of the maturity method based on wireless thermal sensors demonstrated its viability for early opening strength, and only a few sensors with pavement depth are needed to estimate the field maturity.
APA, Harvard, Vancouver, ISO, and other styles
3

Bates, C. Richards, Melanie Chocholek, Clive Fox, John Howe, and Neil Jones. Scottish Inshore Fisheries Integrated Data System (SIFIDS): Work package (3) final report development of a novel, automated mechanism for the collection of scallop stock data. Edited by Mark James and Hannah Ladd-Jones. Marine Alliance for Science and Technology for Scotland (MASTS), 2019. http://dx.doi.org/10.15664/10023.23449.

Full text
Abstract:
[Extract from Executive Summary] This project, aimed at the development of a novel, automated mechanism for the collection of scallop stock data, was a sub-part of the Scottish Inshore Fisheries Integrated Data Systems (SIFIDS) project. The project reviewed the state-of-the-art remote sensing (geophysical and camera-based) technologies available from industry and compared these to inexpensive, off-the-shelf equipment. Sea trials were conducted on scallop dredge sites and also hand-dived scallop sites. Data were analysed manually, and tests were conducted with automated processing methods. It was concluded that geophysical acoustic technologies cannot presently detect individual scallops, but remote sensing technologies can be used for broad-scale habitat mapping of scallop harvest areas. Further, the techniques allow for monitoring these areas in terms of scallop dredging impact. Camera (video and still) imagery is effective for scallop counts and provides data that compare favourably with diver-based ground-truth information for recording scallop density. Deployment of cameras is possible through inexpensive drop-down camera frames, which it is recommended be deployed on a wide-area basis for further trials. In addition, implementation of a 'citizen science' approach to wide-area recording is suggested to increase stock assessment across the widest possible variety of seafloor types around Scotland. Armed with such data, a full statistical analysis could be completed and the data used with automated processing routines for future long-term monitoring of stock.
APA, Harvard, Vancouver, ISO, and other styles
4

Methodology of sports working capacity level increase in basketball players on the basis of stimulation and rehabilitation means. Viktor V. Andreev, Igor E. Konovalov, Dmitriy S. Andreev, Aleksandr I. Morozov, 2021. http://dx.doi.org/10.14526/2070-4798-2021-16-1-5-11.

Full text
Abstract:
The increasing level of modern sport development raises the demands placed on different aspects of the training process and on the subsequent organization and realization of rehabilitation. That is why the creation of an adequate and effective integral system remains a problem. This direction is directly connected with the activity of scientists, coach-practitioners and sports clubs, who have to work within one mechanism of interaction. Materials. We studied the influence on the organisms of basketball players from higher educational establishments of working-capacity stimulation and rehabilitation means based on the root of the wild-growing plant "snowdon rose" (Rhodiola rosea), classical massage with special oils, and contrast shower application. Research methods. The following methods were used in the experiment: analysis of scientific-methodical sources concerning working capacity and the improvement of athletes' functional rehabilitation; functional tests; analysis of the recorded video material and its indices; mathematical statistics. The research was carried out at N.F. Katanov State University, Khakassia, and the Khakassia Technical Institute (branch) of Siberian Federal University in Abakan. Results. During the research we established qualitative and quantitative indices of the athletes' coordination endurance with the help of video together with the rehabilitation means mentioned above; the results were processed, revealing positive changes in the studied values of the basketball players' motor sphere and respiratory system. Conclusion. Analysis of the results led to the following conclusion: of the components presented, the biological factor in the form of the wild-growing plant root "snowdon rose" (Rhodiola rosea) has the main influence on the working capacity and functional rehabilitation of the basketball players' organisms.
APA, Harvard, Vancouver, ISO, and other styles