To see the other types of publications on this topic, follow the link: Video compression. Real-time data processing.

Journal articles on the topic 'Video compression. Real-time data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Video compression. Real-time data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Xu, Guo Sheng. "Design of Image Processing System Based on FPGA." Advanced Materials Research 403-408 (November 2011): 1281–84. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.1281.

Abstract:
To speed up image acquisition and make full use of the effective information, a design method for a CCD partial-image scanning system is presented. The system implements high-speed data collection, high-speed video data compression, real-time video data network transmission, and real-time storage of the compressed picture data. The processed data are transferred to a PC through USB 2.0 in real time to reconstruct microscopic images of defects. Experiments show that the system offers stable performance, real-time data transmission and high-quality images, and that the algorithm and scheme proposed in this paper are feasible.
2

Khosravi, Mohammad R., Sadegh Samadi, and Reza Mohseni. "Spatial Interpolators for Intra-Frame Resampling of SAR Videos: A Comparative Study Using Real-Time HD, Medical and Radar Data." Current Signal Transduction Therapy 15, no. 2 (December 1, 2020): 144–96. http://dx.doi.org/10.2174/2213275912666190618165125.

Abstract:
Background: Real-time video coding is a very interesting area of research with extensive applications in remote sensing and medical imaging. Many research works and multimedia standards have been developed for this purpose. Some processing ideas in the area focus on second-step (additional) compression of videos coded by existing standards such as MPEG 4.14. Materials and Methods: In this article, an evaluation of some techniques with different complexity orders for the video compression problem is performed. All compared techniques are based on interpolation algorithms in the spatial domain. In detail, the data are processed with four interpolators of different computational complexity: the fixed weights quartered interpolation (FWQI) technique, Nearest Neighbor (NN), Bi-Linear (BL) and Cubic Convolution (CC) interpolators. They are used for the compression of HD color videos in real-time applications, real frames of video synthetic aperture radar (video SAR or ViSAR) and a high-resolution medical sample. Results: Comparative results are described for three different metrics, including two reference-based Quality Assessment (QA) measures and an edge preservation factor, to achieve a general perception of the various dimensions of the problem. Conclusion: Comparisons show that there is a decidable trade-off among video codecs in terms of more similarity to a reference, preserving high-frequency edge information and having low computational complexity.
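As a point of reference for the interpolators compared above, the following minimal numpy sketch implements nearest-neighbor and bilinear intra-frame resampling; the frame size, scale factors and function names are illustrative assumptions, not code from the study.

```python
import numpy as np

def resample_bilinear(frame, out_h, out_w):
    """Bilinear intra-frame resampling of a single-channel frame."""
    in_h, in_w = frame.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :]          # horizontal interpolation weights
    top = frame[np.ix_(y0, x0)] * (1 - wx) + frame[np.ix_(y0, x1)] * wx
    bot = frame[np.ix_(y1, x0)] * (1 - wx) + frame[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def resample_nearest(frame, out_h, out_w):
    """Nearest-neighbor resampling: lowest complexity, blockiest result."""
    in_h, in_w = frame.shape
    ys = np.round(np.linspace(0, in_h - 1, out_h)).astype(int)
    xs = np.round(np.linspace(0, in_w - 1, out_w)).astype(int)
    return frame[np.ix_(ys, xs)]

# Toy usage: downsample then restore a random "frame" and report PSNR.
frame = np.random.rand(64, 64)
small = resample_bilinear(frame, 32, 32)
restored = resample_bilinear(small, 64, 64)
mse = np.mean((frame - restored) ** 2)
print("PSNR (dB):", 10 * np.log10(1.0 / mse))
```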
3

Xu, Guo Sheng. "Detection Design and Implementation of Image Capture and Processing System Based on FPGA." Advanced Materials Research 433-440 (January 2012): 4565–70. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.4565.

Abstract:
In this article, an image capture and processing system based on FPGA is proposed. A low-cost, high-performance FPGA is selected as the main core, and the design of the whole system, including software and hardware, is implemented. The system implements high-speed data collection, high-speed video data compression, real-time video data network transmission, and real-time storage of the compressed picture data. The processed data are transferred to a PC through USB 2.0 in real time to reconstruct microscopic images of defects. Experimental results prove that the algorithm and scheme proposed in this paper are correct and feasible.
4

Ahmed, Zayneb, Abir Jaafar Hussain, Wasiq Khan, Thar Baker, Haya Al-Askar, Janet Lunn, Raghad Al-Shabandar, Dhiya Al-Jumeily, and Panos Liatsis. "Lossy and Lossless Video Frame Compression: A Novel Approach for High-Temporal Video Data Analytics." Remote Sensing 12, no. 6 (March 20, 2020): 1004. http://dx.doi.org/10.3390/rs12061004.

Abstract:
The smart city concept has attracted high research attention in recent years within diverse application domains, such as crime suspect identification, border security, transportation, aerospace, and so on. Specific focus has been on increased automation using data driven approaches, while leveraging remote sensing and real-time streaming of heterogenous data from various resources, including unmanned aerial vehicles, surveillance cameras, and low-earth-orbit satellites. One of the core challenges in exploitation of such high temporal data streams, specifically videos, is the trade-off between the quality of video streaming and limited transmission bandwidth. An optimal compromise is needed between video quality and subsequently, recognition and understanding and efficient processing of large amounts of video data. This research proposes a novel unified approach to lossy and lossless video frame compression, which is beneficial for the autonomous processing and enhanced representation of high-resolution video data in various domains. The proposed fast block matching motion estimation technique, namely mean predictive block matching, is based on the principle that general motion in any video frame is usually coherent. This coherent nature of the video frames dictates a high probability of a macroblock having the same direction of motion as the macroblocks surrounding it. The technique employs the partial distortion elimination algorithm to condense the exploration time, where partial summation of the matching distortion between the current macroblock and its contender ones will be used, when the matching distortion surpasses the current lowest error. Experimental results demonstrate the superiority of the proposed approach over state-of-the-art techniques, including the four step search, three step search, diamond search, and new three step search.
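To make the partial distortion elimination step concrete, here is a small, hypothetical Python sketch of SAD-based block matching that abandons a candidate block as soon as the partial distortion exceeds the best error found so far; it illustrates only the early-termination principle, not the authors' full mean predictive block matching algorithm.

```python
import numpy as np

def sad_with_early_exit(block, cand, best_so_far):
    """Row-wise partial SAD; stop as soon as it exceeds the current best."""
    partial = 0.0
    for row in range(block.shape[0]):
        partial += np.abs(block[row] - cand[row]).sum()
        if partial >= best_so_far:          # partial distortion elimination
            return None
    return partial

def block_match(cur, ref, bx, by, bsize=16, search=8):
    """Full search around (bx, by) with partial distortion elimination."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best_err, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            err = sad_with_early_exit(block, ref[y:y + bsize, x:x + bsize], best_err)
            if err is not None and err < best_err:
                best_err, best_mv = err, (dx, dy)
    return best_mv, best_err

cur = np.random.randint(0, 256, (64, 64)).astype(np.float32)
ref = np.roll(cur, (2, -3), axis=(0, 1))   # shift so the true motion vector is (dx, dy) = (-3, 2)
print(block_match(cur, ref, 24, 24))
```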
5

Sharma, Chirag, Amandeep Bagga, Bhupesh Kumar Singh, and Mohammad Shabaz. "A Novel Optimized Graph-Based Transform Watermarking Technique to Address Security Issues in Real-Time Application." Mathematical Problems in Engineering 2021 (April 8, 2021): 1–27. http://dx.doi.org/10.1155/2021/5580098.

Abstract:
The multimedia technologies are gaining a lot of popularity these days. Many unauthorized persons are gaining the access of multimedia such as videos, audios, and images. The transmission of multimedia across the Internet by unauthorized person has led to the problem of illegal distribution. The problem arises when copyrighted data is getting accessed without the knowledge of copyright owner. The videos are the most attacked data during COVID-19 pandemic. In this paper, the frame selection video watermarking technique is proposed to tackle the issue. The proposed work enlightens frame selection followed by watermarking embedding and testing of the technique against various attacks. The embedding of the watermark is done on selected frames of the video. The additional security feature Hyperchaotic Encryption is applied on watermark before embedding. Watermark embedding is done using graph-based transform and singular-valued decomposition and the performance of the technique is further optimized using hybrid combination of grey wolf optimization and genetic algorithm. Many researchers face the challenge of quality loss after embedding of watermark. Proposed technique will aim to overcome those challenges. A total of 6 videos (Akiyo, Coastguard, Foreman, News, Bowing, and Pure Storage) are used for carrying out research work. The performance evaluation of the proposed technique has been carried out after processing it against practical video processing attacks Gaussian noise, sharpening, rotation, blurring, and JPEG compression.
6

Ebrahim, Mansoor, Syed Hasan Adil, Kamran Raza, and Syed Saad Azhar Ali. "Block Compressive Sensing Single-View Video Reconstruction Using Joint Decoding Framework for Low Power Real Time Applications." Applied Sciences 10, no. 22 (November 10, 2020): 7963. http://dx.doi.org/10.3390/app10227963.

Abstract:
Several real-time visual monitoring applications such as surveillance, mental state monitoring, driver drowsiness and patient care, require equipping high-quality cameras with wireless sensors to form visual sensors and this creates an enormous amount of data that has to be managed and transmitted at the sensor node. Moreover, as the sensor nodes are battery-operated, power utilization is one of the key concerns that must be considered. One solution to this issue is to reduce the amount of data that has to be transmitted using specific compression techniques. The conventional compression standards are based on complex encoders (which require high processing power) and simple decoders and thus are not pertinent for battery-operated applications, i.e., VSN (primitive hardware). In contrast, compressive sensing (CS) a distributive source coding mechanism, has transformed the standard coding mechanism and is based on the idea of a simple encoder (i.e., transmitting fewer data-low processing requirements) and a complex decoder and is considered a better option for VSN applications. In this paper, a CS-based joint decoding (JD) framework using frame prediction (using keyframes) and residual reconstruction for single-view video is proposed. The idea is to exploit the redundancies present in the key and non-key frames to produce side information to refine the non-key frames’ quality. The proposed method consists of two main steps: frame prediction and residual reconstruction. The final reconstruction is performed by adding a residual frame with the predicted frame. The proposed scheme was validated on various arrangements. The association among correlated frames and compression performance is also analyzed. Various arrangements of the frames have been studied to select the one that produces better results. The comprehensive experimental analysis proves that the proposed JD method performs notably better than the independent block compressive sensing scheme at different subrates for various video sequences with low, moderate and high motion contents. Also, the proposed scheme outperforms the conventional CS video reconstruction schemes at lower subrates. Further, the proposed scheme was quantized and compared with conventional video codecs (DISCOVER, H-263, H264) at various bitrates to evaluate its efficiency (rate-distortion, encoding, decoding).
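A minimal sketch of the per-block compressive sensing measurement stage described above, assuming a random Gaussian sensing matrix and a plain pseudo-inverse recovery as a stand-in for the paper's joint decoding; the block size, subrate and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
B = 16                      # block side
subrate = 0.5               # fraction of measurements kept
n = B * B
m = int(subrate * n)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix shared by encoder and decoder

def encode_block(block):
    """Light encoder: one matrix-vector product per block."""
    return Phi @ block.reshape(-1)

def decode_block(y):
    """Placeholder decoder: minimum-energy solution via the pseudo-inverse.
    A real CS decoder would exploit sparsity and inter-frame side information."""
    return (np.linalg.pinv(Phi) @ y).reshape(B, B)

frame_block = rng.random((B, B))
y = encode_block(frame_block)               # m << n values are transmitted
rec = decode_block(y)
print("measurements per block:", y.size, "of", n)
print("reconstruction MSE:", np.mean((frame_block - rec) ** 2))
```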
7

Chang, Ray-I., Yu-Hsien Chu, Chia-Hui Wang, and Niang-Ying Huang. "Video-Like Lossless Compression of Data Cube for Big Data Query in Wireless Sensor Networks." WSEAS TRANSACTIONS ON COMMUNICATIONS 20 (August 10, 2021): 139–45. http://dx.doi.org/10.37394/23204.2021.20.19.

Abstract:
Wireless Sensor Networks (WSNs) contain many sensor nodes which are placed in a chosen spatial area to temporally monitor environmental changes. As the sensor data are big, they should be well organized and stored in cloud servers to support efficient data query. In this paper, we first organize the streamed sensor data as "data cubes" to enhance data compression by video-like lossless compression (VLLC). With the layered tree structure of WSNs, compression can be done on the aggregation nodes of edge computing. Then, an algorithm is designed to organize and store these VLLC data cubes in cloud servers to support cost-effective big data query with parallel processing. Our experiments use real-world sensor data. Results show that our method can save 94% of construction time and 79% of storage space while achieving the same retrieval time in data query when compared with the well-known database MySQL.
8

Tanseer, Iffrah, Nadia Kanwal, Mamoona Naveed Asghar, Ayesha Iqbal, Faryal Tanseer, and Martin Fleury. "Real-Time, Content-Based Communication Load Reduction in the Internet of Multimedia Things." Applied Sciences 10, no. 3 (February 8, 2020): 1152. http://dx.doi.org/10.3390/app10031152.

Abstract:
There is an increasing number of devices available for the Internet of Multimedia Things (IoMT). The demands these ever-more complex devices make are also increasing in terms of energy efficiency, reliability, quality-of-service guarantees, higher data transfer rates, and general security. The IoMT itself faces challenges when processing and storing massive amounts of data, transmitting it over low bandwidths, bringing constrained resources to bear and keeping power consumption under check. This paper’s research focuses on an efficient video compression technique to reduce that communication load, potentially generated by diverse camera sensors, and also improve bit-rates, while ensuring accuracy of representation and completeness of video data. The proposed method applies a video content-based solution, which, depending on the motion present between consecutive frames, decides on whether to send only motion information or no frame information at all. The method is efficient in terms of limiting the data transmitted, potentially conserving device energy, and reducing latencies by means of negotiable processing overheads. Data are also encrypted in the interests of confidentiality. Video quality measurements, along with a good number of Quality-of-Service measurements demonstrated the value of the load reduction, as is also apparent from a comparison with other related methods.
9

Pandit, Shraddha, Piyush Kumar Shukla, Akhilesh Tiwari, Prashant Kumar Shukla, Manish Maheshwari, and Rachana Dubey. "Review of video compression techniques based on fractal transform function and swarm intelligence." International Journal of Modern Physics B 34, no. 08 (March 30, 2020): 2050061. http://dx.doi.org/10.1142/s0217979220500617.

Abstract:
Data processing across multiple domains is an important concept on any platform; it deals with multimedia and textual information. Whereas textual data processing focuses on structured or unstructured data that can be computed quickly with no compression, multimedia data processing requires algorithms where compression is needed. This involves processing videos and their frames and compressing them into short forms so that both storage and access can be performed quickly. There are different ways of performing compression, such as fractal compression, wavelet transform, compressive sensing, contractive transformation and others. One way of performing such compression is to work with the high-frequency components of multimedia data. One of the most recent topics is the fractal transform, which exploits block symmetry and achieves a high compression ratio. Yet there are limitations in speed and cost when performing proper encoding and decoding with fractal compression. Swarm optimization and related algorithms make it usable alongside the fractal compression function. In this paper, we review multiple algorithms in the field of fractal-based video compression and swarm intelligence for optimization problems.
10

Lin, Zhuosheng, Simin Yu, Chengqing Li, Jinhu Lü, and Qianxue Wang. "Design and Smartphone-Based Implementation of a Chaotic Video Communication Scheme via WAN Remote Transmission." International Journal of Bifurcation and Chaos 26, no. 09 (August 2016): 1650158. http://dx.doi.org/10.1142/s0218127416501583.

Abstract:
This paper proposes a chaotic secure video remote communication scheme that can perform on real WAN networks, and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source codes. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received with the given IP address and port in an Android smartphone. It should be noted that, this is the first time that chaotic video encryption schemes are implemented on such a hardware platform. The experimental results demonstrate that the technical challenges on hardware implementation of secure video communication are successfully solved, reaching a balance amongst sufficient security level, real-time processing of massive video data, and utilization of available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphone and other mobile facilities in the future.
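The abstract does not disclose the exact cipher, so the following sketch only illustrates the general idea of a chaos-driven keystream applied to the compressed stream: a logistic-map keystream XORed with the MJPG payload bytes. The map, its parameters and the function names are assumptions, not the authors' scheme.

```python
def logistic_keystream(length, x0=0.3141592, r=3.99):
    """Generate a byte keystream from the logistic map x <- r*x*(1-x)."""
    x, out = x0, bytearray()
    for _ in range(length):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def chaotic_xor(payload: bytes, x0: float) -> bytes:
    """XOR the compressed frame payload with the chaotic keystream.
    The same call decrypts, since XOR is its own inverse."""
    ks = logistic_keystream(len(payload), x0)
    return bytes(a ^ b for a, b in zip(payload, ks))

jpeg_payload = b"\xff\xd8\xff\xe0" + bytes(range(64))   # stand-in for an MJPG frame
cipher = chaotic_xor(jpeg_payload, x0=0.3141592)
plain = chaotic_xor(cipher, x0=0.3141592)
assert plain == jpeg_payload
print("encrypted", len(cipher), "bytes")
```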
11

Chai, Zhilei, Shen Li, Qunfang He, Mingsong Chen, and Wenjie Chen. "FPGA-Based ROI Encoding for HEVC Video Bitrate Reduction." Journal of Circuits, Systems and Computers 29, no. 11 (February 5, 2020): 2050182. http://dx.doi.org/10.1142/s0218126620501820.

Abstract:
The explosive growth of video applications has produced great challenges for data storage and transmission. In this paper, we propose a new ROI (region of interest) encoding solution to accelerate processing and reduce the bitrate based on the latest video compression standard H.265/HEVC (High-Efficiency Video Coding). The traditional ROI extraction mapping algorithm uses pixel-based Gaussian background modeling (GBM), which requires a large number of complex floating-point calculations. Instead, we propose a block-based GBM to set up the background, which is in accord with the block division of HEVC. Then, we use the SAD (sum of absolute differences) rule to separate foreground blocks from background blocks, and these blocks are mapped onto the coding tree units (CTUs) of HEVC. Moreover, the quantization parameter (QP) is adjusted automatically according to the distortion rate. The experimental results show that the processing speed on FPGA reaches a real-time level of 22 FPS (frames per second) for full high-definition videos, and the bitrate is reduced by 10% on average with stable video quality.
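A simplified numpy sketch of the block-level idea: maintain a per-block background estimate, label a block as foreground when its SAD against the background exceeds a threshold, and lower the QP only for foreground CTUs. The running-mean background and the thresholds below are stand-ins for the paper's block-based Gaussian model.

```python
import numpy as np

BLOCK = 64            # CTU-sized blocks
ALPHA = 0.05          # background learning rate
SAD_THR = 8.0 * BLOCK * BLOCK

def classify_blocks(frame, background):
    """Return a boolean ROI map (True = foreground CTU) and update the model."""
    h, w = frame.shape
    roi = np.zeros((h // BLOCK, w // BLOCK), dtype=bool)
    for by in range(0, h, BLOCK):
        for bx in range(0, w, BLOCK):
            cur = frame[by:by + BLOCK, bx:bx + BLOCK].astype(np.float32)
            bg = background[by:by + BLOCK, bx:bx + BLOCK]
            sad = np.abs(cur - bg).sum()
            roi[by // BLOCK, bx // BLOCK] = sad > SAD_THR
            if sad <= SAD_THR:   # update only blocks that still look like background
                background[by:by + BLOCK, bx:bx + BLOCK] = (1 - ALPHA) * bg + ALPHA * cur
    return roi

def qp_map(roi, qp_base=32, qp_roi=26):
    """Assign a lower QP (better quality) to foreground CTUs."""
    return np.where(roi, qp_roi, qp_base)

background = np.zeros((256, 256), dtype=np.float32)
frame = np.zeros((256, 256), dtype=np.uint8)
frame[96:160, 96:160] = 200                     # a synthetic moving object
roi = classify_blocks(frame, background)
print(qp_map(roi))
```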
12

Solokhina, T. V., Ya Ya Petrichkovich, A. A. Belyaev, I. A. Belyaev, and A. V. Egorov. "Dataflow synchronization mechanism for H.264 hardware video codec." Issues of radio electronics, no. 8 (August 7, 2019): 13–20. http://dx.doi.org/10.21778/2218-5453-2019-8-13-20.

Abstract:
Modern video compression standards require significant computational costs for their implementation. With a high rate of receipt of video data and significant computational costs, it may be preferable to use hardware rather than software compression tools. The article proposes a method for synchronizing data streams during hardware implementation of compression / decompression in accordance with the H.264 standard. The developed video codec is an IP core as part of an 1892ВМ14Я microcircuit operating under the control of an external processor core. The architecture and the main characteristics of the video codec are presented. To synchronize the operation of the computing blocks and the controller of direct access to the video memory, the video codec contains an event register, which is a set of data readiness flags for the blocks involved in processing. The experimental results of measuring performance characteristics on real video scenes with various formats of the transmitted image, which confirmed the high throughput of the developed video codec, are presented.
13

Kwan, Chiman, David Gribben, Bryan Chou, Bence Budavari, Jude Larkin, Akshay Rangamani, Trac Tran, Jack Zhang, and Ralph Etienne-Cummings. "Real-Time and Deep Learning Based Vehicle Detection and Classification Using Pixel-Wise Code Exposure Measurements." Electronics 9, no. 6 (June 18, 2020): 1014. http://dx.doi.org/10.3390/electronics9061014.

Abstract:
One key advantage of compressive sensing is that only a small amount of the raw video data is transmitted or saved. This is extremely important in bandwidth constrained applications. Moreover, in some scenarios, the local processing device may not have enough processing power to handle object detection and classification and hence the heavy duty processing tasks need to be done at a remote location. Conventional compressive sensing schemes require the compressed data to be reconstructed first before any subsequent processing can begin. This is not only time consuming but also may lose important information in the process. In this paper, we present a real-time framework for processing compressive measurements directly without any image reconstruction. A special type of compressive measurement known as pixel-wise coded exposure (PCE) is adopted in our framework. PCE condenses multiple frames into a single frame. Individual pixels can also have different exposure times to allow high dynamic ranges. A deep learning tool known as You Only Look Once (YOLO) has been used in our real-time system for object detection and classification. Extensive experiments showed that the proposed real-time framework is feasible and can achieve decent detection and classification performance.
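To illustrate how pixel-wise coded exposure condenses several frames into one measurement, here is a toy numpy sketch in which each pixel integrates the scene only during its own randomly chosen exposure window; the mask generation and sizes are assumptions, and the YOLO detection stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 8, 32, 32                      # frames condensed into one measurement
video = rng.random((T, H, W))            # stand-in for the raw video cube

# Per-pixel exposure: a random start time and a random duration (1..4 frames).
start = rng.integers(0, T - 1, size=(H, W))
length = rng.integers(1, 5, size=(H, W))
t = np.arange(T)[:, None, None]
mask = (t >= start) & (t < start + length)   # (T, H, W) boolean exposure code

# One coded-exposure frame: each pixel sums only its exposed time samples.
pce_frame = (video * mask).sum(axis=0)

# T frames are represented by a single measurement frame (plus the known
# exposure code), i.e. roughly a factor-of-T reduction in raw data.
print("raw samples:", video.size, "-> transmitted samples:", pce_frame.size)
```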
14

Pawłowski, Paweł, Karol Piniarski, and Adam Dąbrowski. "Highly Efficient Lossless Coding for High Dynamic Range Red, Clear, Clear, Clear Image Sensors." Sensors 21, no. 2 (January 19, 2021): 653. http://dx.doi.org/10.3390/s21020653.

Abstract:
In this paper we present a highly efficient coding procedure, specially designed and dedicated to operate with high dynamic range (HDR) RCCC (red, clear, clear, clear) image sensors used mainly in advanced driver-assistance systems (ADAS) and autonomous driving systems (ADS). The coding procedure can be used for a lossless reduction of data volume during the development and testing of video processing algorithms, e.g., under software-in-the-loop (SiL) or hardware-in-the-loop (HiL) conditions. Therefore, it was designed to achieve both state-of-the-art compression ratios and real-time compression feasibility. In tests we utilized the FFV1 lossless codec and demonstrated throughput of up to 81 fps (frames per second) for compression and 87 fps for decompression on a single Intel i7 CPU.
15

Kawai, Takaaki. "Video Slice: Image Compression and Transmission for Agricultural Systems." Sensors 21, no. 11 (May 26, 2021): 3698. http://dx.doi.org/10.3390/s21113698.

Abstract:
When agricultural automation systems are required to send cultivation field images to the cloud for field monitoring, pay-as-you-go mobile communication leads to high operation costs. To minimize cost, one can exploit a characteristic of cultivation field images wherein the landscape does not change considerably besides the appearance of the plants. Therefore, this paper presents a method that transmits only the difference data between the past and current images to minimize the amount of transmitted data. This method is easy to implement because the difference data are generated using an existing video encoder. Further, the difference data are generated based on an image at a specific time instead of the images at adjacent times, and thus the subsequent images can be reproduced even if the previous difference data are lost because of unstable mobile communication. A prototype of the proposed method was implemented with a MPEG-4 Visual video encoder. The amount of transmitted and received data on the medium access control layer was decreased to approximately 1/4 of that when using the secure copy protocol. The transmission time for one image was 5.6 s; thus, the proposed method achieved a reasonable processing time and a reduction of transmitted data.
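A minimal sketch of the key idea: each new image is differenced against one fixed reference captured at a specific time, so every difference can be decoded on its own even if earlier packets were lost. The raw int16 residual compressed with zlib is only a stand-in for the MPEG-4 Visual encoder used in the prototype.

```python
import numpy as np
import zlib

def make_diff(reference: np.ndarray, current: np.ndarray) -> bytes:
    """Encode only the change against the fixed reference image."""
    residual = current.astype(np.int16) - reference.astype(np.int16)
    return zlib.compress(residual.tobytes())

def apply_diff(reference: np.ndarray, payload: bytes) -> np.ndarray:
    """Rebuild the current image from the reference plus the residual."""
    residual = np.frombuffer(zlib.decompress(payload), dtype=np.int16)
    residual = residual.reshape(reference.shape)
    return np.clip(reference.astype(np.int16) + residual, 0, 255).astype(np.uint8)

reference = np.full((120, 160), 90, dtype=np.uint8)        # static field background
current = reference.copy()
current[40:60, 50:70] += 40                                 # plants changing in one area
payload = make_diff(reference, current)
print("payload bytes:", len(payload), "vs raw:", current.size)
restored = apply_diff(reference, payload)
assert np.array_equal(restored, current)
```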
16

Prajapati, Y. N., and M. K. Srivastava. "Novel algorithms for protective digital privacy." IAES International Journal of Robotics and Automation (IJRA) 8, no. 3 (September 1, 2019): 184. http://dx.doi.org/10.11591/ijra.v8i3.pp184-188.

Abstract:
Video is the recording, reproducing, or broadcasting of moving visual images: a visual multimedia source that combines a sequence of images to form a moving picture. The video transmits a signal to a screen and processes the order in which the screen captures should be shown. Videos usually have audio components that correspond with the pictures being shown on the screen. Video compression technologies are about reducing and removing redundant video data so that a digital video file can be effectively sent over a network and stored on computer disks. With efficient compression techniques, a significant reduction in file size can be achieved with little or no adverse effect on the visual quality. The video quality, however, can be affected if the file size is further lowered by raising the compression level for a given compression technique. Security is about the protection of assets. Security, in information technology (IT), is the defense of digital information and IT assets against internal and external, malicious and accidental threats. This defense includes detection, prevention and response to threats through the use of security policies, software tools and IT services. Security refers to protective digital privacy measures that are applied to prevent unauthorized access to computers, databases and websites. Cryptography is closely related to the disciplines of cryptology and cryptanalysis. Cryptography includes techniques such as microdots, merging words with images, and other ways to hide information in storage or transit. However, in today's computer-centric world, cryptography is most often associated with scrambling plaintext (ordinary text, sometimes referred to as clear text) into ciphertext (a process called encryption), then back again (known as decryption). Cryptography is an evergreen and continually developing field. Cryptography protects users by providing functionality for the encryption of data and authentication of other users. Compression is the process of reducing the number of bits or bytes needed to represent a given set of data; it allows more data to be saved. The project aims to implement a security algorithm for data security. The data will first be encrypted using security techniques, and then compression techniques will be applied. If encryption and compression are done at the same time, the process takes less time and achieves higher speed.
17

Hein, Daniel, Thomas Kraft, Jörg Brauchle, and Ralf Berger. "Integrated UAV-Based Real-Time Mapping for Security Applications." ISPRS International Journal of Geo-Information 8, no. 5 (May 8, 2019): 219. http://dx.doi.org/10.3390/ijgi8050219.

Abstract:
Security applications such as management of natural disasters and man-made incidents crucially depend on the rapid availability of a situation picture of the affected area. UAV-based remote sensing systems may constitute an essential tool for capturing aerial imagery in such scenarios. While several commercial UAV solutions already provide acquisition of high quality photos or real-time video transmission via radio link, generating instant high-resolution aerial maps is still an open challenge. For this purpose, the article presents a real-time processing tool chain, enabling generation of interactive aerial maps during flight. Key element of this tool chain is the combination of the Terrain Aware Image Clipping (TAC) algorithm and 12-bit JPEG compression. As a result, the data size of a common scenery can be reduced to approximately 0.4% of the original size, while preserving full geometric and radiometric resolution. Particular attention was paid to minimize computational costs to reduce hardware requirements. The full workflow was demonstrated using the DLR Modular Airborne Camera System (MACS) operated on a conventional aircraft. In combination with a commercial radio link, the latency between image acquisition and visualization in the ground station was about 2 s. In addition, the integration of a miniaturized version of the camera system into a small fixed-wing UAV is presented. It is shown that the described workflow is efficient enough to instantly generate image maps even on small UAV hardware. Using a radio link, these maps can be broadcasted to on-site operation centers and are immediately available to the end-users.
18

SANKARAGOMATHI, B., L. GANESAN, and S. ARUMUGAM. "ENCODING VIDEO SEQUENCES IN FRACTAL-BASED COMPRESSION." Fractals 15, no. 04 (December 2007): 365–78. http://dx.doi.org/10.1142/s0218348x0700371x.

Abstract:
With the rapid increase in the use of computers and the Internet, the demand for higher transmission and better storage is increasing as well. This paper describes the different techniques for data (image-video) compression in general and, in particular, the new compression technique called fractal image compression. Fractal image compression is based on self-similarity, where one part of an image is similar to another part of the same image. Low bit rate color image sequence coding is very important for video transmission and storage applications. The most significant aspect of this work is the development of fractal-based color image compression, since little work has been done previously in this area. The results obtained show that fractal-based compression works as well for color images as for gray-scale images. Nevertheless, the encoding of color images takes more time than that of gray-scale images. Color images are usually compressed in a luminance-chrominance coordinate space, with the compression performed independently for each coordinate by applying monochrome image processing techniques. For image sequence compression, the design of an accurate and efficient algorithm for computing motion to exploit the temporal redundancy has been one of the most active research areas in computer vision and image compression. Pixel-based motion estimation algorithms address pixel correspondence directly by identifying a set of local features and computing a match between these features across frames. These direct techniques share the common pitfall of high computational complexity resulting from the dense vector fields produced. For block matching motion estimation algorithms, the quad-tree data structure is frequently used in image coding to recursively decompose an image plane into four non-overlapping rectangular blocks.
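The abstract mentions the quad-tree decomposition commonly used by block-matching and fractal coders; the sketch below recursively splits an image plane into four rectangles until each block is smooth enough or a minimum size is reached. The variance criterion and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def quadtree(img, y=0, x=0, h=None, w=None, var_thr=100.0, min_size=8, leaves=None):
    """Recursively split the plane into four blocks until each leaf is smooth."""
    if leaves is None:
        leaves = []
    if h is None:
        h, w = img.shape
    block = img[y:y + h, x:x + w]
    if block.var() <= var_thr or h <= min_size or w <= min_size:
        leaves.append((y, x, h, w))            # leaf: coded as a single block
        return leaves
    hh, hw = h // 2, w // 2
    quadtree(img, y,      x,      hh,     hw,     var_thr, min_size, leaves)
    quadtree(img, y,      x + hw, hh,     w - hw, var_thr, min_size, leaves)
    quadtree(img, y + hh, x,      h - hh, hw,     var_thr, min_size, leaves)
    quadtree(img, y + hh, x + hw, h - hh, w - hw, var_thr, min_size, leaves)
    return leaves

img = np.zeros((64, 64), dtype=np.float32)
img[16:48, 16:48] = 255.0                      # one detailed region in a flat frame
print("quadtree leaves:", len(quadtree(img)))
```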
19

Yang, Hang Jun, Jian Wang, and Xiao Yong Ji. "Accelerating Color Space Conversion Using CUDA-Enabled Graphic Processing Units." Advanced Materials Research 716 (July 2013): 505–9. http://dx.doi.org/10.4028/www.scientific.net/amr.716.505.

Abstract:
Color space conversion (CSC) is an important kernel in the area of image and video processing applications, including video compression. CSC is a compute-intensive, time-consuming operation that consumes up to 40% of the processing time of a highly optimised decoder. Several hardware and software implementations for CSC have been proposed. Hardware implementations can achieve higher performance compared with software-only solutions. However, the flexibility of software solutions is desirable for various functional requirements and faster time to market. Multicore processors, especially programmable GPUs, provide an opportunity to increase the performance of CSC by exploiting data parallelism. In this paper, we present a novel approach for efficient implementation of color space conversion. The proposed approach has been implemented and verified using the compute unified device architecture (CUDA) on graphics hardware. Our experimental results show that a speedup of up to 17× can be obtained.
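As an example of the data parallelism discussed above, here is a vectorized numpy version of the common full-range BT.601 RGB to YCbCr conversion; on a GPU the same per-pixel arithmetic would be issued as one thread per pixel. The array names are illustrative and the code is not the paper's CUDA kernel.

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Full-range BT.601 RGB -> YCbCr, applied to every pixel in parallel."""
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.clip(np.stack([y, cb, cr], axis=-1), 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
ycbcr = rgb_to_ycbcr(frame)          # the whole frame converted in one vectorized call
print(ycbcr.shape, ycbcr.dtype)
```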
20

Walid, Walid, Giorgio Armanno, Sandro Di Paola, Massimo Ruo Roch, Guido Masera, and Maurizio Martina. "VLSI Architectures of a Wiener Filter for Video Coding." Electronics 10, no. 16 (August 14, 2021): 1961. http://dx.doi.org/10.3390/electronics10161961.

Abstract:
In the modern age, the use of video has become fundamental in communication and this has led to its use through an increasing number of devices. The higher resolution required for images and videos leads to more memory space and more efficient data compression, obtained by improving video coding techniques. For this reason, the Alliance for Open Media (AOMedia) developed a new open-source and royalty-free codec, named AOMedia Video 1 (AV1). This work focuses on the Wiener filter, a specific loop restoration tool of the AV1 video coding format, which features a significant amount of computational complexity. A new hardware architecture implementing the separable symmetric normalized Wiener filter is presented. Furthermore, the paper details possible optimizations starting from the basic architecture. These optimizations allow the Wiener filter to achieve a 100× reduction in processing time, compared to existing works, and 5× improvement in megasamples per second.
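To show what a separable symmetric normalized filter looks like in practice, the sketch below applies a symmetric 7-tap kernel along rows and then along columns, with the taps normalized to sum to one as a floating-point stand-in for the fixed-point normalization used in hardware. The tap values are arbitrary placeholders, not trained Wiener coefficients.

```python
import numpy as np

def separable_symmetric_filter(img, half_taps):
    """Apply a symmetric 7-tap kernel horizontally, then vertically.

    half_taps = (c0, c1, c2, c3) defines the kernel [c3, c2, c1, c0, c1, c2, c3],
    normalized here so the taps sum to 1.
    """
    c0, c1, c2, c3 = half_taps
    kernel = np.array([c3, c2, c1, c0, c1, c2, c3], dtype=np.float64)
    kernel /= kernel.sum()
    pad = len(kernel) // 2

    def filt_rows(a):
        padded = np.pad(a, ((0, 0), (pad, pad)), mode="edge")
        out = np.zeros_like(a, dtype=np.float64)
        for k, c in enumerate(kernel):
            out += c * padded[:, k:k + a.shape[1]]
        return out

    horiz = filt_rows(img.astype(np.float64))
    return filt_rows(horiz.T).T          # reuse the 1-D pass for the vertical direction

noisy = np.random.rand(64, 64)
restored = separable_symmetric_filter(noisy, (0.4, 0.2, 0.08, 0.02))
print(restored.shape)
```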
21

BHATTACHARYA, ARUP K., and SYED S. HAIDER. "A VLSI IMPLEMENTATION OF THE INVERSE DISCRETE COSINE TRANSFORM." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 02 (April 1995): 303–14. http://dx.doi.org/10.1142/s0218001495000146.

Abstract:
The Inverse Discrete Cosine Transform (IDCT) is an important function in HDTV, digital TV and multimedia systems complying with the JPEG or MPEG standards for video compression. However, the IDCT is computationally intensive and therefore very expensive to implement in VLSI using direct matrix multiplication. By properly arranging the input coefficient sequence and the output data, the rows and columns of the transform matrix can be reordered to build modular regularity suitable for custom implementation in VLSI. This regularity can be exploited so that a single permutation can be used to derive each output column from the previous one using a circular shift of an accumulator's input data multiplied in a special sequence. This technique, using only one 1-dimensional IDCT processor and seven constant multipliers, and its implementation are presented. Operation at 58 MHz under worst-case conditions is easily achieved, making the design applicable to a wide range of video and real-time image processing applications. Fabricated in 0.5 micron triple-metal CMOS technology, the IDCT contains 70,000 transistors occupying 7 mm² of silicon. The design has been used on an AT&T MPEG video decoder chip.
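For reference, the row-column (separable) form of the 8-point inverse DCT that such hardware implements can be written in a few lines of numpy; this floating-point version is only a functional model, not the fixed-point, constant-multiplier architecture described in the paper.

```python
import numpy as np

N = 8
# Type-III DCT (the inverse of the DCT-II used in JPEG/MPEG) as an 8x8 basis matrix.
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[:, None] + 1) * k[None, :] / (2 * N))
C[:, 0] = np.sqrt(1.0 / N)        # scale the DC basis vector

def idct2(coeffs: np.ndarray) -> np.ndarray:
    """Separable 2-D IDCT: one 1-D pass over columns, one over rows."""
    return C @ coeffs @ C.T

# A block holding only a DC coefficient reconstructs to a flat 8x8 block.
block = np.zeros((N, N))
block[0, 0] = 80.0
print(np.round(idct2(block), 2))
```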
22

Yevseiev, Serhii, Ahmed Abdalla, Serhii Osiievskyi, Volodymyr Larin, and Mykhailo Lytvynenko. "DEVELOPMENT OF AN ADVANCED METHOD OF VIDEO INFORMATION RESOURCE COMPRESSION IN NAVIGATION AND TRAFFIC CONTROL SYSTEMS." EUREKA: Physics and Engineering 5 (September 30, 2020): 31–42. http://dx.doi.org/10.21303/2461-4262.2020.001405.

Abstract:
The Earth's aerospace monitoring (ASM) systems use state-of-the-art integrated information technologies that include radio-based detection and surveillance systems using telecommunications. One of the main tasks of ASM systems is to increase the efficiency of decision-making necessary for the timely prevention, detection, localization and elimination of crisis situations and their probable consequences. Modern conditions impose stricter requirements for efficiency, reliability and quality of the provided video data. To ensure compliance with the requirements, it is necessary to provide the appropriate capabilities of the onboard equipment. On the basis of the existing information and communication systems it is necessary to carry out: continuous or periodic assessment of a condition of objects of supervision and control; continuous (operational) collection, reception, transmission, processing, analysis and display of information resources. It is proposed to use UAVs (unmanned aerial vehicles) as a means to perform ASM tasks. The time of organizing communication sessions and delivery of information should vary from a few seconds to 2.5 hours. Untimely processing and delivery of a specific information resource in the management process leads to its obsolescence or loss of relevance, which contributes to erroneous decisions. One way to reduce time is to encode the data. To do this, it is proposed to use video compression algorithms. However, based on the analysis of the possibility of modern methods of video information compression, taking into account the specifics of the onboard equipment of the UAV, the coding problem is not completely solved. The research results show the expediency of using an improved method of video information compression to reduce the computing resources of the software and hardware complex of the onboard UAV equipment and to ensure the requirements for efficiency and reliability of data in modern threats to ASM systems as a whole.
23

Бараннік, Володимир Вікторович, Сергій Олександрович Сідченко, Наталія Вячеславівна Бараннік, and Андрій Михайлович Хіменко. "Метод маскувального ущільнення службових даних в системах компресії відеозображень." RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 2 (June 2, 2021): 51–63. http://dx.doi.org/10.32620/reks.2021.2.05.

Abstract:
The demand for video privacy is constantly increasing. At the same time, it is necessary to solve an urgent scientific and applied problem, which consists in increasing the confidentiality of video information under conditions of a given time delay for its processing and delivery, while ensuring its reliability. Crypto-compression transformations can be used to solve it. A service component is used as the conversion key; it is formed directly in the conversion process and contains information about the identified structural characteristics of the video data. Therefore, such information requires confidentiality. Existing cryptography methods are designed to process a universal data stream and do not consider the structure and features of the service components. This leads to the formation of redundant data, the use of an excessive number of operations, and an increase in processing time when service information is protected with universal cryptography methods. Therefore, the article aims to develop a method for masking compressed service data to ensure their confidentiality, considering the peculiarities of their formation by crypto-compression methods. In modes with controlled loss of information quality, the elements of the service component are formed in a reduced dynamic range; their length is 7 bits. To ensure the confidentiality of such elements, it is necessary to develop a method for masking service data in video compression systems. On the one hand, the service data blocks should not contain redundant information; on the other hand, they should be formed from bit positions of different elements of the service components. For that, it is proposed to organize the assembly of the elements of the service components by combining 7-bit elements into 8-bit completed sequences. Encryption blocks are formed from the 8-bit sequences. The assembly of service components ensures the mixing of service data and reduces their quantity. To break up the structure of the representation of the service components, it is additionally proposed to permute the 8-bit completed sequences. This provides a significant dispersion of the bit positions of the 7-bit service elements and destroys the correlation between them. The correlation coefficients between the original and reconstructed images using encrypted service components are close to 0. The number of changing pixels is above the theoretical threshold value of 99.5341%.
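A small Python sketch of the packing step described above: 7-bit service elements are concatenated, regrouped into 8-bit bytes, and the completed byte sequence is permuted with a keyed shuffle before encryption. The permutation key and helper names are hypothetical.

```python
import random

def pack_7bit_elements(elements):
    """Concatenate 7-bit service elements and regroup them into 8-bit bytes."""
    bits = "".join(format(e & 0x7F, "07b") for e in elements)
    bits += "0" * (-len(bits) % 8)                 # pad to a whole number of bytes
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def keyed_permutation(data: bytes, key: int) -> bytes:
    """Shuffle the completed 8-bit sequences with a key-seeded permutation."""
    order = list(range(len(data)))
    random.Random(key).shuffle(order)
    return bytes(data[i] for i in order)

service_elements = [17, 93, 5, 120, 64, 33, 99, 2]   # eight 7-bit elements -> 7 bytes
packed = pack_7bit_elements(service_elements)
mixed = keyed_permutation(packed, key=0xC0FFEE)
print(len(service_elements), "x 7 bits ->", len(packed), "bytes, permuted:", mixed.hex())
```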
24

Chu, Chen, Jian Wang, Sen Ke Hou, Qi Lv, Guo Qiang Ma, and Xiao Yong Ji. "A Comparative Study of Color Space Conversion on Homogeneous and Heterogeneous Multicore." Applied Mechanics and Materials 519-520 (February 2014): 724–28. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.724.

Abstract:
Color space conversion (CSC) is an important kernel in the area of image and video processing applications, including video compression. Being essentially matrix arithmetic, this operation consumes up to 40% of the processing time of a highly optimized decoder. Therefore, techniques which efficiently implement this conversion are desired. Multicore processors provide an opportunity to increase the performance of CSC by exploiting data parallelism. In this paper, we present three novel approaches for efficient implementation of color space conversion suitable for homogeneous and heterogeneous multicore. We compare the performance of color space conversion on a variety of platforms, including OpenMP running on homogeneous multicore CPUs, CUDA with NVIDIA GPUs and OpenCL running on both NVIDIA and ATI GPUs. Our experimental results show that speedups of 3×, 17× and 15× can be obtained, respectively.
25

Xie, Wen, Zeyang Yao, Erchao Ji, Hailong Qiu, Zewen Chen, Huiming Guo, Jian Zhuang, Qianjun Jia, and Meiping Huang. "Artificial Intelligence–based Computed Tomography Processing Framework for Surgical Telementoring of Congenital Heart Disease." ACM Journal on Emerging Technologies in Computing Systems 17, no. 4 (October 31, 2021): 1–24. http://dx.doi.org/10.1145/3457613.

Abstract:
Congenital heart disease (CHD) is the most common birth defect, accounting for one-third of all congenital birth defects. As with complicated intracardiac structural abnormalities, CHD is usually treated with surgical repair, and computed tomography (CT) is the main examination method for diagnosis of CHD and also provides anatomical information to surgeons. Currently, there exists a serious shortage of professional surgeons in developing countries. Compared with developed countries where large hospitals and cardiovascular disease centers have professional surgical teams with rich treatment experience, surgeons in developing countries and remote areas suffer from lack of professional surgical skills resulting with low surgical quality and high mortality. Recently, surgical telementoring has been popular to tackle the above problems, in which less-skilled surgeons can get real-time guidance from skilled surgeons remotely through audio and video transmission. However, there still exists difficulties in applying telementoring to CHD surgeries including high resource consumption on medical data transmission and storage, large image noise, and inconvenient and inefficient discussion between surgeons on CT. In this article, we proposed a framework with an image compression module, an image denoising module, and an image segmentation module based on CT images in CHD. We evaluated the above three modules and compared them with existing works, respectively, and the results show that our methods achieve much better performance. Furthermore, with 3D printing, VR technology, and 5G communications, our framework was successfully used in a real case study to treat a patient who needed surgical treatment.
26

Малыгин, И. Г., and О. А. Королев. "High-speed algorithm for transmitting video information about emergency situations on transport objects." MORSKIE INTELLEKTUAL`NYE TEHNOLOGII), no. 1(51) (March 2, 2021): 64–70. http://dx.doi.org/10.37220/mit.2021.51.1.009.

Abstract:
Modern intelligent video surveillance systems have become increasingly focused on real-time transmission of high-quality video of various important events, including emergencies. High-performance video information transmission systems of the new generation need efficient structural solutions that are capable of both high transmission speed and high calculation accuracy. Such structures must process huge sequences of images, and each video stream must be characterized by high resolution with minimal noise and distortion, while consuming as little power as possible. Spectral algorithms for processing video information are the most common means of real-time transmission, in particular the discrete cosine transform. In this case, the original image is transformed from the spatial to the frequency domain in order to compress it by reducing or eliminating the redundancy of the visual data. Implicitly computing the transform of an 8-point sequence results in efficient compression, requiring no more than five multiplication operations. In this paper, we propose a low-complexity architecture and an image transformation method based on integer algebra.
27

Barannik, Vladimir, Serhii Sidchenko, Natalia Barannik, and Valeriy Barannik. "Development of the method for encoding service data in cryptocompression image representation systems." Eastern-European Journal of Enterprise Technologies 3, no. 9(111) (June 30, 2021): 103–15. http://dx.doi.org/10.15587/1729-4061.2021.235521.

Abstract:
The demand for image confidentiality is constantly growing. At the same time, ensuring the confidentiality of video information must be organized subject to ensuring its reliability with a given time delay in processing and transmission. Methods of cryptocompression representation of images can be used to solve this problem. They are designed to simultaneously provide compression and protection of video information. The service component is used as the key of the cryptocompression transformation. However, it has a significant volume. It is 25 % of the original video data volume. A method for coding systems of service components in a differentiated basis on the second cascade of cryptocompression representation of images has been developed. The method is based on the developed scheme of data linearization from three-dimensional coordinates of representation in a two-dimensional matrix into a one-dimensional coordinate for one-to-one representation of this element in a vector. Linearization is organized horizontally line by line. On the basis of the developed method, a non-deterministic number of code values of information components is formed. They have non-deterministic lengths and are formed on a non-deterministic number of elements. The uncertainty of positioning of cryptocompression codograms in the general code stream is provided, which virtually eliminates the possibility of their unauthorized decryption. The method provides a reduction in the volume of the service component of the cryptocompression codogram. The service data volume is 6.25 % of the original video data volume. The method provides an additional reduction in the volume of cryptocompression representation of images without loss of information quality relative to the original video data on average from 1.08 to 1.54 times, depending on the degree of their saturation
28

Ngo, Thien-Thu, VanDung Nguyen, Xuan-Qui Pham, Md-Alamgir Hossain, and Eui-Nam Huh. "Motion Saliency Detection for Surveillance Systems Using Streaming Dynamic Mode Decomposition." Symmetry 12, no. 9 (August 21, 2020): 1397. http://dx.doi.org/10.3390/sym12091397.

Abstract:
Intelligent surveillance systems enable secured visibility features in the smart city era. One of the major models for pre-processing in intelligent surveillance systems is known as saliency detection, which provides facilities for multiple tasks such as object detection, object segmentation, video coding, image re-targeting, image-quality assessment, and image compression. Traditional models focus on improving detection accuracy at the cost of high complexity. However, these models are computationally expensive for real-world systems. To cope with this issue, we propose a fast-motion saliency method for surveillance systems under various background conditions. Our method is derived from streaming dynamic mode decomposition (s-DMD), which is a powerful tool in data science. First, DMD computes a set of modes in a streaming manner to derive spatial–temporal features, and a raw saliency map is generated from the sparse reconstruction process. Second, the final saliency map is refined using a difference-of-Gaussians filter in the frequency domain. The effectiveness of the proposed method is validated on a standard benchmark dataset. The experimental results show that the proposed method achieves competitive accuracy with lower complexity than state-of-the-art methods, which satisfies requirements in real-time applications.
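A compact numpy sketch of the DMD-based background/saliency split that the method builds on: stack flattened frames, compute DMD modes from a truncated SVD, take the mode whose eigenvalue is closest to one as the quasi-static background, and use the residual as the raw saliency map. This batch version stands in for the streaming variant, and all sizes, the rank and the test data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 20, 24, 32
frames = np.tile(rng.random((H, W)), (T, 1, 1))        # static background
for t in range(T):
    frames[t, 5:10, t:t + 5] += 1.0                    # a small moving object

X = frames.reshape(T, -1).T                            # columns are flattened frames
X1, X2 = X[:, :-1], X[:, 1:]

U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = 5                                                  # truncation rank
U, s, Vt = U[:, :r], s[:r], Vt[:r]
A_tilde = U.T @ X2 @ Vt.T @ np.diag(1.0 / s)           # low-rank evolution operator
eigvals, eigvecs = np.linalg.eig(A_tilde)
modes = X2 @ Vt.T @ np.diag(1.0 / s) @ eigvecs         # DMD modes

bg_idx = np.argmin(np.abs(np.abs(eigvals) - 1.0))      # eigenvalue near 1 => background mode
amp = np.linalg.lstsq(modes, X[:, 0], rcond=None)[0]   # mode amplitudes for the first frame
background = np.real(modes[:, bg_idx] * amp[bg_idx])

saliency = np.abs(X[:, -1] - background).reshape(H, W) # raw saliency of the last frame
print("peak saliency at:", np.unravel_index(saliency.argmax(), saliency.shape))
```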
29

Hart, Douglas P. "High-Speed PIV Analysis Using Compressed Image Correlation." Journal of Fluids Engineering 120, no. 3 (September 1, 1998): 463–70. http://dx.doi.org/10.1115/1.2820685.

Abstract:
With the development of Holographic PIV (HPIV) and PIV Cinematography (PIVC), the need for a computationally efficient algorithm capable of processing images at video rates has emerged. This paper presents one such algorithm, sparse array image correlation. This algorithm is based on the sparse format of image data—a format well suited to the storage of highly segmented images. It utilizes an image compression scheme that retains pixel values in high intensity gradient areas eliminating low information background regions. The remaining pixels are stored in sparse format along with their relative locations encoded into 32 bit words. The result is a highly reduced image data set that retains the original correlation information of the image. Compression ratios of 30:1 using this method are typical. As a result, far fewer memory calls and data entry comparisons are required to accurately determine tracer particle movement. In addition, by utilizing an error correlation function, pixel comparisons are made through single integer calculations eliminating time consuming multiplication and floating point arithmetic. Thus, this algorithm typically results in much higher correlation speeds and lower memory requirements than spectral and image shifting correlation algorithms. This paper describes the methodology of sparse array correlation as well as the speed, accuracy, and limitations of this unique algorithm. While the study presented here focuses on the process of correlating images stored in sparse format, the details of an image compression algorithm based on intensity gradient thresholding is presented and its effect on image correlation is discussed to elucidate the limitations and applicability of compression based PIV processing.
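To illustrate the compression step that sparse array correlation relies on, the sketch below keeps only pixels whose local intensity gradient exceeds a threshold and stores each retained pixel as a packed 32-bit word holding its value and location; the threshold, bit layout and names are illustrative rather than taken from the paper.

```python
import numpy as np

def compress_to_sparse(img: np.ndarray, grad_thr: float = 20.0):
    """Keep pixels in high-gradient regions; encode value and location per entry."""
    gy, gx = np.gradient(img.astype(np.float32))
    keep = np.hypot(gx, gy) > grad_thr
    ys, xs = np.nonzero(keep)
    vals = img[ys, xs]
    # Pack intensity and (x, y) into one 32-bit word per retained pixel:
    # 8 bits value | 12 bits x | 12 bits y (enough for images up to 4096 x 4096).
    return (vals.astype(np.uint32) << 24) | (xs.astype(np.uint32) << 12) | ys.astype(np.uint32)

img = np.zeros((256, 256), dtype=np.uint8)
img[100:140, 100:140] = 255                      # bright tracer-like square
words = compress_to_sparse(img)
ratio = img.size / max(words.size, 1)
print(f"retained {words.size} of {img.size} pixels (compression ~{ratio:.0f}:1)")
```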
30

Rohadi, Erfan, Anastasia Merry Christine, Arief Prasetyo, Rosa Andrie Asmara, Indrazno Siradjuddin, Ferdian Ronilaya, and Awan Setiawan. "Implementasi Video Streaming Lalu Lintas Kendaraan Dengan Server Raspberry Pi Menggunakan Protokol H.264." Jurnal Teknologi Informasi dan Ilmu Komputer 5, no. 5 (October 30, 2018): 629. http://dx.doi.org/10.25126/jtiik.2018551138.

Abstract:
Video surveillance system technology has become a very important tool, because today's society wants information that is quick to access and practical to use. In this study, the H.264 protocol is used to process video streaming in a video surveillance system; it acts as the sender and controller of streaming data packets from the surveillance camera to the receiver, i.e., the user of the video surveillance system. Video frame analysis of the H.264 protocol was performed on a live-streaming server in the form of an embedded system integrated into the video surveillance system with surveillance cameras. The results show that the H.264 protocol provides good video-quality compression, so this implementation of vehicle-traffic video streaming promises to help the public obtain information and learn real-time traffic conditions effectively and efficiently. The implementation streams video in real time and monitors traffic conditions at a location using available CCTV (Closed Circuit Television) cameras and a Raspberry Pi as the server.
APA, Harvard, Vancouver, ISO, and other styles
31

Barannik, Vladimir, Yuriy Ryabukha, Pavlo Gurzhiy, Vitaliy Tverdokhlib, and Igor Shevchenko. "TRANSFORMANTS BIT REPRESENTATION ENCODING WITHIN VIDEO BIT RATE CONTROL." Information systems and technologies security, no. 1 (1) (2019): 52–56. http://dx.doi.org/10.17721/ists.2019.1.52-56.

Full text
Abstract:
The conceptual foundations of constructing an effective encoding method within the bit rate control module of video traffic, in a video data processing system at the source level, are considered. The essence of using the proposed method in the course of video stream bit rate control is disclosed, namely, the principles of constructing the code representation of a frame fragment and the approaches for determining the structural units of an individual video frame within which the control is performed. The method focuses on processing the bit representation of the DCT transformants; at this processing stage the transformant is considered as the structural component of the video stream frame at which the encoding is performed. At the same time, to ensure flexibility in controlling the video traffic bit rate, decomposition is performed for each of the transformants down to the level of a plurality of bit planes. It is argued that the proposed approach is potentially capable of reducing the video stream bit rate under the worst conditions, that is, when component coding is performed. In addition, this principle of forming the code representation of a video stream fragment makes it possible to control the level of error that can be introduced in the bit rate control process. However, in conditions where the bit representation of the transformant is encoded, the method is able to provide higher compression rates, because the detection probability of binary series lengths and the values of the detected lengths within a bit plane are greater than in the case of component coding. This is explained by the structural features of the distribution of binary elements within each of the bit planes, which together form the DCT transformant. In particular, high-frequency regions of the transformant are most often formed by chains of zero elements. The solutions proposed in the development of the encoding method are able to provide sufficient flexibility to control the bit rate of the video stream, as well as the ability to quickly change the bit rate over a wide range of values.
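The following Python fragment is a simplified illustration of the decomposition described above: a quantized DCT transformant is split into bit planes, and the lengths of the zero runs within a plane (the quantity exploited during encoding) are collected. The helper names and the 8-bit depth are assumptions for the sketch, not the authors' coder.

```python
import numpy as np

def bit_planes(transformant, n_bits=8):
    """Decompose a quantized DCT transformant into binary bit planes,
    most significant plane first. Assumes non-negative magnitudes; in
    practice a separate sign plane would be carried alongside."""
    t = np.asarray(transformant, dtype=np.uint32)
    return [((t >> b) & 1).astype(np.uint8) for b in range(n_bits - 1, -1, -1)]

def zero_run_lengths(plane):
    """Run lengths of consecutive zero bits in a plane scanned row by row;
    long zero chains in the high-frequency planes make them highly compressible."""
    runs, run = [], 0
    for b in plane.flatten():
        if b == 0:
            run += 1
        else:
            if run:
                runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return runs

# A rate controller could then drop the least significant planes first,
# trading a bounded reconstruction error for a lower bit rate.
```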
APA, Harvard, Vancouver, ISO, and other styles
32

Barannik, Volodymyr, Yuriy Ryabukha, Pavlo Hurzhii, Vitalii Tverdokhlib, and Oleh Kulitsa. "TRANSFORMANTS CODING TECHNOLOGY IN THE CONTROL SYSTEM OF VIDEO STREAMS BIT RATE." Cybersecurity: Education, Science, Technique 3, no. 7 (2020): 63–71. http://dx.doi.org/10.28925/2663-4023.2020.7.6371.

Full text
Abstract:
The conceptual foundations of constructing an effective encoding method within the bit rate control module of video traffic, in a video data processing system at the source level, are considered. The essence of using the proposed method in the course of video stream bit rate control is disclosed, namely, the principles of constructing the code representation of a frame fragment and the approaches for determining the structural units of an individual video frame within which the control is performed. The method focuses on processing the bit representation of the DCT transformants; at this processing stage the transformant is considered as the structural component of the video stream frame at which the encoding is performed. At the same time, to ensure flexibility in controlling the video traffic bit rate, decomposition is performed for each of the transformants down to the level of a plurality of bit planes. It is argued that the proposed approach is potentially capable of reducing the video stream bit rate under the worst conditions, that is, when component coding is performed. In addition, this principle of forming the code representation of a video stream fragment makes it possible to control the level of error that can be introduced in the bit rate control process. However, in conditions where the bit representation of the transformant is encoded, the method is able to provide higher compression rates, because the detection probability of binary series lengths and the values of the detected lengths within a bit plane are greater than in the case of component coding. This is explained by the structural features of the distribution of binary elements within each of the bit planes, which together form the DCT transformant. In particular, high-frequency regions of the transformant are most often formed by chains of zero elements. The solutions proposed in the development of the encoding method are able to provide sufficient flexibility to control the bit rate of the video stream, as well as the ability to quickly change the bit rate over a wide range of values.
APA, Harvard, Vancouver, ISO, and other styles
33

Storch, Iago, Bruno Zatt, Luciano Agostini, Guilherme Correa, and Daniel Palomino. "Memory-aware Workload Balancing Technique based on Decision Trees for Parallel HEVC Video Coding." Journal of Integrated Circuits and Systems 15, no. 3 (December 28, 2020): 1–9. http://dx.doi.org/10.29292/jics.v15i3.96.

Full text
Abstract:
Video coding applications demand high computational effort to achieve high compression rates at a low perceptual quality expense. In order to reach acceptable encoding time for such applications, modern video coding standards have been employing parallelism approaches to exploit multiprocessing platforms, such as the tiling tool from the HEVC standard. When employing Tiles, each frame is divided into rectangular-shaped regions which can be encoded independently. However, although it is possible to distribute the data equally among the processing units when using Tiles, balancing the workload among processing units poses great challenges. Therefore, this paper proposes a workload balancing technique aiming to speed up the HEVC parallel encoding using Tiles. Different from other literature works, the proposed solution uses a novel approach employing static uniform tiling to avoid memory management difficulties that may emerge when dynamic tiling solutions are employed. The proposed technique relies on the workload distribution history of previous frames to predict the workload distribution of the current frame. Then, the proposed technique balances the workload among Tiles by employing a workload reduction scheme based on decision trees in the coding process. Experimental tests show that the proposed solution outperforms the standard uniform tiling and it is competitive with related works in terms of speedup. Moreover, the solution optimizes resources usage in multiprocessing platforms, presents a negligible coding efficiency loss and avoids increasing memory bandwidth usage by 9.8%, on average, when compared to dynamic tiling solutions, which can impact significantly the performance in memory-constrained platforms.
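A minimal sketch of the history-based balancing idea: per-tile workloads are predicted from the encoding times of previous frames, and heavier tiles receive stronger workload-reduction levels so that all threads finish at roughly the same time. A simple exponential average stands in for the paper's decision-tree model; all names and constants are illustrative.

```python
def predict_tile_workload(history, alpha=0.5):
    """Exponentially smoothed per-tile workload prediction from the encoding
    times of previous frames (history: one list of times per tile)."""
    predictions = []
    for tile_times in history:
        est = tile_times[0]
        for t in tile_times[1:]:
            est = alpha * t + (1 - alpha) * est
        predictions.append(est)
    return predictions

def assign_reduction_levels(predictions, levels=(0, 1, 2, 3)):
    """Give tiles predicted to be heavier a stronger workload-reduction level."""
    target = sum(predictions) / len(predictions)
    assigned = []
    for p in predictions:
        overload = max(0.0, (p - target) / target)
        level = min(len(levels) - 1, int(overload * len(levels)))
        assigned.append(levels[level])
    return assigned
```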
APA, Harvard, Vancouver, ISO, and other styles
34

Frost, T. M. E., and C. J. Theaker. "Real-time video data compression system." IEE Proceedings E Computers and Digital Techniques 137, no. 5 (1990): 337. http://dx.doi.org/10.1049/ip-e.1990.0041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Hwang, Seokha, Seungsik Moon, Dongyun Kam, Inn-Yeal Oh, and Youngjoo Lee. "High-Throughput and Low-Latency Digital Baseband Architecture for Energy-Efficient Wireless VR Systems." Electronics 8, no. 7 (July 22, 2019): 815. http://dx.doi.org/10.3390/electronics8070815.

Full text
Abstract:
This paper presents a novel baseband architecture that supports high-speed wireless VR solutions using 60 GHz RF circuits. Based on the experimental observations by our previous 60 GHz transceiver circuits, the efficient baseband architecture is proposed to enhance the quality of transmission. To achieve a zero-latency transmission, we define an (106,920, 95,040) interleaved-BCH error-correction code (ECC), which removes iterative processing steps in the previous LDPC ECC standardized for the near-field wireless communication. Introducing the block-level interleaving, the proposed baseband processing successfully scatters the existing burst errors to the small-sized component codes, and recovers up to 1080 consecutive bit errors in a data frame of 106,920 bits. To support the high-speed wireless VR system, we also design the massive-parallel BCH encoder and decoder, which is tightly connected to the block-level interleaver and de-interleaver. Including the high-speed analog interfaces for the external devices, the proposed baseband architecture is designed in 65 nm CMOS, supporting a data rate of up to 12.8 Gbps. Experimental results show that the proposed wireless VR solution can transfer up to 4 K high-resolution video streams without using time-consuming compression and decompression, successfully achieving a transfer latency of 1 ms.
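The burst-spreading effect of block-level interleaving can be illustrated with a small NumPy sketch: component codewords are written row-wise and transmitted column-wise, so consecutive channel errors land in different codewords. The sizes here are generic placeholders, not the (106,920, 95,040) construction of the paper.

```python
import numpy as np

def interleave(codewords):
    """Write component codewords row-wise and read the block column-wise,
    so a burst of consecutive channel errors is spread across many codewords."""
    block = np.array(codewords, dtype=np.uint8)   # shape: (n_codewords, codeword_len)
    return block.T.flatten()                      # transmitted bit stream

def deinterleave(bits, n_codewords, codeword_len):
    """Inverse operation at the receiver, applied before per-codeword decoding."""
    return np.asarray(bits, dtype=np.uint8).reshape(codeword_len, n_codewords).T

# With n component codewords each correcting t bit errors, a burst of up to
# n * t consecutive channel errors is still correctable after de-interleaving.
```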
APA, Harvard, Vancouver, ISO, and other styles
36

Naik, Dr Pramod Kumar, Vyasaraj T, and Ramachandra Ballary. "Real Time Implementation of Video Compression Based on DWT." Journal of University of Shanghai for Science and Technology 23, no. 07 (July 3, 2021): 264–68. http://dx.doi.org/10.51201/jusst/21/07135.

Full text
Abstract:
In this paper, a highly efficient 3D Discrete Wavelet Transform (DWT) architecture is designed and implemented on a Xilinx 7-series FPGA. The throughput is analyzed and the performance metrics are compared across different video file formats. Today's high-end image and video content consumes a huge amount of memory. The designed DWT-based video compression architecture is also executed in parallel-processing mode, and the tabulated execution times demonstrate a reduction in processing time. This paper demonstrates the superiority of the designed architecture in both the normal and the parallel mode of execution; higher throughput of a video processing design results in lower power consumption. The internal architecture of the design is explained briefly; it is synthesized in Xilinx Vivado 17.4 and implemented on a Zed board. The experimental results of the FPGA implementation demonstrate the memory-saving capability and superiority of this architecture, which has drastically reduced latency and enhanced speed of operation.
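For reference, a software sketch of a single-level separable 3D DWT (Haar wavelet, applied along time, height and width of a video block) is shown below; it only illustrates the transform that the FPGA architecture accelerates and makes no attempt to model the hardware design. Dimensions are assumed even, and all names are illustrative.

```python
import numpy as np

def haar_1d(x, axis):
    """One level of the Haar DWT along the given axis (length must be even)."""
    x = np.asarray(x, dtype=np.float64)
    even = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    low = (even + odd) / np.sqrt(2.0)     # approximation coefficients
    high = (even - odd) / np.sqrt(2.0)    # detail coefficients
    return np.concatenate([low, high], axis=axis)

def dwt3d_level(video_block):
    """Separable single-level 3D DWT over a (time, height, width) block."""
    out = haar_1d(video_block, axis=0)    # temporal filtering
    out = haar_1d(out, axis=1)            # vertical filtering
    out = haar_1d(out, axis=2)            # horizontal filtering
    return out

# Compression follows by quantizing or discarding the high-frequency sub-bands.
```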
APA, Harvard, Vancouver, ISO, and other styles
37

Kannan, S. Thabasu, and S. Azhagu Senthil. "Evaluvation of Multiresolution Watermarking Algorithm." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 8 (August 30, 2017): 85. http://dx.doi.org/10.23956/ijarcsse.v7i8.29.

Full text
Abstract:
Nowadays watermarking plays a pivotal role in most industries for providing security to their own as well as hired or leased data. The main aim of this paper is to study multiresolution watermarking algorithms and to choose an effective and efficient one for improving resistance to data compression. The computational savings from such a multiresolution watermarking framework are obvious. The multiresolution property makes the watermarking scheme robust to image/video downsampling by a power of two in either space or time. There is no common framework for multiresolution digital watermarking of both images and video. A multiresolution watermark based on the wavelet transform is embedded in each frequency band of the Discrete Wavelet Transform (DWT) domain, and it can therefore resist destruction by image processing. The rapid development of the Internet introduces a new set of challenging security problems, one of the most significant being the prevention of unauthorized copying and distribution of digital productions. Digital watermarking provides a powerful way to claim intellectual protection. We propose an idea for enhancing the robustness of extracted watermarks: the watermark is treated as a transmitted signal, while the destruction caused by attackers is regarded as noisy distortion in the channel. For the implementation, a minimum of nine coordinate positions is used. The watermarking algorithms considered in this study are the Corvi algorithm and the Wang algorithm. In all graphs, the X axis is the peak signal-to-noise ratio (PSNR) and the Y axis is the correlation with the original watermark. The threshold value α is set to 5; if the result is smaller than the threshold, the watermark is considered feasible, otherwise it is not.
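A hedged sketch of one simple multiresolution embedding/detection scheme of the kind compared in the paper: a bipolar watermark is added to a coarse DWT sub-band and detected by correlation. It assumes the PyWavelets package and image sides divisible by 2**level; it is not the exact Corvi or Wang algorithm.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def embed_watermark(image, watermark_bits, strength=2.0, level=2):
    """Additively embed a bipolar watermark into the coarsest approximation
    sub-band of a multi-level Haar DWT."""
    coeffs = pywt.wavedec2(image.astype(np.float64), 'haar', level=level)
    approx = coeffs[0]
    w = 2.0 * np.asarray(watermark_bits, dtype=np.float64) - 1.0   # {0,1} -> {-1,+1}
    coeffs[0] = approx + strength * np.resize(w, approx.shape)
    return pywt.waverec2(coeffs, 'haar')

def detect_watermark(image, watermark_bits, level=2):
    """Correlation between the (mean-removed) approximation sub-band and the
    bipolar watermark; compare the value against a threshold to decide presence."""
    coeffs = pywt.wavedec2(image.astype(np.float64), 'haar', level=level)
    approx = coeffs[0] - np.mean(coeffs[0])
    w = np.resize(2.0 * np.asarray(watermark_bits, dtype=np.float64) - 1.0, approx.shape)
    return float(np.sum(approx * w) / approx.size)
```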
APA, Harvard, Vancouver, ISO, and other styles
38

Xu, Yun Geng, and San Xing Cao. "Real-Time Video Acquisition and Frame Compression Processing Technology Based on FFmpeg." Applied Mechanics and Materials 631-632 (September 2014): 494–97. http://dx.doi.org/10.4028/www.scientific.net/amm.631-632.494.

Full text
Abstract:
Currently, video has become the most popular form of media, and the traditional video field is undergoing an analog-to-digital transformation. Using computers to process video information has broad application prospects in many areas. This paper proposes an approach to real-time video acquisition and frame compression using FFmpeg and PHP functions to achieve a time-lapse photography effect. Finally, the running status of the system is tested through the Apache logs.
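One way to reproduce the described capture-and-compress flow is to drive the ffmpeg command-line tool from a script, grabbing frames at a low rate and then encoding them into a time-lapse clip. The device path, durations, frame rates and file names below are illustrative assumptions (Linux V4L2 capture is assumed); the paper's own pipeline uses PHP rather than Python.

```python
import os
import subprocess

os.makedirs("frames", exist_ok=True)

# Grab one frame per second for 60 seconds from a capture device
# (the V4L2 device path is illustrative).
capture = [
    "ffmpeg", "-f", "v4l2", "-i", "/dev/video0", "-t", "60",
    "-vf", "fps=1", "-q:v", "2", "frames/img%06d.jpg",
]

# Compress the captured frame sequence into an H.264 clip: playing
# 1-per-second frames back at 25 fps gives the time-lapse effect.
encode = [
    "ffmpeg", "-framerate", "25", "-i", "frames/img%06d.jpg",
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "timelapse.mp4",
]

subprocess.run(capture, check=True)
subprocess.run(encode, check=True)
```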
APA, Harvard, Vancouver, ISO, and other styles
39

Zhang, Lei, Xiaolin Wu, and Paul Bao. "Real-time lossless compression of mosaic video sequences." Real-Time Imaging 11, no. 5-6 (October 2005): 370–77. http://dx.doi.org/10.1016/j.rti.2005.07.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Stensland, Håkon Kvale, Vamsidhar Reddy Gaddam, Marius Tennøe, Espen Helgedagsrud, Mikkel Næss, Henrik Kjus Alstad, Carsten Griwodz, Pål Halvorsen, and Dag Johansen. "Processing Panorama Video in Real-time." International Journal of Semantic Computing 08, no. 02 (June 2014): 209–27. http://dx.doi.org/10.1142/s1793351x14400054.

Full text
Abstract:
There are many scenarios where high resolution, wide field of view video is useful. Such panorama video may be generated using camera arrays where the feeds from multiple cameras pointing at different parts of the captured area are stitched together. However, processing the different steps of a panorama video pipeline in real-time is challenging due to the high data rates and the stringent timeliness requirements. In our research, we use panorama video in a sport analysis system called Bagadus. This system is deployed at Alfheim stadium in Tromsø, and due to live usage, the video events must be generated in real-time. In this paper, we describe our real-time panorama system built using a low-cost CCD HD video camera array. We describe how we have implemented different components and evaluated alternatives. The performance results from experiments ran on commodity hardware with and without co-processors like graphics processing units (GPUs) show that the entire pipeline is able to run in real-time.
APA, Harvard, Vancouver, ISO, and other styles
41

Westwater, Raymond, and Borko Furht. "The XYZ Algorithm for Real-Time Compression of Full-Motion Video." Real-Time Imaging 2, no. 1 (February 1996): 19–34. http://dx.doi.org/10.1006/rtim.1996.0003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Zhen-tao, Tao Li, and Jun-gang Han. "A novel reconfigurable data-flow architecture for real time video processing." Journal of Shanghai Jiaotong University (Science) 18, no. 3 (June 2013): 348–59. http://dx.doi.org/10.1007/s12204-013-1405-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Zhou, Yun. "A Video Compression Method for Wireless Transmission." Advanced Materials Research 532-533 (June 2012): 1167–71. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1167.

Full text
Abstract:
With the ever-increasing expansion of wireless voice and data plan demands, bandwidth-constrained wireless channels combined with mobility issues pose a great challenge to the hardware configuration, software development and algorithm design of communication systems. This paper presents our work on a modular and standards-independent DSP-based testbed designed for real-time video communication over low-bandwidth channels. In particular, the real-time transmission of MPEG-4 video and JPEG2000 still images over the Bluetooth wireless standard is demonstrated here. The testbed was designed to explore the possibilities, demands, and challenges of real-time wireless video transmission. The work could have tremendous near-future applications.
APA, Harvard, Vancouver, ISO, and other styles
44

Ramachandran, S., and S. Srinivasan. "A Novel, Automatic Quality Control Scheme for Real Time Image Transmission." VLSI Design 14, no. 4 (January 1, 2002): 329–35. http://dx.doi.org/10.1080/10655140290011131.

Full text
Abstract:
A novel scheme to compute energy on-the-fly and thereby control the quality of the image frames dynamically is presented along with its FPGA implementation. This scheme is suitable for incorporation in image compression systems such as video encoders. In this new scheme, processing is automatically stopped when the desired quality is achieved for the image being processed by using a concept called pruning. Pruning also increases the processing speed by a factor of more than two when compared to the conventional method of processing without pruning. An MPEG-2 encoder implemented using this scheme is capable of processing good quality monochrome and color images of sizes up to 1024 × 768 pixels at the rate of 42 and 28 frames per second, respectively, with a compression ratio of over 17:1. The encoder is also capable of working in the fixed pruning level mode with user programmable features.
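The pruning idea can be sketched as follows: coefficients of a transformed block are accumulated in zig-zag order until a target fraction of the block energy is reached, and the remaining coefficients are zeroed so that no further processing is spent on them. The function below is an illustrative software analogue of the hardware scheme, with the zig-zag order (a flat index permutation, e.g., the standard 8x8 scan) supplied by the caller; names and the quality parameter are assumptions.

```python
import numpy as np

def prune_block(dct_block, zigzag_order, quality=0.95):
    """Keep coefficients (in zig-zag order) until the accumulated energy reaches
    the requested fraction of total block energy; zero the rest. Stopping early
    is what saves processing time in the pruning scheme."""
    flat = dct_block.flatten()
    total = float(np.sum(flat.astype(np.float64) ** 2)) or 1.0
    pruned = np.zeros_like(flat)
    acc = 0.0
    for idx in zigzag_order:
        pruned[idx] = flat[idx]
        acc += float(flat[idx]) ** 2
        if acc / total >= quality:
            break
    return pruned.reshape(dct_block.shape)
```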
APA, Harvard, Vancouver, ISO, and other styles
45

Zhu, Li, and Yi Min Yang. "Real-Time Multitasking Video Encoding Processing System of Multicore." Applied Mechanics and Materials 66-68 (July 2011): 2074–79. http://dx.doi.org/10.4028/www.scientific.net/amm.66-68.2074.

Full text
Abstract:
This paper presents optimizations based on the processor series produced by NVIDIA, such as GeForce, Tegra and Nexus, and discusses the future development of video and image processors. It expounds the currently most popular DSP optimization techniques and objectives, and optimizes the designs described in various existing papers. Based on NVIDIA's series of products, it specifically discusses the CUDA GPU architecture and proposes hardware and algorithms for the currently most popular video encoding equipment, drawing on practical technology to improve the transmission and encoding of multimedia data.
APA, Harvard, Vancouver, ISO, and other styles
46

Kubas, Malwina, and Grzegorz Sarwas. "FastRIFE: Optimization of Real-Time Intermediate Flow Estimation for Video Frame Interpolation." Journal of WSCG 22, no. 1-2 (2021): 21–28. http://dx.doi.org/10.24132/jwscg.2021.29.3.

Full text
Abstract:
The problem of video inter-frame interpolation is an essential task in the field of image processing. Correctly increasing the number of frames in a recording while maintaining smooth movement improves the quality of the played video sequence, enables more effective compression and allows creating slow-motion recordings. This paper proposes the FastRIFE algorithm, a speed improvement of the RIFE (Real-Time Intermediate Flow Estimation) model. The novel method was examined and compared with other recently published algorithms. All source codes are available at: https://gitlab.com/malwinq/interpolation-of-images-for-slow-motion-videos.
APA, Harvard, Vancouver, ISO, and other styles
47

Sunny Joseph, Ajai, and Elizabeth Isaac. "GPU Accelerated real-time Melanoma Detection." International Journal of Engineering & Technology 7, no. 3 (June 27, 2018): 1208. http://dx.doi.org/10.14419/ijet.v7i3.13169.

Full text
Abstract:
Melanoma is recognized as one of the most dangerous types of skin cancer. A novel method to detect melanoma in real time with the help of a Graphics Processing Unit (GPU) is proposed. Existing systems can process medical images and perform a diagnosis based on image processing techniques and artificial intelligence. They are also able to perform video processing with the help of large hardware resources at the backend, which incurs significantly higher cost and space requirements and makes them complex in both software and hardware. Graphics Processing Units have high processing capabilities compared to the Central Processing Unit of a system. Various approaches were used for implementing real-time detection of melanoma. The results and analysis of these approaches, and the best approach identified by our study, are discussed in this work. A performance analysis of the approaches in CPU and GPU environments is also presented. The proposed system performs real-time analysis of live medical video data and produces a diagnosis. The implemented system yielded an accuracy of 90.133%, which is comparable to existing systems.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Yang, Yuandong Liu, and Lee D. Han. "Real-Time Piecewise Regression." Transportation Research Record: Journal of the Transportation Research Board 2643, no. 1 (January 2017): 9–18. http://dx.doi.org/10.3141/2643-02.

Full text
Abstract:
Ubiquitous sensing technologies make big data a trendy topic and a favored approach in transportation studies and applications, but the increasing volumes of data sets present remarkable challenges to data collection, storage, transfer, visualization, and processing. Fundamental aspects of big data in transportation are discussed, including how many data to collect and how to collect data effectively and economically. The focus is GPS trajectory data, which are used widely in this domain. An incremental piecewise regression algorithm is used to evaluate and compress GPS locations as they are produced. Row-wise QR decomposition and singular value decomposition are shown to be valid numerical algorithms for incremental regression. Sliding window–based piecewise regression can subsample the GPS streaming data instantaneously to preserve only the points of interest. Algorithm performance is evaluated completely as accuracy and compression power. A procedure is presented for users to choose the best parameter value for their GPS devices. Results of experiments with real-world trajectory data indicate that when the proper parameter value is selected, the proposed method achieves significant compression power (more than 10 times), maintains acceptable accuracy (less than 5 m), and always outperforms the fixed-rate sampling approach.
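A minimal sketch of sliding-window piecewise linear compression of a trajectory: the current segment is extended while the least-squares fit error stays within the tolerance (e.g., 5 m), and only segment breakpoints are kept. An ordinary least-squares fit stands in for the incremental QR/SVD update described in the paper; names and units are assumptions.

```python
import numpy as np

def piecewise_compress(points, tol=5.0):
    """Online piecewise-linear compression of a trajectory.
    points: list of (t, x) pairs, x in metres (e.g., easting of a GPS fix).
    Returns the indices of the points kept as segment breakpoints."""
    kept = [0]
    start = 0
    for i in range(2, len(points)):
        t = np.array([p[0] for p in points[start:i + 1]], dtype=float)
        x = np.array([p[1] for p in points[start:i + 1]], dtype=float)
        slope, intercept = np.polyfit(t, x, 1)        # least-squares line over the window
        err = np.max(np.abs(x - (slope * t + intercept)))
        if err > tol:
            kept.append(i - 1)                        # close the segment at the previous point
            start = i - 1
    kept.append(len(points) - 1)
    return kept
```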
APA, Harvard, Vancouver, ISO, and other styles
49

Horng, Mong-Fong, Hsu-Yang Kung, Chi-Hua Chen, and Feng-Jang Hwang. "Deep Learning Applications with Practical Measured Results in Electronics Industries." Electronics 9, no. 3 (March 19, 2020): 501. http://dx.doi.org/10.3390/electronics9030501.

Full text
Abstract:
This editorial introduces the Special Issue, entitled “Deep Learning Applications with Practical Measured Results in Electronics Industries”, of Electronics. Topics covered in this issue include four main parts: (I) environmental information analyses and predictions, (II) unmanned aerial vehicle (UAV) and object tracking applications, (III) measurement and denoising techniques, and (IV) recommendation systems and education systems. Four papers on environmental information analyses and predictions are as follows: (1) “A Data-Driven Short-Term Forecasting Model for Offshore Wind Speed Prediction Based on Computational Intelligence” by Panapakidis et al.; (2) “Multivariate Temporal Convolutional Network: A Deep Neural Networks Approach for Multivariate Time Series Forecasting” by Wan et al.; (3) “Modeling and Analysis of Adaptive Temperature Compensation for Humidity Sensors” by Xu et al.; (4) “An Image Compression Method for Video Surveillance System in Underground Mines Based on Residual Networks and Discrete Wavelet Transform” by Zhang et al. Three papers on UAV and object tracking applications are as follows: (1) “Trajectory Planning Algorithm of UAV Based on System Positioning Accuracy Constraints” by Zhou et al.; (2) “OTL-Classifier: Towards Imaging Processing for Future Unmanned Overhead Transmission Line Maintenance” by Zhang et al.; (3) “Model Update Strategies about Object Tracking: A State of the Art Review” by Wang et al. Five papers on measurement and denoising techniques are as follows: (1) “Characterization and Correction of the Geometric Errors in Using Confocal Microscope for Extended Topography Measurement. Part I: Models, Algorithms Development and Validation” by Wang et al.; (2) “Characterization and Correction of the Geometric Errors Using a Confocal Microscope for Extended Topography Measurement, Part II: Experimental Study and Uncertainty Evaluation” by Wang et al.; (3) “Deep Transfer HSI Classification Method Based on Information Measure and Optimal Neighborhood Noise Reduction” by Lin et al.; (4) “Quality Assessment of Tire Shearography Images via Ensemble Hybrid Faster Region-Based ConvNets” by Chang et al.; (5) “High-Resolution Image Inpainting Based on Multi-Scale Neural Network” by Sun et al. Two papers on recommendation systems and education systems are as follows: (1) “Deep Learning-Enhanced Framework for Performance Evaluation of a Recommending Interface with Varied Recommendation Position and Intensity Based on Eye-Tracking Equipment Data Processing” by Sulikowski et al. and (2) “Generative Adversarial Network Based Neural Audio Caption Model for Oral Evaluation” by Zhang et al.
APA, Harvard, Vancouver, ISO, and other styles
50

Boyun, Vitaliy. "Directions of Development of Intelligent Real Time Video Systems." Application and Theory of Computer Technology 2, no. 3 (April 27, 2017): 48. http://dx.doi.org/10.22496/atct.v2i3.65.

Full text
Abstract:
Real-time video systems play a significant role in many fields of science and technology. The range of their applications is constantly increasing, together with the requirements placed on them; this concerns especially real-time video systems with feedback. Conventional foundations and principles for constructing real-time video systems are extremely redundant and do not take into consideration the peculiarities of real-time processing and tasks, and therefore they do not meet the system requirements either technically or in informational and methodological terms. Therefore, the purpose of this research is to increase the responsiveness, productivity and effectiveness of real-time video systems with feedback when operating with high-speed objects and dynamic processes. The human visual analyzer is considered as a prototype for the construction of intelligent real-time video systems. Fundamental functions, structural and physical peculiarities of adaptation, and the processes taking place in the visual analyzer related to information processing are considered. High selectivity of information perception and wide parallelism of information processing on the retinal neuron layers and on the higher brain levels are the most important peculiarities of the visual analyzer for systems with feedback. The paper considers two directions of development of intelligent real-time video systems. The first direction is based on increasing the intellectuality of video systems through the development of new information and dynamic models for video information perception processes, principles of controlling and reading video information parameters from the sensor, adapting them to the requirements of a concrete task, and combining input processes with data processing. The second direction is associated with the development of new architectures for parallel perception and level-based processing of information directly on a video sensor matrix. The principles of annular and linear structures on the neuron layers, of close-range interaction and of layer specialization are used to simplify the neural network.
APA, Harvard, Vancouver, ISO, and other styles