Academic literature on the topic 'Video compression. Real-time data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video compression. Real-time data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Video compression. Real-time data processing"

1

Xu, Guo Sheng. "Design of Image Processing System Based on FPGA." Advanced Materials Research 403-408 (November 2011): 1281–84. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.1281.

Full text
Abstract:
To speed up image acquisition and make full use of the effective information, a design method for a CCD partial image scanning system is presented. The system performs high-speed data collection, high-speed video data compression, real-time video data network transmission, and real-time storage of the compressed picture data. The processed data are transferred to a PC through USB 2.0 in real time to reconstruct microscopic images of defects. Experiments show that the system has stable performance, real-time data transmission, and high image quality, and that the algorithm and scheme proposed in this paper are feasible.
APA, Harvard, Vancouver, ISO, and other styles
2

Khosravi, Mohammad R., Sadegh Samadi, and Reza Mohseni. "Spatial Interpolators for Intra-Frame Resampling of SAR Videos: A Comparative Study Using Real-Time HD, Medical and Radar Data." Current Signal Transduction Therapy 15, no. 2 (December 1, 2020): 144–96. http://dx.doi.org/10.2174/2213275912666190618165125.

Full text
Abstract:
Background: Real-time video coding is a very interesting area of research with extensive applications in remote sensing and medical imaging. Many research works and multimedia standards have been developed for this purpose. Some processing ideas in the area focus on second-step (additional) compression of videos coded by existing standards like MPEG 4.14. Materials and Methods: In this article, an evaluation of some techniques with different complexity orders for the video compression problem is performed. All compared techniques are based on interpolation algorithms in the spatial domain. In detail, the data are acquired with four interpolators of different computational complexity: the fixed weights quartered interpolation (FWQI) technique and the Nearest Neighbor (NN), Bi-Linear (BL) and Cubic Convolution (CC) interpolators. They are used for the compression of HD color videos in real-time applications, real frames of video synthetic aperture radar (video SAR or ViSAR), and a high-resolution medical sample. Results: Comparative results are described for three different metrics, including two reference-based Quality Assessment (QA) measures and an edge preservation factor, to achieve a general perception of the various dimensions of the problem. Conclusion: Comparisons show that there is a decidable trade-off among the video codecs in terms of greater similarity to a reference, preservation of high-frequency edge information, and low computational complexity.
APA, Harvard, Vancouver, ISO, and other styles
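The four spatial interpolators compared in this entry are standard resampling kernels of increasing cost (NN < BL < CC). As an illustration only, and assuming a grayscale frame stored as a list of rows, a minimal pure-Python sketch of the bilinear (BL) interpolator might look like this:

```python
def bilinear_resample(frame, out_h, out_w):
    """Resample a 2D frame (list of rows of floats) to (out_h, out_w)
    using bilinear interpolation."""
    in_h, in_w = len(frame), len(frame[0])
    out = []
    for i in range(out_h):
        # Map the output coordinate back into the input grid.
        y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = int(y)
        y1 = min(y0 + 1, in_h - 1)
        fy = y - y0
        row = []
        for j in range(out_w):
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = int(x)
            x1 = min(x0 + 1, in_w - 1)
            fx = x - x0
            # Weighted average of the four surrounding samples.
            top = frame[y0][x0] * (1 - fx) + frame[y0][x1] * fx
            bot = frame[y1][x0] * (1 - fx) + frame[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Nearest Neighbor would simply round `y` and `x` to the closest integer sample, while Cubic Convolution weights a 4×4 neighbourhood per output pixel; that is the complexity ordering the abstract trades off against quality.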
3

Xu, Guo Sheng. "Detection Design and Implementation of Image Capture and Processing System Based on FPGA." Advanced Materials Research 433-440 (January 2012): 4565–70. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.4565.

Full text
Abstract:
Based on the project described in this article, an image capture and processing system based on FPGA is proposed: a low-cost, high-performance FPGA is selected as the main core, and the design of the whole system, including software and hardware, is implemented. The system performs high-speed data collection, high-speed video data compression, real-time video data network transmission, and real-time storage of the compressed picture data. The processed data are transferred to a PC through USB 2.0 in real time to reconstruct microscopic images of defects. Experimental results prove that the algorithm and scheme proposed in this paper are correct and feasible.
APA, Harvard, Vancouver, ISO, and other styles
4

Ahmed, Zayneb, Abir Jaafar Hussain, Wasiq Khan, Thar Baker, Haya Al-Askar, Janet Lunn, Raghad Al-Shabandar, Dhiya Al-Jumeily, and Panos Liatsis. "Lossy and Lossless Video Frame Compression: A Novel Approach for High-Temporal Video Data Analytics." Remote Sensing 12, no. 6 (March 20, 2020): 1004. http://dx.doi.org/10.3390/rs12061004.

Full text
Abstract:
The smart city concept has attracted high research attention in recent years within diverse application domains, such as crime suspect identification, border security, transportation, and aerospace. Specific focus has been on increased automation using data-driven approaches, while leveraging remote sensing and real-time streaming of heterogeneous data from various resources, including unmanned aerial vehicles, surveillance cameras, and low-earth-orbit satellites. One of the core challenges in the exploitation of such high-temporal data streams, specifically videos, is the trade-off between the quality of video streaming and limited transmission bandwidth. An optimal compromise is needed between video quality and, subsequently, the recognition, understanding, and efficient processing of large amounts of video data. This research proposes a novel unified approach to lossy and lossless video frame compression, which is beneficial for the autonomous processing and enhanced representation of high-resolution video data in various domains. The proposed fast block matching motion estimation technique, namely mean predictive block matching, is based on the principle that general motion in any video frame is usually coherent. This coherent nature of video frames dictates a high probability that a macroblock has the same direction of motion as the macroblocks surrounding it. The technique employs the partial distortion elimination algorithm to reduce the search time: the partial sum of the matching distortion between the current macroblock and a candidate is abandoned as soon as it surpasses the current lowest error. Experimental results demonstrate the superiority of the proposed approach over state-of-the-art techniques, including the four-step search, three-step search, diamond search, and new three-step search.
APA, Harvard, Vancouver, ISO, and other styles
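The partial distortion elimination step described in this abstract terminates a SAD computation as soon as the running sum exceeds the best distortion found so far. The sketch below shows that idea inside a plain full search; the paper's mean predictive block matching additionally seeds the search from neighbouring macroblocks' vectors, which is not reproduced here:

```python
def sad_with_early_exit(cur, ref, bx, by, dx, dy, n, best):
    """Partial SAD between the n x n block of `cur` at (bx, by) and the
    candidate block of `ref` displaced by (dx, dy); abort as soon as the
    running sum exceeds the best distortion found so far."""
    total = 0
    for i in range(n):
        for j in range(n):
            total += abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
        if total > best:          # partial distortion elimination
            return None
    return total

def motion_search(cur, ref, bx, by, n, search):
    """Full search over a +/- `search` window, pruned by early exit."""
    best, best_mv = float("inf"), (0, 0)
    h, w = len(ref), len(ref[0])
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy and by + dy + n <= h and
                    0 <= bx + dx and bx + dx + n <= w):
                continue
            sad = sad_with_early_exit(cur, ref, bx, by, dx, dy, n, best)
            if sad is not None and sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv, best
```

Checking `total > best` once per block row keeps the bookkeeping cheap while still skipping most of the arithmetic for poor candidates.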
5

Sharma, Chirag, Amandeep Bagga, Bhupesh Kumar Singh, and Mohammad Shabaz. "A Novel Optimized Graph-Based Transform Watermarking Technique to Address Security Issues in Real-Time Application." Mathematical Problems in Engineering 2021 (April 8, 2021): 1–27. http://dx.doi.org/10.1155/2021/5580098.

Full text
Abstract:
Multimedia technologies are gaining a lot of popularity these days, and many unauthorized persons are gaining access to multimedia such as videos, audio, and images. The transmission of multimedia across the Internet by unauthorized persons has led to the problem of illegal distribution, which arises when copyrighted data is accessed without the knowledge of the copyright owner. Videos were the most frequently attacked data during the COVID-19 pandemic. In this paper, a frame-selection video watermarking technique is proposed to tackle this issue. The proposed work describes frame selection, followed by watermark embedding and testing of the technique against various attacks. The watermark is embedded in selected frames of the video, and as an additional security feature, hyperchaotic encryption is applied to the watermark before embedding. Watermark embedding is done using the graph-based transform and singular-value decomposition, and the performance of the technique is further optimized using a hybrid combination of grey wolf optimization and a genetic algorithm. Many researchers face the challenge of quality loss after embedding a watermark; the proposed technique aims to overcome this challenge. A total of 6 videos (Akiyo, Coastguard, Foreman, News, Bowing, and Pure Storage) are used in the research. The performance of the proposed technique has been evaluated against practical video processing attacks: Gaussian noise, sharpening, rotation, blurring, and JPEG compression.
APA, Harvard, Vancouver, ISO, and other styles
6

Ebrahim, Mansoor, Syed Hasan Adil, Kamran Raza, and Syed Saad Azhar Ali. "Block Compressive Sensing Single-View Video Reconstruction Using Joint Decoding Framework for Low Power Real Time Applications." Applied Sciences 10, no. 22 (November 10, 2020): 7963. http://dx.doi.org/10.3390/app10227963.

Full text
Abstract:
Several real-time visual monitoring applications, such as surveillance, mental state monitoring, driver drowsiness detection and patient care, require equipping high-quality cameras with wireless sensors to form visual sensor nodes, and this creates an enormous amount of data that has to be managed and transmitted at the sensor node. Moreover, as the sensor nodes are battery-operated, power utilization is one of the key concerns that must be considered. One solution to this issue is to reduce the amount of data that has to be transmitted using specific compression techniques. The conventional compression standards are based on complex encoders (which require high processing power) and simple decoders, and are thus not pertinent for battery-operated applications, i.e., VSNs with primitive hardware. In contrast, compressive sensing (CS), a distributed source coding mechanism, has transformed the standard coding mechanism: it is based on the idea of a simple encoder (transmitting fewer data, hence low processing requirements) and a complex decoder, and is considered a better option for VSN applications. In this paper, a CS-based joint decoding (JD) framework using frame prediction (from keyframes) and residual reconstruction for single-view video is proposed. The idea is to exploit the redundancies present in the key and non-key frames to produce side information that refines the quality of the non-key frames. The proposed method consists of two main steps, frame prediction and residual reconstruction, and the final reconstruction is performed by adding the residual frame to the predicted frame. The proposed scheme was validated on various arrangements of frames; the association between correlated frames and compression performance is also analyzed, and the arrangement that produces the best results is selected. The comprehensive experimental analysis proves that the proposed JD method performs notably better than the independent block compressive sensing scheme at different subrates for various video sequences with low, moderate and high motion content, and it outperforms conventional CS video reconstruction schemes at lower subrates. Further, the proposed scheme was quantized and compared with conventional video codecs (DISCOVER, H.263, H.264) at various bitrates to evaluate its efficiency (rate-distortion, encoding, decoding).
APA, Harvard, Vancouver, ISO, and other styles
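The "simple encoder, complex decoder" asymmetry of compressive sensing can be illustrated with a toy block measurement. The ±1 sensing matrix and the shared seed below are assumptions for illustration, not the construction used in the paper:

```python
import random

def cs_measure(block, m, seed=0):
    """Block compressive sensing encoder: project a length-n block onto
    m << n pseudo-random +/-1 rows. The sensor node only computes this
    cheap projection; reconstruction is left to a complex decoder that
    regenerates the same matrix from the shared seed."""
    n = len(block)
    rng = random.Random(seed)  # seed shared with the decoder
    phi = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
    return [sum(p * x for p, x in zip(row, block)) for row in phi]
```

With m = 4 and n = 16 this corresponds to a subrate of 0.25; the decoder, holding the same seed and hence the same matrix, performs the expensive sparse reconstruction.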
7

Chang, Ray-I., Yu-Hsien Chu, Chia-Hui Wang, and Niang-Ying Huang. "Video-Like Lossless Compression of Data Cube for Big Data Query in Wireless Sensor Networks." WSEAS TRANSACTIONS ON COMMUNICATIONS 20 (August 10, 2021): 139–45. http://dx.doi.org/10.37394/23204.2021.20.19.

Full text
Abstract:
Wireless Sensor Networks (WSNs) contain many sensor nodes placed in a chosen spatial area to temporally monitor environmental changes. As the sensor data is big, it should be well organized and stored in cloud servers to support efficient data query. In this paper, we first treat the streamed sensor data as "data cubes" to enhance data compression by video-like lossless compression (VLLC). With the layered tree structure of WSNs, compression can be done on the aggregation nodes of edge computing. Then, an algorithm is designed to organize and store these VLLC data cubes in cloud servers to support cost-effective big-data query with parallel processing. Our experiments use real-world sensor data. Results show that our method can save 94% of construction time and 79% of storage space while achieving the same retrieval time in data query, compared with the well-known database MySQL.
APA, Harvard, Vancouver, ISO, and other styles
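The abstract does not specify the VLLC codec itself, so the sketch below only illustrates the general "video-like" idea with plain temporal delta coding: the first frame of the cube is stored intact (intra) and later frames as element-wise differences (inter), which an entropy coder could then compress losslessly:

```python
def delta_encode_cube(cube):
    """Lossless 'video-like' coding of a data cube (list of 2D frames):
    keep the first frame intact, store each later frame as differences
    to its predecessor."""
    first = [row[:] for row in cube[0]]
    deltas = []
    for prev, cur in zip(cube, cube[1:]):
        deltas.append([[c - p for p, c in zip(pr, cr)]
                       for pr, cr in zip(prev, cur)])
    return first, deltas

def delta_decode_cube(first, deltas):
    """Exact inverse: rebuild every frame by accumulating the deltas."""
    frames = [[row[:] for row in first]]
    for d in deltas:
        prev = frames[-1]
        frames.append([[p + dd for p, dd in zip(pr, dr)]
                       for pr, dr in zip(prev, d)])
    return frames
```

Because consecutive sensor readings change slowly, the delta frames are dominated by zeros and small values, which is what makes the subsequent lossless compression effective.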
8

Tanseer, Iffrah, Nadia Kanwal, Mamoona Naveed Asghar, Ayesha Iqbal, Faryal Tanseer, and Martin Fleury. "Real-Time, Content-Based Communication Load Reduction in the Internet of Multimedia Things." Applied Sciences 10, no. 3 (February 8, 2020): 1152. http://dx.doi.org/10.3390/app10031152.

Full text
Abstract:
There is an increasing number of devices available for the Internet of Multimedia Things (IoMT). The demands these ever-more complex devices make are also increasing, in terms of energy efficiency, reliability, quality-of-service guarantees, higher data transfer rates, and general security. The IoMT itself faces challenges when processing and storing massive amounts of data, transmitting it over low bandwidths, bringing constrained resources to bear, and keeping power consumption in check. This paper's research focuses on an efficient video compression technique to reduce that communication load, potentially generated by diverse camera sensors, and also to improve bit-rates, while ensuring accuracy of representation and completeness of the video data. The proposed method applies a video content-based solution which, depending on the motion present between consecutive frames, decides whether to send only motion information or no frame information at all. The method is efficient in terms of limiting the data transmitted, potentially conserving device energy, and reducing latencies, at the cost of negotiable processing overheads. Data are also encrypted in the interests of confidentiality. Video quality measurements, along with a good number of Quality-of-Service measurements, demonstrate the value of the load reduction, as is also apparent from a comparison with other related methods.
APA, Harvard, Vancouver, ISO, and other styles
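The content-based decision described in this abstract (send no frame information, motion information only, or everything, depending on inter-frame motion) can be sketched with a mean-absolute-difference test. The thresholds and the three-way split below are illustrative assumptions, not values from the paper:

```python
def frame_decision(prev, cur, skip_thresh=1.0, motion_thresh=8.0):
    """Decide, per frame, what to transmit, based on the mean absolute
    difference (MAD) between consecutive frames. Thresholds are
    illustrative placeholders, not tuned values."""
    n = len(cur) * len(cur[0])
    mad = sum(abs(c - p) for pr, cr in zip(prev, cur)
              for p, c in zip(pr, cr)) / n
    if mad < skip_thresh:
        return "SKIP"         # no frame information at all
    if mad < motion_thresh:
        return "MOTION_ONLY"  # transmit motion information only
    return "FULL_FRAME"       # too much change: send the whole frame
```

Static scenes then cost almost no bandwidth, which is where the claimed communication-load reduction comes from.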
9

Pandit, Shraddha, Piyush Kumar Shukla, Akhilesh Tiwari, Prashant Kumar Shukla, Manish Maheshwari, and Rachana Dubey. "Review of video compression techniques based on fractal transform function and swarm intelligence." International Journal of Modern Physics B 34, no. 08 (March 30, 2020): 2050061. http://dx.doi.org/10.1142/s0217979220500617.

Full text
Abstract:
Data processing across multiple domains is an important concept on any platform; it deals with both multimedia and textual information. Whereas textual data processing handles structured or unstructured data and computes quickly without compressing the data, multimedia data processing requires algorithms in which compression is essential. This involves processing videos and their frames and compressing them into compact forms so that both storage and access can be performed quickly. There are different ways of performing compression, such as fractal compression, wavelet transform, compressive sensing, and contractive transformation. One approach works with the high-frequency components of multimedia data. One of the most recent topics is fractal transformation, which exploits block symmetry and achieves high compression ratios. Yet there are limitations, such as the speed and cost of proper encoding and decoding with fractal compression. Swarm optimization and related algorithms make it usable alongside the fractal compression function. In this paper, we review multiple algorithms in the fields of fractal-based video compression and swarm intelligence for optimization problems.
APA, Harvard, Vancouver, ISO, and other styles
10

Lin, Zhuosheng, Simin Yu, Chengqing Li, Jinhu Lü, and Qianxue Wang. "Design and Smartphone-Based Implementation of a Chaotic Video Communication Scheme via WAN Remote Transmission." International Journal of Bifurcation and Chaos 26, no. 09 (August 2016): 1650158. http://dx.doi.org/10.1142/s0218127416501583.

Full text
Abstract:
This paper proposes a chaotic secure video remote communication scheme that can perform on real WAN networks, and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source code. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received at the given IP address and port on an Android smartphone. It should be noted that this is the first time chaotic video encryption schemes have been implemented on such a hardware platform. The experimental results demonstrate that the technical challenges of hardware implementation of secure video communication are successfully solved, reaching a balance among a sufficient security level, real-time processing of massive video data, and utilization of the available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphones and other mobile devices in the future.
APA, Harvard, Vancouver, ISO, and other styles
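As a hedged illustration of embedding a chaotic cipher into a video streaming pipeline, the sketch below XORs payload bytes with a logistic-map keystream. This is the textbook construction, not the scheme the authors embedded in MJPG-Streamer:

```python
def chaotic_keystream(length, x0=0.7, r=3.99):
    """Byte keystream from the logistic map x <- r*x*(1-x); the pair
    (x0, r) acts as the shared key. Illustrative only: raw logistic-map
    ciphers have known weaknesses."""
    x, out = x0, []
    for _ in range(length):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def chaotic_xor(data, x0=0.7, r=3.99):
    """XOR payload bytes (e.g. one MJPEG frame) with the keystream;
    applying the same function twice with the same key restores the
    plaintext, since XOR is an involution."""
    ks = chaotic_keystream(len(data), x0, r)
    return bytes(b ^ k for b, k in zip(data, ks))
```

A real deployment would also need key exchange between sender and receiver and a cryptographically stronger keystream construction.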
More sources

Dissertations / Theses on the topic "Video compression. Real-time data processing"

1

Arshad, Norhashim Mohd. "Real-time data compression for machine vision measurement systems." Thesis, Liverpool John Moores University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285284.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wong, Chi Wah. "Studying real-time rate control in perceptual, modeling and efficient aspects /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202004%20WONGC.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 205-212). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
3

Tsoi, Yiu-lun Kelvin, and 蔡耀倫. "Real-time scheduling techniques with QoS support and their applications in packet video transmission." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Parois, Ronan. "Codeur vidéo scalable haute-fidélité SHVC modulable et parallèle." Thesis, Rennes, INSA, 2018. http://www.theses.fr/2018ISAR0016/document.

Full text
Abstract:
After entering the digital era, video consumption evolved and defined new trends. Video content is now accessible on many platforms (television, computer, tablet, smartphone ...) and through many media, such as mobile networks, satellite networks, terrestrial networks, the Internet, or local storage on Blu-ray discs. In the meantime, the user experience has improved thanks to new video formats such as Ultra High Definition (UHD), High Dynamic Range (HDR) and High Frame Rate (HFR), which respectively increase resolution, dynamic range and frame rate. New consumption trends and new video formats impose new constraints that current and future video encoders must meet. In this context, we propose a video coding solution able to satisfy constraints such as multi-format and multi-destination coding, coding speed, and compression efficiency. The solution relies on the scalable extension of the "High Efficiency Video Coding" (HEVC) standard, defined at the end of 2014 and also called SHVC. This extension enables scalable video coding by producing a single bitstream over several layers built from the same video at different resolutions, frame rates, quality levels, bit depths per pixel, or colour gamuts. SHVC coding improves HEVC coding efficiency through inter-layer prediction, which reuses coding information from the lower layers. The solution proposed in this thesis builds on a professional HEVC encoder, developed by the Ateme company, that exploits several levels of parallelism (inter-frame, intra-frame, inter-block and inter-operation) through a pipelined architecture. Two parallel instances of this encoder are synchronised via an inter-pipeline offset to perform inter-layer prediction. Trade-offs between complexity and coding efficiency are made within this prediction at the level of picture types and prediction tools. In a broadcast configuration, for instance, inter-layer prediction of textures is performed for every other frame. At constant quality, this saves 18.5% of the bitrate for a loss of only 2% in coding speed compared with HEVC coding. The architecture supports all the types of scalability defined in the SHVC extension. Moreover, for spatial scalability, we propose a down-sampling filter, applied to the base layer, that optimises the overall bitrate. We propose quality modes that combine several levels of parallelism with low-level optimisations to achieve real-time coding of UHD formats. The proposed solution was integrated into a real-time video broadcast chain and demonstrated at several trade shows, conferences and ATSC 3.0 meetings.
APA, Harvard, Vancouver, ISO, and other styles
5

Zheng, Lizhi. "A generic parallel processing framework for real-time software video compression." Thesis, Brunel University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wong, Chi-wah Alec, and 王梓樺. "Exploiting wireless link adaptation and region-of-interest processing to improve real-time scalable video transmission." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29804152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Iya, Nuruddeen Mohammed. "A multi-strategy approach for congestion-aware real-time video." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=228569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Aved, Alexander. "Scene Understanding for Real Time Processing of Queries over Big Data Streaming Video." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5597.

Full text
Abstract:
With heightened security concerns across the globe and the increasing need to monitor, preserve and protect infrastructure and public spaces to ensure proper operation, quality assurance and safety, numerous video cameras have been deployed. Accordingly, they also need to be monitored effectively and efficiently. However, relying on human operators to constantly monitor all the video streams is not scalable or cost effective. Humans can become subjective, fatigued, even exhibit bias and it is difficult to maintain high levels of vigilance when capturing, searching and recognizing events that occur infrequently or in isolation. These limitations are addressed in the Live Video Database Management System (LVDBMS), a framework for managing and processing live motion imagery data. It enables rapid development of video surveillance software much like traditional database applications are developed today. Such developed video stream processing applications and ad hoc queries are able to "reuse" advanced image processing techniques that have been developed. This results in lower software development and maintenance costs. Furthermore, the LVDBMS can be intensively tested to ensure consistent quality across all associated video database applications. Its intrinsic privacy framework facilitates a formalized approach to the specification and enforcement of verifiable privacy policies. This is an important step towards enabling a general privacy certification for video surveillance systems by leveraging a standardized privacy specification language. With the potential to impact many important fields ranging from security and assembly line monitoring to wildlife studies and the environment, the broader impact of this work is clear. The privacy framework protects the general public from abusive use of surveillance technology; success in addressing the “trust” issue will enable many new surveillance-related applications. 
Although this research focuses on video surveillance, the proposed framework has the potential to support many video-based analytical applications.
Ph.D.
Doctorate
Computer Science
Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
9

Cedernaes, Erasmus. "Runway detection in LWIR video : Real time image processing and presentation of sensor data." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-300690.

Full text
Abstract:
Runway detection in long wavelength infrared (LWIR) video could potentially increase the number of successful landings by increasing the situational awareness of pilots and verifying a correct approach. A method for detecting runways in LWIR video was therefore proposed and evaluated for robustness, speed and FPGA acceleration. The proposed algorithm improves the detection probability by making assumptions of the runway appearance during approach, as well as by using a modified Hough line transform and a symmetric search of peaks in the accumulator that is returned by the Hough line transform. A video chain was implemented on a Xilinx ZC702 Development card with input and output via HDMI through an expansion card. The video frames were buffered to RAM, and the detection algorithm ran on the CPU, which however did not meet the real-time requirement. Strategies were proposed that would improve the processing speed by either acceleration in hardware or algorithmic changes.
APA, Harvard, Vancouver, ISO, and other styles
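At the core of the method is a Hough line transform over edge pixels. The sketch below is the standard, unmodified transform (the thesis uses a modified variant plus a symmetric search for accumulator peaks, neither of which is reproduced here):

```python
import math

def hough_lines(edges, n_theta=180, n_rho=64):
    """Minimal Hough line transform: every edge pixel (x, y) votes for
    all (rho, theta) pairs with rho = x*cos(theta) + y*sin(theta).
    Peaks in the accumulator correspond to straight lines such as
    runway edges."""
    h, w = len(edges), len(edges[0])
    rho_max = math.hypot(h, w)
    acc = [[0] * n_theta for _ in range(n_rho)]
    for y in range(h):
        for x in range(w):
            if not edges[y][x]:
                continue
            for t in range(n_theta):
                theta = t * math.pi / n_theta
                rho = x * math.cos(theta) + y * math.sin(theta)
                # Shift rho from [-rho_max, rho_max] into the bin range.
                r = int((rho + rho_max) * (n_rho - 1) / (2 * rho_max))
                acc[r][t] += 1
    return acc
```

A long straight feature such as a runway edge shows up as a sharp peak in `acc`; thresholding the peaks and mapping (rho, theta) back to image space yields the detected lines.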
10

Hinds, Jeffrey Alec Stanley. "Real-time video streaming using peer-to-peer for video distribution." Diss., Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-01262009-111433/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Video compression. Real-time data processing"

1

Wilberg, Jörg. Codesign for real-time video applications. Boston: Kluwer Academic Publishers, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Westwater, Raymond. Real-time video compression: Techniques and algorithms. Boston: Kluwer, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bizon, Thomas P. Real-time transmission of digital video using variable-length coding. [Washington, DC]: National Aeronautics and Space Administration, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bizon, Thomas P. Real-time transmission of digital video using variable-lengthbcoding. [Washington, DC: National Aeronautics and Space Administration, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bizon, Thomas P. Real-time transmission of digital video using variable-lengthbcoding. [Washington, DC: National Aeronautics and Space Administration, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bizon, Thomas P. Real-time demonstration hardware for enhanced DPCM video compression algorithm. [Washington, DC]: National Aeronautics and Space Administration, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bizon, Thomas P. Real-time demonstration hardware for enhanced DPCM video compression algorithm. [Washington, DC]: National Aeronautics and Space Administration, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bizon, Thomas P. Real-time demonstration hardware for enhanced DPCM video compression algorithm. [Washington, DC]: National Aeronautics and Space Administration, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kehtarnavaz, Nasser, and Matthias F. Carlsohn. Real-time image and video processing 2012: 19 April 2012, Brussels, Belgium. Edited by SPIE (Society) and B.-PHOT-Brussels Photonics Team. Bellingham, Washington: SPIE, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kehtarnavaz, Nasser, and Matthias F. Carlsohn. Real-time image and video processing 2010: 16 April 2010, Brussels, Belgium. Edited by SPIE (Society), B.-BHOT-Brussels Photonics Team, and Comité belge d'optique. Bellingham, Wash: SPIE, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
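The Bizon (1992) reports listed above concern demonstration hardware for an enhanced DPCM (differential pulse-code modulation) video coder. As an illustration of the underlying idea only, not of NASA's enhanced algorithm, here is a minimal closed-loop DPCM codec for one scanline in Python; the uniform quantizer and its step size are assumed parameters:

```python
def dpcm_encode(samples, step=4):
    """Encode a sequence as quantized differences from the previous
    reconstructed sample (closed-loop DPCM, so errors do not accumulate)."""
    codes, prediction = [], 0
    for s in samples:
        diff = s - prediction
        q = round(diff / step)      # uniform quantizer (illustrative choice)
        codes.append(q)
        prediction += q * step      # track the decoder-side reconstruction
    return codes

def dpcm_decode(codes, step=4):
    """Rebuild samples by accumulating the dequantized differences."""
    out, prediction = [], 0
    for q in codes:
        prediction += q * step
        out.append(prediction)
    return out

line = [100, 102, 105, 110, 120, 119, 118, 200]
decoded = dpcm_decode(dpcm_encode(line))
# reconstruction error is bounded by half the quantizer step (here, 2)
assert all(abs(a - b) <= 2 for a, b in zip(line, decoded))
```

Because the encoder predicts from its own reconstructed samples rather than from the raw input, the quantization error stays bounded by half the quantizer step instead of drifting along the scanline.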

Book chapters on the topic "Video compression. Real-time data processing"

1

Agnihotram, Gopichand, Rajesh Kumar, Pandurang Naik, and Rahul Yadav. "Virtual Conversation with Real-Time Prediction of Body Moments/Gestures on Video Streaming Data." In Machine Learning and Information Processing, 113–26. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1884-3_11.

2

Hua, Liang, Jiayu Wang, and Xiao Hu. "Research on Real-Time Compression and Transmission Method of Motion Video Data Under Internet of Things." In Advances in Intelligent Systems and Computing, 17–24. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-33-4572-0_3.

3

García-Rodríguez, José, Juan Manuel García-Chamizo, Sergio Orts-Escolano, Vicente Morell-Gimenez, José Antonio Serra-Pérez, Anastassia Angelopoulou, Alexandra Psarrou, Miguel Cazorla, and Diego Viejo. "Computer Vision Applications of Self-Organizing Neural Networks." In Robotic Vision, 129–38. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2672-0.ch008.

Abstract:
This chapter aims to address the ability of self-organizing neural network models to manage video and image processing in real time. The Growing Neural Gas network (GNG), with its attributes of growth, flexibility, rapid adaptation, and excellent-quality representation of the input space, is a suitable model for real-time applications. A number of applications are presented, including image compression, hand and medical image contour representation, surveillance systems, hand gesture recognition systems, and 3D data reconstruction.
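The chapter above builds on the Growing Neural Gas model. As a rough, illustrative sketch only (the parameter names and values are assumptions, and standard refinements such as edge rewiring on insertion and removal of isolated nodes are omitted), a compact GNG adaptation loop for 2-D points might look like:

```python
import random

def gng_fit(data, max_nodes=20, steps=2000, eps_b=0.05, eps_n=0.006,
            age_max=50, growth_every=100, alpha=0.5, d=0.995):
    """Simplified Growing Neural Gas for 2-D points (pure Python sketch)."""
    nodes = [list(random.choice(data)) for _ in range(2)]   # node positions
    error = [0.0, 0.0]                                      # accumulated error
    edges = {}                                              # (i, j) -> age
    for t in range(1, steps + 1):
        x = random.choice(data)
        # two nearest nodes by squared Euclidean distance
        order = sorted(range(len(nodes)),
                       key=lambda i: (nodes[i][0] - x[0]) ** 2
                                     + (nodes[i][1] - x[1]) ** 2)
        s1, s2 = order[0], order[1]
        error[s1] += (nodes[s1][0] - x[0]) ** 2 + (nodes[s1][1] - x[1]) ** 2
        # move the winner (and its topological neighbors) toward the sample
        for k in range(2):
            nodes[s1][k] += eps_b * (x[k] - nodes[s1][k])
        for (i, j) in list(edges):
            if s1 in (i, j):
                other = j if i == s1 else i
                for k in range(2):
                    nodes[other][k] += eps_n * (x[k] - nodes[other][k])
                edges[(i, j)] += 1                          # age winner's edges
        edges[tuple(sorted((s1, s2)))] = 0                  # refresh winner edge
        edges = {e: a for e, a in edges.items() if a <= age_max}
        # periodically insert a node near the largest accumulated error
        if t % growth_every == 0 and len(nodes) < max_nodes:
            q = max(range(len(nodes)), key=lambda i: error[i])
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            f = max(nbrs, key=lambda i: error[i]) if nbrs else s2
            nodes.append([(nodes[q][k] + nodes[f][k]) / 2 for k in range(2)])
            error[q] *= alpha
            error[f] *= alpha
            error.append(error[q])
        error = [e * d for e in error]                      # global error decay
    return nodes

random.seed(1)
cloud = [(random.random(), random.random()) for _ in range(300)]
prototypes = gng_fit(cloud, max_nodes=12, steps=600)
assert 2 <= len(prototypes) <= 12
```

The growth step is what distinguishes GNG from a fixed-size self-organizing map: nodes are added where accumulated quantization error is highest, so the network density follows the input distribution.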
4

Zemčík, P., P. Musil, and M. Musil. "Real-Time HDR Video Processing and Compression Using an FPGA." In High Dynamic Range Video, 145–54. Elsevier, 2017. http://dx.doi.org/10.1016/b978-0-12-809477-8.00007-8.

5

Tudoroiu, Nicolae, Mohammed Zaheeruddin, Roxana-Elena Tudoroiu, and Sorin Mihai Radu. "Fault Detection, Diagnosis, and Isolation Strategy in Li-Ion Battery Management Systems of HEVs Using 1-D Wavelet Signal Analysis." In Wavelet Theory [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.94554.

Abstract:
Nowadays, the wavelet transformation and the 1-D wavelet technique provide valuable tools for signal processing, design, and analysis, in a wide range of control systems industrial applications, audio image and video compression, signal denoising, interpolation, image zooming, texture analysis, time-scale features extraction, multimedia, electrocardiogram signals analysis, and financial prediction. Based on this awareness of the vast applicability of 1-D wavelet in signal processing applications as a feature extraction tool, this paper aims to take advantage of its ability to extract different patterns from signal data sets collected from healthy and faulty input-output signals. It is beneficial for developing various techniques, such as coding, signal processing (denoising, filtering, reconstruction), prediction, diagnosis, detection and isolation of defects. The proposed case study intends to extend the applicability of these techniques to detect the failures that occur in the battery management control system, such as sensor failures to measure the current, voltage and temperature inside an HEV rechargeable battery, as an alternative to Kalman filtering estimation techniques. The MATLAB simulation results conducted on a MATLAB R2020a software platform demonstrate the effectiveness of the proposed scheme in terms of detection accuracy, computation time, and robustness against measurement uncertainty.
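As a toy counterpart to the chapter's MATLAB scheme (not its actual implementation), the following sketch computes a Haar-wavelet detail-energy feature and shows it separating a healthy sensor signal from one with an abrupt bias fault; the signals, fault magnitude, and use of raw detail energy as the feature are illustrative assumptions:

```python
import math

def haar_dwt(signal):
    """One level of the 1-D Haar wavelet transform (orthonormal)."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def detail_energy(signal, levels=3):
    """Total energy of detail coefficients over several decomposition levels.
    A simple time-scale feature: it spikes when the signal has an abrupt change."""
    energy, approx = 0.0, list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energy += sum(c * c for c in detail)
    return energy

# healthy current reading vs. the same reading with an abrupt bias fault
healthy = [math.sin(0.1 * t) for t in range(256)]
faulty = [s + (5.0 if t >= 100 else 0.0) for t, s in enumerate(healthy)]
assert detail_energy(faulty) > detail_energy(healthy)
```

The bias cancels inside every sample pair that lies entirely before or after the fault, so the extra detail energy concentrates at the coefficients straddling the fault instant, which is precisely what makes wavelet features useful for localizing sensor failures in time.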
6

Bourguet, Marie-Luce. "An Overview of Multimodal Interaction Techniques and Applications." In Encyclopedia of Human Computer Interaction, 451–56. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-562-7.ch068.

Abstract:
Desktop multimedia (multimedia personal computers) dates from the early 1970s. At that time, the enabling force behind multimedia was the emergence of the new digital technologies in the form of digital text, sound, animation, photography, and, more recently, video. Nowadays, multimedia systems mostly are concerned with the compression and transmission of data over networks, large capacity and miniaturized storage devices, and quality of services; however, what fundamentally characterizes a multimedia application is that it does not understand the data (sound, graphics, video, etc.) that it manipulates. In contrast, intelligent multimedia systems at the crossing of the artificial intelligence and multimedia disciplines gradually have gained the ability to understand, interpret, and generate data with respect to content. Multimodal interfaces are a class of intelligent multimedia systems that make use of multiple and natural means of communication (modalities), such as speech, handwriting, gestures, and gaze, to support human-machine interaction. More specifically, the term modality describes human perception on one of the three following perception channels: visual, auditive, and tactile. Multimodality qualifies interactions that comprise more than one modality on either the input (from the human to the machine) or the output (from the machine to the human) and the use of more than one device on either side (e.g., microphone, camera, display, keyboard, mouse, pen, track ball, data glove). Some of the technologies used for implementing multimodal interaction come from speech processing and computer vision; for example, speech recognition, gaze tracking, recognition of facial expressions and gestures, perception of sounds for localization purposes, lip movement analysis (to improve speech recognition), and integration of speech and gesture information. 
In 1980, the put-that-there system (Bolt, 1980) was developed at the Massachusetts Institute of Technology and was one of the first multimodal systems. In this system, users simultaneously could speak and point at a large-screen graphics display surface in order to manipulate simple shapes. In the 1990s, multimodal interfaces started to depart from the rather simple speech-and-point paradigm to integrate more powerful modalities such as pen gestures and handwriting input (Vo, 1996) or haptic output. Currently, multimodal interfaces have started to understand 3D hand gestures, body postures, and facial expressions (Ko, 2003), thanks to recent progress in computer vision techniques.
7

Bourguet, Marie-Luce. "An Overview of Multimodal Interaction Techniques and Applications." In Human Computer Interaction, 95–101. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-87828-991-9.ch008.

Abstract:
Desktop multimedia (multimedia personal computers) dates from the early 1970s. At that time, the enabling force behind multimedia was the emergence of the new digital technologies in the form of digital text, sound, animation, photography, and, more recently, video. Nowadays, multimedia systems mostly are concerned with the compression and transmission of data over networks, large capacity and miniaturized storage devices, and quality of services; however, what fundamentally characterizes a multimedia application is that it does not understand the data (sound, graphics, video, etc.) that it manipulates. In contrast, intelligent multimedia systems at the crossing of the artificial intelligence and multimedia disciplines gradually have gained the ability to understand, interpret, and generate data with respect to content. Multimodal interfaces are a class of intelligent multimedia systems that make use of multiple and natural means of communication (modalities), such as speech, handwriting, gestures, and gaze, to support human-machine interaction. More specifically, the term modality describes human perception on one of the three following perception channels: visual, auditive, and tactile. Multimodality qualifies interactions that comprise more than one modality on either the input (from the human to the machine) or the output (from the machine to the human) and the use of more than one device on either side (e.g., microphone, camera, display, keyboard, mouse, pen, track ball, data glove). Some of the technologies used for implementing multimodal interaction come from speech processing and computer vision; for example, speech recognition, gaze tracking, recognition of facial expressions and gestures, perception of sounds for localization purposes, lip movement analysis (to improve speech recognition), and integration of speech and gesture information. 
In 1980, the put-that-there system (Bolt, 1980) was developed at the Massachusetts Institute of Technology and was one of the first multimodal systems. In this system, users simultaneously could speak and point at a large-screen graphics display surface in order to manipulate simple shapes. In the 1990s, multimodal interfaces started to depart from the rather simple speech-and-point paradigm to integrate more powerful modalities such as pen gestures and handwriting input (Vo, 1996) or haptic output. Currently, multimodal interfaces have started to understand 3D hand gestures, body postures, and facial expressions (Ko, 2003), thanks to recent progress in computer vision techniques.
8

Das, Amlan Jyoti, Navajit Saikia, and Kandarpa Kumar Sarma. "Object Classification and Tracking in Real Time." In Emerging Technologies in Intelligent Applications for Image and Video Processing, 250–95. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9685-3.ch011.

Abstract:
Algorithms for automatic processing of visual data have been a topic of interest since the last few decades. Object tracking and classification methods are highly demanding in vehicle traffic control systems, surveillance systems for detecting unauthorized movement of vehicle and human, mobile robot applications, animal tracking, etc. There are still many challenging issues while dealing with dynamic background, occlusion, etc. in real time. This chapter presents an overview of various existing techniques for object detection, classification and tracking. As the most important requirements of tracking and classification algorithms are feature extraction and selection, different feature types are also included.
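As a minimal, hedged illustration of the detection step such surveys usually start from (not a method taken from the chapter itself), frame differencing flags pixels whose intensity changes between consecutive frames and reduces them to a bounding box; the threshold value is an assumed parameter:

```python
def detect_motion(prev_frame, frame, threshold=25):
    """Frame differencing: mark pixels whose intensity changed by more
    than a threshold between two grayscale frames (lists of rows)."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_prev, row_cur)]
            for row_prev, row_cur in zip(prev_frame, frame)]

def bounding_box(mask):
    """Bounding box (top, left, bottom, right) of flagged pixels, or None."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    if not pts:
        return None
    rows = [p[0] for p in pts]
    cols = [p[1] for p in pts]
    return min(rows), min(cols), max(rows), max(cols)

# 6x6 frames: a bright 2x2 "object" moves one pixel to the right
prev_f = [[0] * 6 for _ in range(6)]
curr = [[0] * 6 for _ in range(6)]
for r in (2, 3):
    prev_f[r][1] = prev_f[r][2] = 255
    curr[r][2] = curr[r][3] = 255
box = bounding_box(detect_motion(prev_f, curr))
assert box == (2, 1, 3, 3)
```

Real systems replace the static previous frame with an adaptive background model to cope with the dynamic-background and occlusion issues the chapter highlights, but the thresholded-difference core remains the same.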
9

Goswami, Srijan, Urmimala Dey, Payel Roy, Amira S. Ashour, and Nilanjan Dey. "Medical Video Processing." In Computer Vision, 1709–25. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5204-8.ch071.

Abstract:
In today's medical environments, imaging technology is extremely significant in providing information for accurate diagnosis. An increasing amount of graphical information from high-resolution 3D scanners is being used for diagnoses, and improving medical data quality has become one of the major aims of researchers. This has led to the development of various medical modalities supported by cameras that can provide video of the internal human body for surgical purposes and more information for accurate diagnosis. The current chapter studies the concept of video processing and its application in the medical domain. Based on the highlighted literature, it is evident that video processing and real-time frame analysis will have outstanding value in clinical environments.
10

Goswami, Srijan, Urmimala Dey, Payel Roy, Amira Ashour, and Nilanjan Dey. "Medical Video Processing." In Advances in Multimedia and Interactive Technologies, 1–17. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-1025-3.ch001.

Abstract:
In today's medical environments, imaging technology is extremely significant in providing information for accurate diagnosis. An increasing amount of graphical information from high-resolution 3D scanners is being used for diagnoses, and improving medical data quality has become one of the major aims of researchers. This has led to the development of various medical modalities supported by cameras that can provide video of the internal human body for surgical purposes and more information for accurate diagnosis. The current chapter studies the concept of video processing and its application in the medical domain. Based on the highlighted literature, it is evident that video processing and real-time frame analysis will have outstanding value in clinical environments.

Conference papers on the topic "Video compression. Real-time data processing"

1

Duan, Chenhui, Linbo Tang, Chen Wu, Cheng Li, Chen Li, and Baojun Zhao. "Real-time Remote Sensing Video Compression for On-orbit Heterogeneous Platform." In 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP). IEEE, 2019. http://dx.doi.org/10.1109/icsidp47821.2019.9173512.

2

Katsigiannis, Stamos, Dimitris Maroulis, and Georgios Papaioannou. "A GPU based real-time video compression method for video conferencing." In 2013 18th International Conference on Digital Signal Processing (DSP). IEEE, 2013. http://dx.doi.org/10.1109/icdsp.2013.6622719.

3

Fryza, T. "Introduction to implementation of real time video compression method." In 2008 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2008. http://dx.doi.org/10.1109/iwssip.2008.4604406.

4

Fedele, Nicola J., Alfonse A. Acampora, and Richard M. Bunting. "Real-Time Multi-Directional Data Compression Of Full Motion Video." In 1988 Los Angeles Symposium--O-E/LASE '88, edited by Gary W. Hughes, Patrick E. Mantey, and Bernice E. Rogowitz. SPIE, 1988. http://dx.doi.org/10.1117/12.944708.

5

Galan-Hernandez, J. C., V. Alarcon-Aquino, O. Starostenko, and J. M. Ramirez-Cortes. "Wavelet-Based Foveated Compression Algorithm for Real-Time Video Processing." In 2010 IEEE Electronics, Robotics and Automotive Mechanics Conference (CERMA). IEEE, 2010. http://dx.doi.org/10.1109/cerma.2010.52.

6

Al-Hayani, Nazar, Naseer Al-Jawad, and Sabah Jassim. "Simultaneous video compression and encryption for real-time secure transmission." In 2013 8th International Symposium on Image and Signal Processing and Analysis (ISPA). IEEE, 2013. http://dx.doi.org/10.1109/ispa.2013.6703746.

7

Meinich-Bache, Oyvind, Kjersti Engan, Tonje S. Birkenes, and Helge Myklebust. "Robust real-time chest compression rate detection from smartphone video." In 2017 10th International Symposium on Image and Signal Processing and Analysis (ISPA). IEEE, 2017. http://dx.doi.org/10.1109/ispa.2017.8073560.

8

Beckstead, Jeffrey A., Steven C. Aceto, Michelle D. Conerty, and Steven Nordhauser. "High-performance data and video recorder with real-time lossless compression." In Electronic Imaging '97, edited by Sethuraman Panchanathan and Frans Sijstermans. SPIE, 1997. http://dx.doi.org/10.1117/12.263520.

9

Kostrzewski, A. A., S. Ro, W. Wang, and T. P. Jannson. "Ultra-real-time video processing and compression in homeland security applications." In Defense and Security Symposium, edited by Edward M. Carapezza. SPIE, 2007. http://dx.doi.org/10.1117/12.718886.

10

Da, Li, and Zhang Fan. "Real-time data compression bias estimation on netted radar." In 2010 10th International Conference on Signal Processing (ICSP 2010). IEEE, 2010. http://dx.doi.org/10.1109/icosp.2010.5655770.
