To see the other types of publications on this topic, follow the link: Motion blur.

Journal articles on the topic 'Motion blur'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Motion blur.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Gosselin, Frédéric, and Claude Lamontagne. "Motion-Blur Illusions." Perception 26, no. 7 (1997): 847–55. http://dx.doi.org/10.1068/p260847.

Full text
Abstract:
The still-radii illusion, the figure-of-eight illusion, the band-of-heightened-intensity illusion and the dark-blurred-concentric-circles illusion have remained, until now, isolated relatively ill-explained phenomena. A single algorithmic model is proposed which explains these four visual illusions. In fact, this model predicts phenomena produced by motion of any gray-shaded patterns relative to the eyes (termed ‘motion-blur illusions’). Results of a computer simulation of the model are presented. A novel instance of the proposed class of illusions, which can be readily experienced by the reader, is introduced to illustrate the generality of the model.
APA, Harvard, Vancouver, ISO, and other styles
2

Askari Javaran, Taiebeh, and Hamid Hassanpour. "Using a Blur Metric to Estimate Linear Motion Blur Parameters." Computational and Mathematical Methods in Medicine 2021 (October 28, 2021): 1–8. http://dx.doi.org/10.1155/2021/6048137.

Abstract:
Motion blur is a common artifact in image processing, specifically in e-health services, caused by the motion of a camera or scene. In linear motion cases, the blur kernel, i.e., the function that simulates the linear motion blur process, depends on the length and direction of the blur, called the linear motion blur parameters. Estimating these parameters is a vital and sensitive stage in reconstructing a sharp version of a motion-blurred image, i.e., image deblurring. Blur-parameter estimation can also be used in e-health services: since medical images may be blurry, the method can estimate the blur parameters and then take action to enhance the image. In this paper, methods are proposed for estimating the linear motion blur parameters based on features extracted from a given single blurred image. The motion blur direction is estimated using the Radon transform of the spectrum of the blurred image. To estimate the motion blur length, the relation between a blur metric, called NIDCT (Noise-Immune Discrete Cosine Transform-based), and the motion blur length is applied. Experiments performed in this study showed that the NIDCT blur metric and the blur length have a monotonic relation: an increase in blur length leads to an increase in the blurriness value estimated via the NIDCT blur metric. This relation is applied to estimate the motion blur length. The efficiency of the proposed method is demonstrated through quantitative and qualitative experiments.
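The linear motion-blur kernel described in the entry above, parameterized by blur length and direction, can be sketched in NumPy. This is a generic textbook construction, not the paper's code; the function name and sampling density are illustrative:

```python
import numpy as np

def linear_motion_psf(length, angle_deg):
    """Normalized linear motion-blur kernel (PSF) for a blur of `length`
    pixels along direction `angle_deg` (degrees from the x-axis)."""
    size = int(np.ceil(length)) | 1          # odd kernel size covering the blur
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # Accumulate evenly spaced samples along the motion path.
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, 4 * size):
        row = int(round(c - t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        psf[row, col] += 1.0
    return psf / psf.sum()
```

Convolving a sharp image with such a kernel simulates the degradation that the estimation methods above try to invert.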
3

Watson, Andrew B., and Albert J. Ahumada. "14.2: Visible Motion Blur: A Perceptual Metric for Display Motion Blur." SID Symposium Digest of Technical Papers 41, no. 1 (2010): 184. http://dx.doi.org/10.1889/1.3500365.

4

Georgeson, Mark A., and Stephen T. Hammett. "Seeing blur: ‘motion sharpening’ without motion." Proceedings of the Royal Society of London. Series B: Biological Sciences 269, no. 1499 (2002): 1429–34. http://dx.doi.org/10.1098/rspb.2002.2029.

5

Oktay, Tugrul, Harun Celik, and Ilke Turkmen. "Constrained control of helicopter vibration to reduce motion blur." Aircraft Engineering and Aerospace Technology 90, no. 9 (2018): 1326–36. http://dx.doi.org/10.1108/aeat-02-2017-0068.

Abstract:
Purpose: The purpose of this paper is to examine the success of constrained control in reducing the motion blur that results from helicopter vibration.
Design/methodology/approach: Constrained controllers are designed to reduce the motion blur in images taken from a helicopter. Helicopter vibrations under tight and soft constrained controllers are modeled and added to images to show the controllers' performance in reducing blur.
Findings: The blur caused by vibration can be reduced via constrained control of the helicopter.
Research limitations/implications: The motion of the camera is modeled and assumed to be the same as the motion of the helicopter. In the image-exposure model, image noise is neglected, and blur is considered the only distorting effect on the image.
Practical implications: Tighter constrained controllers can be implemented to take higher-quality images from helicopters.
Social implications: Aerial vehicles are now widely used for aerial photography, and images taken from helicopters often suffer from motion blur; reducing it enables users to take higher-quality images.
Originality/value: Helicopter control is performed to reduce motion blur in images for the first time. A control-oriented, physics-based model of the helicopter is used. The helicopter vibration that causes motion blur is modeled as a blur kernel to show its effect on captured images. Tight and soft constrained controllers are designed and compared to demonstrate their performance in reducing motion blur. It is shown that images taken from a helicopter can be protected from motion blur by controlling the helicopter tightly.
6

Oberberger, Max, Matthäus G. Chajdas, and Rüdiger Westermann. "Spatiotemporal Variance-Guided Filtering for Motion Blur." Proceedings of the ACM on Computer Graphics and Interactive Techniques 5, no. 3 (2022): 1–13. http://dx.doi.org/10.1145/3543871.

Abstract:
Adding motion blur to a scene can help to convey the feeling of speed even at low frame rates. Monte Carlo ray tracing can compute accurate motion blur, but requires a large number of samples per pixel to converge. In comparison, rasterization, in combination with a post-processing filter, can generate fast, but not accurate motion blur from a single sample per pixel. We build upon a recent path tracing denoiser and propose its variant to simulate ray-traced motion blur, enabling fast and high-quality motion blur from a single sample per pixel. Our approach creates temporally coherent renderings by estimating the motion direction and variance locally, and using these estimates to guide wavelet filters at different scales. We compare image quality against brute force Monte Carlo methods and current post-processing motion blur. Our approach achieves real-time frame rates, requiring less than 4ms for full-screen motion blur at a resolution of 1920 x 1080 on recent graphics cards.
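The brute-force baseline this entry compares against, averaging many time samples across the shutter interval, can be sketched as follows. The `render(t)` callback is a hypothetical stand-in for a scene renderer:

```python
import numpy as np

def accumulation_motion_blur(render, t_open, t_close, samples=16):
    """Brute-force motion blur: average the scene rendered at evenly
    spaced instants across the shutter interval [t_open, t_close]."""
    times = np.linspace(t_open, t_close, samples)
    return sum(render(t) for t in times) / samples
```

Real-time methods like the one above approximate this average from a single sample per pixel instead of paying for many full renders.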
7

Makkad, Satwinderpal S. "Range from motion blur." Optical Engineering 32, no. 8 (1993): 1915. http://dx.doi.org/10.1117/12.143301.

8

Shi, Lixiang, and Jianping Tan. "Discovery, Quantitative Recurrence, and Inhibition of Motion-Blur Hysteresis Phenomenon in Visual Tracking Displacement Detection." Sensors 23, no. 19 (2023): 8024. http://dx.doi.org/10.3390/s23198024.

Abstract:
Motion blur is common in video tracking and detection, and severe motion blur can lead to failure in tracking and detection. In this work, a motion-blur hysteresis phenomenon (MBHP) was discovered, which has an impact on tracking and detection accuracy as well as image annotation. In order to accurately quantify MBHP, this paper proposes a motion-blur dataset construction method based on a motion-blur operator (MBO) generation method and self-similar object images, and designs APSF, an MBO generation method. The optimized sub-pixel estimation method of the point spread function (SPEPSF) is used to demonstrate the accuracy and robustness of the APSF method, showing the maximum error (ME) of APSF to be smaller than others (reduced by 86%, when motion-blur length > 20, motion-blur angle = 0), and the mean square error (MSE) of APSF to be smaller than others (reduced by 65.67% when motion-blur angle = 0). A fast image matching method based on a fast correlation response coefficient (FAST-PCC) and improved KCF were used with the motion-blur dataset to quantify MBHP. The results show that MBHP exists significantly when the motion blur changes, and that the error caused by MBHP is close to half of the difference in motion-blur length between two consecutive frames. A general flow chart of visual tracking displacement detection with error compensation for MBHP was designed, and three methods for calculating compensation values were proposed: compensation values based on inter-frame displacement estimation error, SPEPSF, and no-reference image quality assessment (NR-IQA) indicators. Implementation experiments showed that this error can be reduced by more than 96%.
9

Li, Haoying, Ziran Zhang, Tingting Jiang, Peng Luo, Huajun Feng, and Zhihai Xu. "Real-World Deep Local Motion Deblurring." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (2023): 1314–22. http://dx.doi.org/10.1609/aaai.v37i1.25215.

Abstract:
Most existing deblurring methods focus on removing global blur caused by camera shake, while they cannot well handle local blur caused by object movements. To fill the vacancy of local deblurring in real scenes, we establish the first real local motion blur dataset (ReLoBlur), which is captured by a synchronized beam-splitting photographing system and corrected by a post-processing pipeline. Based on ReLoBlur, we propose a Local Blur-Aware Gated network (LBAG) and several local blur-aware techniques to bridge the gap between global and local deblurring: 1) a blur detection approach based on background subtraction to localize blurred regions; 2) a gate mechanism to guide our network to focus on blurred regions; and 3) a blur-aware patch cropping strategy to address the data imbalance problem. Extensive experiments prove the reliability of the ReLoBlur dataset, and demonstrate that LBAG achieves better performance than state-of-the-art global deblurring methods and that our proposed local blur-aware techniques are effective.
10

Dongming, Li, Su Zhengbo, Su Wei, and Zhang Lijuan. "Research on Cross-Correlative Blur Length Estimation Algorithm in Motion Blur Image." Journal of Advanced Computational Intelligence and Intelligent Informatics 20, no. 1 (2016): 155–62. http://dx.doi.org/10.20965/jaciii.2016.p0155.

Abstract:
This paper proposes a motion blur length estimation method that is applied to motion-blurred image restoration. The method applies a cross-correlation algorithm to multi-frame motion-degraded images. To find the motion blur parameters, the Radon transform is used to estimate the motion blur angle. We extract the gray values of pixels around the blur center, compute their correlation to obtain the motion blur length, and use the Lucy-Richardson iterative algorithm to restore the degraded image. Experimental results show that this method can accurately estimate the blur parameters, reduce noise, and obtain better restoration results. The method achieves good results on both artificially blurred images and natural images blurred by camera shake. Compared with Wiener filtering, the Lucy-Richardson-based restoration requires less computation time and produces better restored results.
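The Lucy-Richardson iteration used in the entry above for the restoration step can be sketched with FFT-based circular convolution. This is a generic textbook formulation of the algorithm, not the authors' implementation, and it assumes a known kernel and nonnegative image data:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """Richardson-Lucy deconvolution with circular boundary handling.
    `psf` is a small kernel; it is zero-padded to the image size and
    centered so that convolution does not shift the image."""
    pad = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    otf = fft2(pad)
    otf_conj = np.conj(otf)
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(iterations):
        # Re-blur the current estimate and compare with the observation.
        reblurred = np.real(ifft2(fft2(estimate) * otf))
        ratio = blurred / np.maximum(reblurred, eps)
        # Multiplicative update: correlate the ratio with the flipped PSF.
        estimate *= np.real(ifft2(fft2(ratio) * otf_conj))
    return estimate
```

The multiplicative update keeps the estimate nonnegative, which is one reason the method behaves well on star images and other point-like scenes.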
11

Rodriguez, Bryan, Xinxiang Zhang, and Dinesh Rajan. "Probabilistic Modeling of Motion Blur for Time-of-Flight Sensors." Sensors 22, no. 3 (2022): 1182. http://dx.doi.org/10.3390/s22031182.

Abstract:
Synthetically creating motion blur in two-dimensional (2D) images is a well-understood process and has been used in image processing for developing deblurring systems. There are no well-established techniques for synthetically generating arbitrary motion blur within three-dimensional (3D) images, such as depth maps and point clouds since their behavior is not as well understood. As a prerequisite, we have previously developed a method for generating synthetic motion blur in a plane that is parallel to the sensor detector plane. In this work, as a major extension, we generalize our previously developed framework for synthetically generating linear and radial motion blur along planes that are at arbitrary angles with respect to the sensor detector plane. Our framework accurately captures the behavior of the real motion blur that is encountered using a Time-of-Flight (ToF) sensor. This work uses a probabilistic model that predicts the location of invalid pixels that are typically present within depth maps that contain real motion blur. More specifically, the probabilistic model considers different angles of motion paths and the velocity of an object with respect to the image plane of a ToF sensor. Extensive experimental results are shown that demonstrate how our framework can be applied to synthetically create radial, linear, and combined radial-linear motion blur. We quantify the accuracy of the synthetic generation method by comparing the resulting synthetic depth map to the experimentally captured depth map with motion. Our results indicate that our framework achieves an average Boundary F1 (BF) score of 0.7192 for invalid pixels for synthetic radial motion blur, an average BF score of 0.8778 for synthetic linear motion blur, and an average BF score of 0.62 for synthetic combined radial-linear motion blur.
12

Zhao, Xuesen, Xianping Zhang, Wei Zhao, Jin Xu, Hongyu Wang, and Wonjun Song. "62‐3: The Effect of OLED Device Capacitance on Low Gray Levels Motion Blur." SID Symposium Digest of Technical Papers 55, S1 (2024): 539–41. http://dx.doi.org/10.1002/sdtp.17133.

Abstract:
In this study, we demonstrate the phenomenon of motion blur in mobile phones during use and analyze the factors that affect motion blur at low gray levels, including the thin-film transistor (TFT), the organic light-emitting diode (OLED), and the electronic code. Our findings indicate that OLED capacitance has a more significant impact on motion blur than the TFT or the electronic code. Furthermore, we find that OLED capacitance is inversely proportional to the brightness of the first frame when switching from a black screen to a white/red/green/blue screen. By adjusting hole accumulation at the OLED interface and the thickness of the common layer, it is possible to reduce OLED capacitance while improving both the objective value and the subjective visual performance.
13

Lee, Donghyun, Hyeoksu Kwon, and Kyoungsu Oh. "Real-Time Motion Blur Using Multi-Layer Motion Vectors." Applied Sciences 14, no. 11 (2024): 4626. http://dx.doi.org/10.3390/app14114626.

Abstract:
Traditional methods for motion blur, often relying on a single layer, deviate from the correct colors. We propose a multilayer rendering method that closely approximates the motion blur effect. Our approach stores motion vectors for each pixel, divides these vectors into multiple sample points, and performs a backward search from the current pixel. The color at a sample point is sampled if it shares the same motion vector as its origin. This procedure repeats across layers, with only the nearest color values sampled for depth testing. The average of the colors sampled at each point becomes the motion-blurred color. Our experimental results indicate that our method significantly reduces the color deviation commonly found in traditional approaches, achieving structural similarity index measures (SSIM) of 0.8 and 0.92, which represent substantial improvements over the accumulation method.
14

Son, Hyeongseok, Junyong Lee, Jonghyeop Lee, Sunghyun Cho, and Seungyong Lee. "Recurrent Video Deblurring with Blur-Invariant Motion Estimation and Pixel Volumes." ACM Transactions on Graphics 40, no. 5 (2021): 1–18. http://dx.doi.org/10.1145/3453720.

Abstract:
For the success of video deblurring, it is essential to utilize information from neighboring frames. Most state-of-the-art video deblurring methods adopt motion compensation between video frames to aggregate information from multiple frames that can help deblur a target frame. However, the motion compensation methods adopted by previous deblurring methods are not blur-invariant, and consequently, their accuracy is limited for blurry frames with different blur amounts. To alleviate this problem, we propose two novel approaches to deblur videos by effectively aggregating information from multiple video frames. First, we present blur-invariant motion estimation learning to improve motion estimation accuracy between blurry frames. Second, for motion compensation, instead of aligning frames by warping with estimated motions, we use a pixel volume that contains candidate sharp pixels to resolve motion estimation errors. We combine these two processes to propose an effective recurrent video deblurring network that fully exploits deblurred previous frames. Experiments show that our method achieves the state-of-the-art performance both quantitatively and qualitatively compared to recent methods that use deep learning.
15

Luo, Jinhui, and Tao Bo. "Research on Fast Estimation Method of Fuzzy Parameters for Motion Blurred Images." Journal of Physics: Conference Series 2029, no. 1 (2021): 012111. http://dx.doi.org/10.1088/1742-6596/2029/1/012111.

Abstract:
Motion blur is the most common type of image distortion in daily life, and research on motion-blurred image restoration has matured considerably. Classical algorithms such as the Wiener filter and the Kalman filter, and their various improved variants, can achieve good results, but they are time-consuming and have significant limitations in practical image restoration scenarios. To address this, this paper proposes an algorithm for the fast restoration of motion-blurred images: an improved algorithm based on the Radon transform estimates the image's motion blur angle, and an algorithm for estimating the blur length once the blur angle is determined yields the two motion blur parameters of the blurred image. Experimental results show that the proposed fast restoration method is faster, more robust to noise, and more practical.
16

Argaw, Dawit Mureja, Junsik Kim, Francois Rameau, Jae Won Cho, and In So Kweon. "Optical Flow Estimation from a Single Motion-blurred Image." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (2021): 891–900. http://dx.doi.org/10.1609/aaai.v35i2.16172.

Abstract:
In most computer vision applications, motion blur is regarded as an undesirable artifact. However, it has been shown that motion blur in an image may have practical interest in fundamental computer vision problems. In this work, we propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner. We design our network with transformer networks to learn globally and locally varying motions from encoded features of a motion-blurred input, and decode left and right frame features without explicit frame supervision. A flow estimator network is then used to estimate optical flow from the decoded features in a coarse-to-fine manner. We qualitatively and quantitatively evaluate our model through a large set of experiments on synthetic and real motion-blur datasets. We also provide an in-depth analysis of our model in connection with related approaches to highlight the effectiveness and favorability of our approach. Furthermore, we showcase the applicability of the flow estimated by our method to deblurring and moving-object segmentation tasks.
17

Han, Xiao-fang, and Jia-sheng Hu. "Restoration of Motion Blur Image and Defocus Blur Image." ACTA PHOTONICA SINICA 41, no. 1 (2012): 87–93. http://dx.doi.org/10.3788/gzxb20124101.0087.

18

Boracchi, Giacomo, and Alessandro Foi. "Uniform Motion Blur in Poissonian Noise: Blur/Noise Tradeoff." IEEE Transactions on Image Processing 20, no. 2 (2011): 592–98. http://dx.doi.org/10.1109/tip.2010.2062196.

19

Tiwari, Shamik, V. P. Shukla, S. R. Biradar, and A. K. Singh. "Blur parameters identification for simultaneous defocus and motion blur." CSI Transactions on ICT 2, no. 1 (2014): 11–22. http://dx.doi.org/10.1007/s40012-014-0039-3.

20

Abotula, Dileep Kumar, and Bodasingi Nalini. "Estimation and correction of motion blur in digital images." i-manager’s Journal on Image Processing 9, no. 4 (2022): 1. http://dx.doi.org/10.26634/jip.9.4.19285.

Abstract:
Digital images play a very important role in computer-aided systems, and motion blur and other blur in such images affect system accuracy. Estimating and removing this blur is therefore a challenging task. In this paper, a Convolutional Neural Network (CNN) model is used to estimate and remove blur from images. CNN models with different network functions, such as ReLU and Sigmoid, and their combinations are analyzed. Performance is measured with several parameters (blur estimation, PSNR, RMSE, SSIM, and MSE) across different image categories: heavily blurred, lightly blurred, dark blurred, and biomedical images. On these parameters, the CNN with combined ReLU and Sigmoid functions gives better performance than the other network functions, and the CNN models remove and correct blur more successfully than traditional models.
21

Wang, Shiqiang, Shijie Zhang, Mingfeng Ning, and Botian Zhou. "Motion Blurred Star Image Restoration Based on MEMS Gyroscope Aid and Blur Kernel Correction." Sensors 18, no. 8 (2018): 2662. http://dx.doi.org/10.3390/s18082662.

Abstract:
Under dynamic conditions, motion blur is introduced to star images obtained by a star sensor. Motion blur affects the accuracy of the star centroid extraction and the identification of stars, further reducing the performance of the star sensor. In this paper, a star image restoration algorithm is investigated to reduce the effect of motion blur on the star image. The algorithm includes a blur kernel calculation aided by a MEMS gyroscope, blur kernel correction based on the structure of the star strip, and a star image reconstruction method based on scaled gradient projection (SGP). Firstly, the motion trajectory of the star spot is deduced with the aid of a MEMS gyroscope, and the initial blur kernel is calculated from this trajectory. Then, the structure information of the star strip is extracted by Delaunay triangulation. Based on this structure information, a blur kernel correction method is presented that utilizes the preconditioned conjugate gradient interior point algorithm to reduce the influence of gyroscope bias and installation deviation on the blur kernel. Furthermore, a sped-up image reconstruction method based on SGP is presented to save time. Simulated experiment results demonstrate that both the blur kernel determination and star image reconstruction methods are effective. A real star image experiment shows that the accuracy of the star centroid extraction and the number of identified stars increase after restoration with the proposed algorithm.
22

Li, Qi Shen, and Jian Gong Chen. "PSF Estimation and Image Restoration for Motion Blurred Images." Advanced Materials Research 562-564 (August 2012): 2124–27. http://dx.doi.org/10.4028/www.scientific.net/amr.562-564.2124.

Abstract:
Point spread function (PSF) estimation and image restoration algorithms are hot spots in research on motion-blurred image restoration. To improve the efficacy of image restoration, an improved algorithm named the quadric transform (QT) method is proposed in this paper by analyzing the restoration process of motion-blurred images. First, the Fourier transform and homomorphic transform are applied to the original motion-blurred image, and then applied again to the resulting spectrum image. Second, the motion blur direction is estimated by the Radon transform. Third, the motion blur length is found by differential autocorrelation operations. Finally, using the estimated blur direction and length, the motion-blurred image is restored by Wiener filtering. Experimental results show that the proposed QT method estimates motion blur angles more accurately than the primary transform (PT) method, in which the Fourier and homomorphic transforms are applied only once, and yields better restored images in terms of peak signal-to-noise ratio (PSNR).
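The Wiener-filtering step that closes pipelines like the one above amounts to regularized frequency-domain division. A minimal sketch, assuming a known kernel and a constant noise-to-signal ratio `k` (in practice `k` would come from a noise estimate rather than being hand-picked):

```python
import numpy as np
from numpy.fft import fft2, ifft2

def wiener_deconvolve(blurred, psf, k=0.01):
    """Wiener deconvolution with a constant noise-to-signal ratio k.
    `psf` is a small kernel, zero-padded to the image size and centered
    so the restoration is not shifted."""
    pad = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = fft2(pad)
    # W = H* / (|H|^2 + k): approaches 1/H where |H| is large,
    # and is damped toward zero near the sinc nulls of the blur.
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(ifft2(fft2(blurred) * W))
```

The constant `k` trades sharpness against noise amplification, which is why iterative schemes such as Lucy-Richardson are often preferred when the noise level is high.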
23

Ma, Bo, Lianghua Huang, Jianbing Shen, Ling Shao, Ming-Hsuan Yang, and Fatih Porikli. "Visual Tracking Under Motion Blur." IEEE Transactions on Image Processing 25, no. 12 (2016): 5867–76. http://dx.doi.org/10.1109/tip.2016.2615812.

24

Pulujkar, Mosami P., and Shaila D. Apte. "Demosaicking Images with Motion Blur." Journal of Medical Imaging and Health Informatics 2, no. 4 (2012): 373–77. http://dx.doi.org/10.1166/jmihi.2012.1111.

25

Pulujkar, MosamiP, and ShailaD Apte. "Demosaicking Images with Motion Blur." Journal of Medical Imaging and Health Informatics 3, no. 1 (2013): 17–21. http://dx.doi.org/10.1166/jmihi.2013.1128.

26

Wloka, Matthias M., and Robert C. Zeleznik. "Interactive real-time motion blur." Visual Computer 12, no. 6 (1996): 283–95. http://dx.doi.org/10.1007/s003710050065.

27

Agrawal, Amit, Yi Xu, and Ramesh Raskar. "Invertible motion blur in video." ACM Transactions on Graphics 28, no. 3 (2009): 1–8. http://dx.doi.org/10.1145/1531326.1531401.

28

Tani, Jacopo, Sandipan Mishra, and John T. Wen. "Motion Blur-Based State Estimation." IEEE Transactions on Control Systems Technology 24, no. 3 (2016): 1012–19. http://dx.doi.org/10.1109/tcst.2015.2473004.

29

Wloka, Matthias M., and Robert C. Zeleznik. "Interactive real-time motion blur." Visual Computer 12, no. 6 (1996): 283–95. http://dx.doi.org/10.1007/bf01782290.

30

Hong, MinhPhuoc, Jinhyung Choi, and Kyoungsu Oh. "Real-Time Motion Blur using Approximated Motion Trails." Journal of Korea Game Society 17, no. 1 (2017): 17–26. http://dx.doi.org/10.7583/jkgs.2017.17.1.17.

31

Jha, Tantra Nath. "Velocity Detection from a Motion Blur Image Using Radon Transformation." Tribhuvan University Journal 32, no. 2 (2018): 243–48. http://dx.doi.org/10.3126/tuj.v32i2.24721.

Abstract:
Motion blur results when the camera shutter remains open for an extended period of time while relative motion between the camera and the object occurs. An approach to velocity detection from motion-blurred images has been implemented using the Radon transformation: the motion blur parameters are first estimated from the acquired images via the Radon transform and then used to detect the speed of the moving object in the scene. A link is thereby established between the motion blur information of a 2D image, the camera manufacturer's data sheet, and the camera's calibration.
32

Chang, Chia-Feng, Jiunn-Lin Wu, and Ting-Yu Tsai. "A Single Image Deblurring Algorithm for Nonuniform Motion Blur Using Uniform Defocus Map Estimation." Mathematical Problems in Engineering 2017 (2017): 1–14. http://dx.doi.org/10.1155/2017/6089650.

Abstract:
One of the most common artifacts in digital photography is motion blur. When capturing an image under dim light with a handheld camera, the shaking of the photographer's hand causes the image to blur. In response to this problem, image deblurring has become an active topic in computational photography and image processing in recent years. From a signal-processing view, image deblurring reduces to a deconvolution problem if the kernel function of the motion blur is assumed to be shift-invariant. However, the kernel function is not always shift-invariant in real cases; for example, in-plane rotation of the camera or a moving object can blur different parts of an image according to different kernel functions. An image degraded by multiple blur kernels is called a nonuniform blur image. In this paper, we propose a novel single-image deblurring algorithm for nonuniform motion blur caused by a moving object. First, a uniform defocus map method is presented to measure the amounts and directions of motion blur. The blurred regions are then used to estimate point spread functions simultaneously. Finally, a fast deconvolution algorithm restores the nonuniform blur image. We expect the proposed method to achieve satisfactory deblurring of a single nonuniform blur image.
33

Vimal, Vrince. "Mixture of Gaussian Blur Kernel Representation for Blind Image Restoration." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 1 (2019): 589–95. http://dx.doi.org/10.17762/turcomat.v10i1.13553.

Abstract:
The use of blind image restoration, sharpness of edges may frequently be restored using previous information from a picture. De-blurring is the technique of taking out blurring flaws of the steady photographs, including motion or defocus aberration-related blur. the appearance of fast-moving the appearance of fast-moving entities flashing in still images flashing in a still photograph is known as motion blur. When an image is blurred using a Gaussian function, the result is a Gaussian blur. The employment of different sparse priors, either for the implicit photos or the motion blur kernels, contributes to the success of contemporary single-image approaches. De-blurring is the technique of taking out blurring flaws of the steady photographs, including motion or defocus aberration-related blur. The apparent flashing of quickly moving item in a static photograph is known as motion blur. When a picture is blurred utilizing a Gaussian function, the result is a Gaussian blur. The employment of different sparse priors, either for the latent photos or the motion blur kernels, contributes to the success of contemporary single-image approaches. On digital datasets, KSR also discovers effective kernel matrix approximation to hasten blurring and provide effective de-blur performances. The licence plate, which serves as the vehicle's distinctive identifier, is an important indicator of speeding or hit-and-run cars. However, the image of a fast-moving car taken by a security camera is usually blurred and not even humanly discernible.These observed plate pictures are frequently poor resolution and have significant edge information lost, which presents a significant challenge to the current blind deblurring techniques. The blur kernel may be thought of as a linear uniform convolution and parametrically modelled with angle and length for licence plate picture blurring brought on by rapid movement. 
This research proposes a technique for identifying the blur kernel based on sparse representation. Because the restored image attains its sparsest representation when the kernel angle coincides with the true motion angle, we determine the kernel angle by examining the sparse representation coefficients of the restored picture. We then estimate the length of the motion kernel using the Radon transform in the Fourier domain. Even when the licence plate is impossible for a person to read, our system handles large motion blur rather effectively. We evaluate our method on real photographs and compare it with several well-known state-of-the-art blind image deblurring techniques. Experimental results show that the proposed technique is superior in terms of efficiency and robustness.
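The angle-estimation idea in this abstract rests on a general property: a linear blur kernel imposes a sinc-shaped ridge on the image spectrum, oriented perpendicular to the motion direction. The following is a minimal sketch of that property, not the authors' implementation; the function name and the coarse centre-only line scan (a degenerate Radon transform) are illustrative assumptions.

```python
import numpy as np

def motion_blur_angle(img, n_angles=180):
    """Estimate the linear motion-blur direction (degrees in [0, 180)).

    A linear blur multiplies the spectrum by a sinc whose bright central
    ridge runs perpendicular to the motion.  Summing the log-spectrum
    along candidate lines through the spectrum centre and taking the
    strongest line recovers that ridge; the motion angle is 90 degrees
    away from it.
    """
    logspec = np.log1p(np.fft.fftshift(np.abs(np.fft.fft2(img))))
    h, w = logspec.shape
    cy, cx = h // 2, w // 2
    t = np.arange(-(min(cy, cx) - 1), min(cy, cx))
    best_theta, best_score = 0.0, -np.inf
    for k in range(n_angles):
        theta = np.pi * k / n_angles
        ys = np.clip(np.round(cy + t * np.sin(theta)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + t * np.cos(theta)).astype(int), 0, w - 1)
        score = logspec[ys, xs].sum()  # spectral energy along this line
        if score > best_score:
            best_score, best_theta = score, theta
    return (np.degrees(best_theta) + 90.0) % 180.0
```

Blurring a noise image horizontally and feeding it to this function should return an angle near 0 degrees; real estimators refine this with the Radon transform proper and sub-degree interpolation.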
APA, Harvard, Vancouver, ISO, and other styles
34

Dohr, S., M. Muick, B. Schachinger, and M. Gruber. "IMAGE MOTION COMPENSATION – THE VEXCEL APPROACH." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2022 (May 30, 2022): 333–38. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2022-333-2022.

Full text
Abstract:
Abstract. Motion compensation in general, and forward motion compensation in particular, was an important milestone in aerial imaging when it was introduced for film-based camera systems in the late 1990s. It focused on forward motion compensation to enhance image quality when flight speed and image scale produce motion blur even at short exposure times. Another milestone in aerial photogrammetry, the active mount, also contributed to reducing motion blur. When digital aerial cameras replaced film-based camera systems in the first decade of the 21st century, forward motion compensation (FMC) could be implemented as an electronic feature of the CCD sensors, namely the time delayed integration (TDI) feature, which worked well and did not require a mechanical component. Not all cameras could make use of this, but large-format frame cameras such as DMC and UltraCam were able to compensate forward motion blur by exploiting this feature of the electronic sensor component. Once CMOS sensors began replacing the CCD sensor component of digital aerial cameras, the FMC mechanism had to be implemented by another solution. One approach was based on a mechanical device able to move the sensor along the flight path of the aircraft, as had been done for film cameras. At that time, Vexcel Imaging decided to develop a more versatile solution based on software, without any additional mechanical part in the camera body. This solution was designed to compensate not only for uniform forward motion but also for angular motion blur and for different scales within one and the same image. This is especially important for the oblique viewing direction of a camera, where the foreground and background of an oblique scene appear at different scales in one and the same image. The need to compensate for motion blur is evident whenever large-scale aerial imaging is required and best image quality is expected.
Motion blur is caused by the speed of the aircraft over ground, the image scale, and an angular component (the angular motion blur) caused by turbulence, if present. The magnitude of the forward motion blur can be estimated by multiplying aircraft speed, image scale, and exposure time (e.g. a speed over ground of 75 m/s, a scale of 1/10000, and an exposure time of 0.001 seconds leads to 0.0075 mm, or 7.5 µm, in the image). Different image scales result in different magnitudes of motion blur. This is particularly evident for oblique camera systems.
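The rule of thumb quoted in the abstract is direct enough to compute. A tiny sketch of that formula (the function name and the micrometre unit choice are mine):

```python
def forward_motion_blur_um(speed_mps, image_scale, exposure_s):
    """Forward motion blur in the image plane, in micrometres:
    ground speed x image scale x exposure time."""
    return speed_mps * image_scale * exposure_s * 1e6  # metres -> micrometres

# The abstract's example: 75 m/s over ground, scale 1/10000, 1 ms exposure.
print(forward_motion_blur_um(75, 1 / 10000, 0.001))  # 7.5 (micrometres)
```

Comparing the result against the sensor pixel pitch tells you whether the smear is visible at all, which is what drives the decision to enable FMC.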
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Eunsung, Eunjung Chae, Hejin Cheong, and Joonki Paik. "Fast Motion Deblurring Using Sensor-Aided Motion Trajectory Estimation." Scientific World Journal 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/649272.

Full text
Abstract:
This paper presents an image deblurring algorithm to remove motion blur using analysis of motion trajectories and local statistics based on inertial sensors. The proposed method estimates a point-spread-function (PSF) of motion blur by accumulating reweighted projections of the trajectory. A motion blurred image is then adaptively restored using the estimated PSF and spatially varying activity map to reduce both restoration artifacts and noise amplification. Experimental results demonstrate that the proposed method outperforms existing PSF estimation-based motion deconvolution methods in the sense of both objective and subjective performance measures. The proposed algorithm can be employed in various imaging devices because of its efficient implementation without an iterative computational structure.
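The PSF-accumulation step described here can be pictured as rasterizing the sensed camera trajectory onto a kernel grid. Below is a sketch under stated assumptions: the trajectory has already been converted to pixel displacements, and bilinear splatting stands in for the paper's reweighted-projection scheme.

```python
import numpy as np

def psf_from_trajectory(xs, ys, size=15):
    """Rasterize a camera motion trajectory (pixel units, centred on 0)
    into a blur kernel.

    Each trajectory sample deposits its weight into the kernel grid by
    bilinear splatting; normalizing the accumulated grid yields a PSF
    that integrates to 1.
    """
    psf = np.zeros((size, size))
    c = size // 2
    for x, y in zip(xs, ys):
        gx, gy = x + c, y + c
        ix, iy = int(np.floor(gx)), int(np.floor(gy))
        fx, fy = gx - ix, gy - iy
        for dx, wx in ((0, 1 - fx), (1, fx)):
            for dy, wy in ((0, 1 - fy), (1, fy)):
                px, py = ix + dx, iy + dy
                if 0 <= px < size and 0 <= py < size:
                    psf[py, px] += wx * wy
    s = psf.sum()
    return psf / s if s > 0 else psf
```

A straight horizontal trajectory produces a box-like kernel confined to the centre row, which is the classic linear motion-blur PSF.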
APA, Harvard, Vancouver, ISO, and other styles
36

Arslan, Ahmet, Gokhan Koray Gultekin, and Afsar Saranli. "IMU-aided adaptive mesh-grid based video motion deblurring." PeerJ Computer Science 10 (November 25, 2024): e2540. http://dx.doi.org/10.7717/peerj-cs.2540.

Full text
Abstract:
Motion blur is a problem that degrades the visual quality of images for human perception and also challenges computer vision tasks. While existing studies mostly focus on deblurring algorithms that remove uniform blur, owing to their computational efficiency, such approaches fail when faced with non-uniform blur. In this study, we propose a novel algorithm for motion deblurring that utilizes an adaptive mesh-grid approach to manage non-uniform motion blur with a focus on reducing the computational cost. The proposed method divides the image into a mesh-grid and estimates the blur point spread function (PSF) using an inertial sensor. For each video frame, the size of the grid cells is determined adaptively according to the in-frame spatial variance of blur magnitude, a proposed metric for the blur non-uniformity of the frame. The adaptive mesh size takes smaller values for higher variances, increasing the spatial accuracy of the PSF estimation. Two versions of the adaptive mesh-size algorithm are studied, optimized for either best quality or a balance of performance and computation cost. A trade-off parameter is also defined for changing the mesh size according to application requirements. The experiments, using real-life motion data combined with simulated motion blur, demonstrate that the proposed adaptive mesh-size algorithm achieves on average a 5% increase in PSNR quality together with a 19% decrease in computation time compared to the constant mesh-size method.
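The adaptive rule described in this abstract can be caricatured as mapping the in-frame variance of blur magnitude to a grid-cell size, with a trade-off parameter controlling the sensitivity. This is a minimal sketch; the squashing function, the size bounds, and all names are my assumptions, not the authors' formula.

```python
import numpy as np

def adaptive_mesh_size(blur_mags, s_min=16, s_max=128, tau=1.0):
    """Pick a mesh-grid cell size from the spatial variance of blur magnitude.

    High variance (strongly non-uniform blur) -> small cells, for a more
    local PSF estimate; low variance -> large cells, to save computation.
    tau trades quality against cost: a larger tau biases toward large cells.
    """
    v = float(np.var(blur_mags))
    alpha = v / (v + tau)  # squash variance into [0, 1)
    return int(round(s_max - alpha * (s_max - s_min)))
```

With perfectly uniform blur the variance is zero and the function returns the largest (cheapest) cell size; strongly non-uniform blur drives it toward the smallest.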
APA, Harvard, Vancouver, ISO, and other styles
37

Kashani, Hany, Graham Wright, Ali Ursani, Garry Liu, Masoud Hashemi, and Narinder Paul. "Restricting motion effects in CT coronary angiography." British Journal of Radiology 92, no. 1103 (2019): 20190384. http://dx.doi.org/10.1259/bjr.20190384.

Full text
Abstract:
Objective: Evaluation of coronary CT image blur using a multi-segment reconstruction algorithm. Methods: Cardiac motion was simulated in a Catphan. CT coronary angiography was performed using a 320 × 0.5 mm detector array and 275 ms gantry rotation. 1-, 2- and 3-segment reconstruction algorithms, three heart rates (60, 80 and 100 bpm), two peak displacements (4, 8 mm) and three cardiac phases (55, 35, 75%) were used. The Wilcoxon test compared image blur from the different reconstruction algorithms. Results: Image blur for 1, 2 and 3 segments was: at 60 bpm, 75% R–R interval and 8 mm peak displacement, 0.714, 0.588, 0.571 mm (1.18, 0.6, 0.4 mm displacement); at 80 bpm, 35% R–R interval and 8 mm peak displacement, 0.869, 0.606, 0.606 mm (1.57, 0.79, 0.52 mm displacement); at 100 bpm, 35% R–R interval and 4 mm peak displacement, 0.645, 0.588, 0.571 mm (0.98, 0.49, 0.33 mm displacement). The median image blur overall for 1 and 2 segments was 0.714 mm and 0.588 mm, respectively (p < 0.0001). Conclusion: Two-segment reconstruction significantly reduces image blur. Advances in knowledge: Multi-segment reconstruction algorithms for CT coronary angiography are a useful method to reduce image blur, improve visualization of the coronary artery wall and help in the early detection of plaque.
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Shanshan, Qingbin Huang, and Minghui Wang. "Multi-Frame Blind Super-Resolution Based on Joint Motion Estimation and Blur Kernel Estimation." Applied Sciences 12, no. 20 (2022): 10606. http://dx.doi.org/10.3390/app122010606.

Full text
Abstract:
Multi-frame super-resolution makes up for the deficiency of sensor hardware and significantly improves image resolution by using the information of inter-frame and intra-frame images. Inaccurate blur kernel estimation will enlarge the distortion of the estimated high-resolution image. Therefore, multi-frame blind super resolution with unknown blur kernel is more challenging. For the purpose of reducing the impact of inaccurate motion estimation and blur kernel estimation on the super-resolved image, we propose a novel method combining motion estimation, blur kernel estimation and super resolution. The confidence weight of low-resolution images and the parameter value of the motion model obtained in image reconstruction are added to the modified motion estimation and blur kernel estimation. At the same time, Jacobian matrix, which can better describe the motion change, is introduced to further correct the error of motion estimation. Based on the results acquired from the experiments on synthetic data and real data, the superiority of the proposed method over others is obvious. The reconstructed high-resolution image retains the details of the image effectively, and the artifacts are greatly reduced.
APA, Harvard, Vancouver, ISO, and other styles
39

Lin, Songnan, Jiawei Zhang, Jinshan Pan, et al. "Learning to Deblur Face Images via Sketch Synthesis." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11523–30. http://dx.doi.org/10.1609/aaai.v34i07.6818.

Full text
Abstract:
The success of existing face deblurring methods based on deep neural networks is mainly due to their large model capacity. Few algorithms have been specially designed according to the domain knowledge of face images and the physical properties of the deblurring process. In this paper, we propose an effective face deblurring algorithm based on deep convolutional neural networks (CNNs). Motivated by the conventional deblurring process, which usually involves motion blur estimation followed by latent clear image restoration, the proposed algorithm first estimates motion blur with a deep CNN and then restores the latent clear image using the estimated motion blur. However, estimating motion blur from blurry face images is difficult, as the textures of blurry face images are scarce. Since most face images share common global structures that can be modeled well by sketch information, we propose to learn face sketches with a deep CNN so that the sketches can aid motion blur estimation. With the estimated motion blur, we then develop an effective latent image restoration algorithm based on a deep CNN. Although it involves several components, the proposed algorithm is trained in an end-to-end fashion. We analyze the effectiveness of each component on face image deblurring and show that the proposed algorithm is able to deblur face images with favorable performance against state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
40

Oktay, Tugrul, Harun Celik, and Ilke Turkmen. "Maximizing autonomous performance of fixed-wing unmanned aerial vehicle to reduce motion blur in taken images." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 232, no. 7 (2018): 857–68. http://dx.doi.org/10.1177/0959651818765027.

Full text
Abstract:
In this study, reducing motion blur in images taken by our unmanned aerial vehicle is investigated. Since shakes of the unmanned aerial vehicle cause motion blur in the captured images, the autonomous performance of the vehicle is maximized to prevent such shakes. To maximize autonomous performance (i.e. to reduce motion blur), camera-mounted unmanned aerial vehicle dynamics are first obtained. Then, the optimum location of the camera is estimated by considering the vehicle dynamics and autopilot parameters. The unmanned aerial vehicle improved through optimum camera location, dynamics and controller parameters is called the improved autonomous controlled unmanned aerial vehicle, while the vehicle with the camera fixed at the point closest to the center of gravity is called the standard autonomous controlled unmanned aerial vehicle. Both vehicles performed real-time flights tracking approximately the same trajectories. To compare their performance in reducing motion blur, a motion blur kernel model derived from the recorded roll, pitch and yaw angles of the vehicle is developed. Finally, the captured images are simulated to examine the effect of vehicle shakes. In comparison with the standard autonomous controlled flight, the improved autonomous controlled unmanned aerial vehicle demonstrates important improvements in reducing motion blur.
APA, Harvard, Vancouver, ISO, and other styles
41

Kang, Ruidan, Jiajin Li, Xiaojun Teng, Boyan Lv, and Cangzhi Wu. "P‐45: An Evaluation Method of Moving Picture Response Time for Organic Light‐Emitting Diode Motion Blur." SID Symposium Digest of Technical Papers 54, no. 1 (2023): 1682–84. http://dx.doi.org/10.1002/sdtp.16922.

Full text
Abstract:
The moving-image quality of organic light-emitting diode (OLED) displays is degraded by motion blur due to their hold-type driving. Moving picture response time (MPRT) is a representative index for evaluating moving-image performance. A method is proposed to evaluate the motion blur of OLED displays.
APA, Harvard, Vancouver, ISO, and other styles
42

Hayashi, Toshiyuki, and Takashi Tsubouchi. "Estimation and Sharpening of Blur in Degraded Images Captured by a Camera on a Moving Object." Sensors 22, no. 4 (2022): 1635. http://dx.doi.org/10.3390/s22041635.

Full text
Abstract:
In this research, we propose an image sharpening method that makes it easier to identify concrete cracks in blurred images captured by a moving camera. This study is expected to help realize social infrastructure maintenance using a wide range of robotic technologies, and to address the future shortage of labor and engineers. In this paper, a method to estimate the motion blur parameters of the Point Spread Function (PSF) is mainly discussed, where we assume that there are two main degradation factors caused by the camera: out-of-focus blur and motion blur. A major contribution of this paper is that the parameters can be properly estimated from a sub-image of the object under inspection if the sub-image contains uniform speckled texture. Here, the cepstrum of the sub-image is fully utilized. A filter combining convolution with the PSF (motion blur) and the PSF (out-of-focus blur) can then be used for deconvolution of the blurred image, sharpening it to significant effect. The PSF (out-of-focus blur) is a constant function unique to each camera and lens, and can be measured before or after shooting. The PSF (motion blur), on the other hand, needs to be estimated on a case-by-case basis, since the amount and direction of camera movement vary with the time of shooting. Previous research papers have sometimes encountered difficulties in estimating the parameters of motion blur because of their emphasis on generality. In this paper, the main object is made of concrete, whose surface carries speckled textures. We hypothesized that the candidate motion blur parameters can be narrowed down using these speckled patterns. To verify this hypothesis, we conducted experiments to confirm and examine the following two points, using a general-purpose camera used in actual bridge inspections: 1. The influence on the cepstrum when the isolated point-like texture unique to concrete structures is used as a feature point. 2. A selection method for multiple images to narrow down the candidate minima of the cepstrum. It is novel that the parameters of motion blur can be well estimated by using the unique speckled pattern on the surface of the object.
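The cepstral mechanism the authors exploit is standard: the sinc zeros of a linear blur's transfer function leave a pronounced negative cepstral peak at the blur length. A sketch of that mechanism, assuming purely horizontal blur for simplicity (function names are mine; the paper's estimator is more elaborate):

```python
import numpy as np

def cepstrum(img):
    """Power cepstrum: inverse FFT of the log magnitude spectrum.

    For an image degraded by linear motion blur, the sinc zeros of the
    blur transfer function produce negative cepstral peaks at plus or
    minus the blur length along the motion direction.
    """
    spec = np.abs(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.log(spec + 1e-8)))

def blur_length_horizontal(img, max_len=40):
    """Assuming horizontal blur: the offset of the most negative
    cepstral value on the first row, searched over plausible lengths."""
    c = cepstrum(img)
    row = c[0, 1:max_len]  # skip the zero-quefrency spike at index 0
    return int(np.argmin(row)) + 1
```

Blurring a noise image with a horizontal box kernel of a known length and running this function should recover that length; with real imagery, averaging the cepstra of several sub-images (as the paper's second point proposes) stabilizes the minima.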
APA, Harvard, Vancouver, ISO, and other styles
43

Kwon, Hyeok-su, Donghyun Lee, and Kyoungsu Oh. "Real-Time Motion Blur using Multi–layer Motion vector." Journal of Korea Game Society 23, no. 4 (2023): 93–101. http://dx.doi.org/10.7583/jkgs.2023.23.4.93.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Zhuang, Hong. "Enhanced DeblurGAN: An advanced combinatorial model for motion blur removal in low-light photography." Applied and Computational Engineering 51, no. 1 (2024): 20–25. http://dx.doi.org/10.54254/2755-2721/51/20241152.

Full text
Abstract:
This article addresses the challenge of eliminating low-light motion blur, a problem that lacks effective solutions despite being crucial in various application scenarios. For instance, it can help in identifying moving individuals or license plates during nocturnal surveillance, filming running videos after dark, and managing animals in rural areas at night. These are all commonplace and significant scenarios, yet few approaches handle such specific cases effectively at the same time. This paper utilizes a fusion model to increase the brightness of an image while preserving photographic detail; motion blur is subsequently eliminated from the brightness-enhanced image, so that image details are enhanced and motion blur is removed. A comparison of the proposed model with the commonly used Deblur model shows that the new model effectively enhances brightness in low-light motion-blurred images while preserving image details and reducing much of the blur. This implies that the model is more versatile, as it can be applied not only to images but also to low-light videos.
APA, Harvard, Vancouver, ISO, and other styles
45

Tomionko, Joseph, Moussa Magara Traoré, and Drissa Traoré. "Blur and Motion Blur Influence on Recognition Performance of Color Face." WSEAS TRANSACTIONS ON INFORMATION SCIENCE AND APPLICATIONS 19 (November 17, 2022): 272–76. http://dx.doi.org/10.37394/23209.2022.19.28.

Full text
Abstract:
Face recognition is one of the most prominent biometric techniques and involves the processing of images. It is widely used in many applications. The performance of such systems depends directly on face image quality. Since blur and motion blur are common imaging problems, this paper explores the influence of such disturbances on color face recognition performance. The research described in this paper compares the performance of a face recognition algorithm based on Haar features and Local Binary Patterns Histograms when it uses color face images of good quality, images with added Gaussian blur and motion blur, as well as enhanced images.
APA, Harvard, Vancouver, ISO, and other styles
46

Tabellion, Eric, Nikhil Karnad, Noa Glaser, Ben Weiss, David E. Jacobs, and Yael Pritch. "Computational Long Exposure Mobile Photography." ACM Transactions on Graphics 42, no. 4 (2023): 1–15. http://dx.doi.org/10.1145/3592124.

Full text
Abstract:
Long exposure photography produces stunning imagery, representing moving elements in a scene with motion-blur. It is generally employed in two modalities, producing either a foreground or a background blur effect. Foreground blur images are traditionally captured on a tripod-mounted camera and portray blurred moving foreground elements, such as silky water or light trails, over a perfectly sharp background landscape. Background blur images, also called panning photography, are captured while the camera is tracking a moving subject, to produce an image of a sharp subject over a background blurred by relative motion. Both techniques are notoriously challenging and require additional equipment and advanced skills. In this paper, we describe a computational burst photography system that operates in a hand-held smartphone camera app, and achieves these effects fully automatically, at the tap of the shutter button. Our approach first detects and segments the salient subject. We track the scene motion over multiple frames and align the images in order to preserve desired sharpness and to produce aesthetically pleasing motion streaks. We capture an under-exposed burst and select the subset of input frames that will produce blur trails of controlled length, regardless of scene or camera motion velocity. We predict inter-frame motion and synthesize motion-blur to fill the temporal gaps between the input frames. Finally, we composite the blurred image with the sharp regular exposure to protect the sharpness of faces or areas of the scene that are barely moving, and produce a final high resolution and high dynamic range (HDR) photograph. Our system democratizes a capability previously reserved to professionals, and makes this creative style accessible to most casual photographers.
APA, Harvard, Vancouver, ISO, and other styles
47

Nagiub, Mena, Thorsten Beuth, Ganesh Sistu, Heinrich Gotzig, and Ciarán Eising. "Depth Prediction Improvement for Near-Field iToF Lidar in Low-Speed Motion State." Sensors 24, no. 24 (2024): 8020. https://doi.org/10.3390/s24248020.

Full text
Abstract:
Current deep learning-based phase unwrapping techniques for iToF Lidar sensors focus mainly on static indoor scenarios, ignoring motion blur in dynamic outdoor scenarios. Our paper proposes a two-stage semi-supervised method to unwrap ambiguous depth maps affected by motion blur in dynamic outdoor scenes. The method trains on static datasets to learn unwrapped depth map prediction and then adapts to dynamic datasets using continuous learning methods. Additionally, blind deconvolution is introduced to mitigate the blur. The combined use of these methods produces high-quality depth maps with reduced blur noise.
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, ChangMo, Kyongho Lim, and Tae-Yong Park. "P‐39: Simulation of Perceived Motion Blur on 480Hz OLED Monitor." SID Symposium Digest of Technical Papers 55, no. 1 (2024): 1519–22. http://dx.doi.org/10.1002/sdtp.17843.

Full text
Abstract:
Refresh rate is an important specification for gaming OLED monitors. With the development of the gaming industry and of graphics processing units (GPUs), the demand for gaming monitors that support high refresh rates is increasing. In this paper, simulation methods for perceived motion blur are proposed to predict the degree of blur according to the refresh rate. Experimental results indicate that the proposed simulation methods are quite effective in predicting the degree of motion blur. In addition, this paper presents the predicted degree of blur as the refresh rate increases up to 480 Hz.
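The dependence of perceived blur on refresh rate follows the usual first-order hold-type model: the tracking eye smears each held frame across the distance the pattern travels while the frame is displayed. A back-of-envelope sketch of that model (my assumption of the standard formula, not the paper's simulation):

```python
def perceived_blur_px(scroll_speed_px_s, refresh_hz, duty=1.0):
    """First-order hold-type blur: the eye-tracked smear equals the
    distance the pattern travels during one frame, scaled by the duty
    cycle (fraction of the frame period the pixel actually emits)."""
    return scroll_speed_px_s / refresh_hz * duty

# At 960 px/s, going from 120 Hz to 480 Hz shrinks the smear 8 px -> 2 px.
print(perceived_blur_px(960, 120), perceived_blur_px(960, 480))  # 8.0 2.0
```

The same formula explains why strobed backlights (duty < 1) reduce blur without raising the refresh rate.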
APA, Harvard, Vancouver, ISO, and other styles
49

Rønnow, Mads J. L., Ulf Assarsson, and Marco Fratarcangeli. "Fast analytical motion blur with transparency." Computers & Graphics 95 (April 2021): 36–46. http://dx.doi.org/10.1016/j.cag.2021.01.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Huei-Yung. "Depth from motion and defocus blur." Optical Engineering 45, no. 12 (2006): 127201. http://dx.doi.org/10.1117/1.2403851.

Full text
APA, Harvard, Vancouver, ISO, and other styles