Academic literature on the topic 'Motion blur'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Motion blur.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Motion blur"

1

Gosselin, Frédéric, and Claude Lamontagne. "Motion-Blur Illusions." Perception 26, no. 7 (1997): 847–55. http://dx.doi.org/10.1068/p260847.

Full text
Abstract:
The still-radii illusion, the figure-of-eight illusion, the band-of-heightened-intensity illusion and the dark-blurred-concentric-circles illusion have remained, until now, isolated relatively ill-explained phenomena. A single algorithmic model is proposed which explains these four visual illusions. In fact, this model predicts phenomena produced by motion of any gray-shaded patterns relative to the eyes (termed ‘motion-blur illusions’). Results of a computer simulation of the model are presented. A novel instance of the proposed class of illusions, which can be readily experienced by the reader, is introduced to illustrate the generality of the model.
APA, Harvard, Vancouver, ISO, and other styles
2

Askari Javaran, Taiebeh, and Hamid Hassanpour. "Using a Blur Metric to Estimate Linear Motion Blur Parameters." Computational and Mathematical Methods in Medicine 2021 (October 28, 2021): 1–8. http://dx.doi.org/10.1155/2021/6048137.

Full text
Abstract:
Motion blur is a common artifact in image processing, specifically in e-health services, which is caused by the motion of a camera or scene. In linear motion cases, the blur kernel, i.e., the function that simulates the linear motion blur process, depends on the length and direction of blur, called linear motion blur parameters. The estimation of blur parameters is a vital and sensitive stage in the process of reconstructing a sharp version of a motion blurred image, i.e., image deblurring. The estimation of blur parameters can also be used in e-health services. Since medical images may be blurry, this method can be used to estimate the blur parameters and then take action to enhance the image. In this paper, some methods are proposed for estimating the linear motion blur parameters based on the extraction of features from a given single blurred image. The motion blur direction is estimated using the Radon transform of the spectrum of the blurred image. To estimate the motion blur length, the relation between a blur metric, called NIDCT (Noise-Immune Discrete Cosine Transform-based), and the motion blur length is applied. Experiments performed in this study showed that the NIDCT blur metric and the blur length have a monotonic relation: an increase in blur length leads to an increase in the blurriness value estimated via the NIDCT blur metric. This relation is applied to estimate the motion blur length. The efficiency of the proposed method is demonstrated through quantitative and qualitative experiments.
APA, Harvard, Vancouver, ISO, and other styles
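The direction/length decomposition described in the abstract above lends itself to a compact illustration. The sketch below is not the paper's method: it assumes the blur direction is already known (horizontal) and recovers the length from the first zero of the averaged row spectrum, a classical alternative to the NIDCT metric. All function names are illustrative.

```python
import numpy as np

def blur_rows(image, length):
    """Simulate horizontal linear motion blur by circular convolution
    of each row with a length-`length` uniform kernel (via the FFT)."""
    n = image.shape[1]
    kernel = np.zeros(n)
    kernel[:length] = 1.0 / length
    return np.real(np.fft.ifft(np.fft.fft(image, axis=1) * np.fft.fft(kernel), axis=1))

def estimate_blur_length(blurred):
    """Estimate the blur length from the first zero of the spectrum.

    A length-L uniform blur multiplies each row spectrum by a Dirichlet
    kernel whose first zero sits at frequency index n / L, so L can be
    read off as n / (index of the first spectral zero).
    """
    n = blurred.shape[1]
    spectrum = np.abs(np.fft.fft(blurred, axis=1)).mean(axis=0)
    half = spectrum[1:n // 2]                     # skip DC, keep positive freqs
    zeros = np.nonzero(half < 1e-6 * half.max())[0]
    if zeros.size == 0:
        return float("nan")                       # no clear blur signature
    return n / (zeros[0] + 1)
```

On noise-free synthetic blur the spectral zeros are exact; real images need a noise-robust detector of the spectral minima, which is precisely the gap the paper's NIDCT metric addresses.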
3

Watson, Andrew B., and Albert J. Ahumada. "14.2: Visible Motion Blur: A Perceptual Metric for Display Motion Blur." SID Symposium Digest of Technical Papers 41, no. 1 (2010): 184. http://dx.doi.org/10.1889/1.3500365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Georgeson, Mark A., and Stephen T. Hammett. "Seeing blur: ‘motion sharpening’ without motion." Proceedings of the Royal Society of London. Series B: Biological Sciences 269, no. 1499 (2002): 1429–34. http://dx.doi.org/10.1098/rspb.2002.2029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Oktay, Tugrul, Harun Celik, and Ilke Turkmen. "Constrained control of helicopter vibration to reduce motion blur." Aircraft Engineering and Aerospace Technology 90, no. 9 (2018): 1326–36. http://dx.doi.org/10.1108/aeat-02-2017-0068.

Full text
Abstract:
Purpose: The purpose of this paper is to examine the success of constrained control in reducing the motion blur that results from helicopter vibration. Design/methodology/approach: Constrained controllers are designed to reduce the motion blur in images taken from a helicopter. Helicopter vibrations under tight and soft constrained controllers are modeled and added to images to show the performance of the controllers in reducing blur. Findings: The blur caused by vibration can be reduced via constrained control of the helicopter. Research limitations/implications: The motion of the camera is modeled and assumed to be the same as the motion of the helicopter. In the image-exposure model, image noise is neglected, and blur is considered the only distorting effect on the image. Practical implications: Tighter constrained controllers can be implemented to take higher-quality images from helicopters. Social implications: Aerial vehicles are now widely used for aerial photography, and images taken from helicopters often suffer from motion blur; reducing it enables users to take higher-quality images. Originality/value: Helicopter control is performed to reduce motion blur in images for the first time. A control-oriented, physics-based model of the helicopter is used. The helicopter vibration that causes motion blur is modeled as a blur kernel to show its effect on the captured images. Tight and soft constrained controllers are designed and compared to demonstrate their performance in reducing motion blur. It is shown that images taken from a helicopter can be protected from motion blur by controlling the helicopter tightly.
APA, Harvard, Vancouver, ISO, and other styles
6

Oberberger, Max, Matthäus G. Chajdas, and Rüdiger Westermann. "Spatiotemporal Variance-Guided Filtering for Motion Blur." Proceedings of the ACM on Computer Graphics and Interactive Techniques 5, no. 3 (2022): 1–13. http://dx.doi.org/10.1145/3543871.

Full text
Abstract:
Adding motion blur to a scene can help to convey the feeling of speed even at low frame rates. Monte Carlo ray tracing can compute accurate motion blur, but requires a large number of samples per pixel to converge. In comparison, rasterization, in combination with a post-processing filter, can generate fast, but not accurate motion blur from a single sample per pixel. We build upon a recent path tracing denoiser and propose its variant to simulate ray-traced motion blur, enabling fast and high-quality motion blur from a single sample per pixel. Our approach creates temporally coherent renderings by estimating the motion direction and variance locally, and using these estimates to guide wavelet filters at different scales. We compare image quality against brute force Monte Carlo methods and current post-processing motion blur. Our approach achieves real-time frame rates, requiring less than 4ms for full-screen motion blur at a resolution of 1920 x 1080 on recent graphics cards.
APA, Harvard, Vancouver, ISO, and other styles
7

Makkad, Satwinderpal S. "Range from motion blur." Optical Engineering 32, no. 8 (1993): 1915. http://dx.doi.org/10.1117/12.143301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shi, Lixiang, and Jianping Tan. "Discovery, Quantitative Recurrence, and Inhibition of Motion-Blur Hysteresis Phenomenon in Visual Tracking Displacement Detection." Sensors 23, no. 19 (2023): 8024. http://dx.doi.org/10.3390/s23198024.

Full text
Abstract:
Motion blur is common in video tracking and detection, and severe motion blur can lead to failure in tracking and detection. In this work, a motion-blur hysteresis phenomenon (MBHP) was discovered, which has an impact on tracking and detection accuracy as well as image annotation. In order to accurately quantify MBHP, this paper proposes a motion-blur dataset construction method based on a motion-blur operator (MBO) generation method and self-similar object images, and designs APSF, an MBO generation method. The optimized sub-pixel estimation method of the point spread function (SPEPSF) is used to demonstrate the accuracy and robustness of the APSF method, showing the maximum error (ME) of APSF to be smaller than that of other methods (reduced by 86% when motion-blur length > 20 and motion-blur angle = 0) and the mean square error (MSE) of APSF to be smaller as well (reduced by 65.67% when motion-blur angle = 0). A fast image matching method based on a fast correlation response coefficient (FAST-PCC) and improved KCF were used with the motion-blur dataset to quantify MBHP. The results show that MBHP appears significantly when the motion blur changes, and the error caused by MBHP is close to half the difference in motion-blur length between two consecutive frames. A general flow chart of visual tracking displacement detection with error compensation for MBHP was designed, and three methods for calculating compensation values were proposed: compensation values based on inter-frame displacement estimation error, SPEPSF, and no-reference image quality assessment (NR-IQA) indicators. Implementation experiments showed that this error can be reduced by more than 96%.
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Haoying, Ziran Zhang, Tingting Jiang, Peng Luo, Huajun Feng, and Zhihai Xu. "Real-World Deep Local Motion Deblurring." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (2023): 1314–22. http://dx.doi.org/10.1609/aaai.v37i1.25215.

Full text
Abstract:
Most existing deblurring methods focus on removing global blur caused by camera shake, while they cannot well handle local blur caused by object movements. To fill the gap of local deblurring in real scenes, we establish the first real local motion blur dataset (ReLoBlur), which is captured by a synchronized beam-splitting photographing system and corrected by a post-processing pipeline. Based on ReLoBlur, we propose a Local Blur-Aware Gated network (LBAG) and several local blur-aware techniques to bridge the gap between global and local deblurring: 1) a blur detection approach based on background subtraction to localize blurred regions; 2) a gate mechanism to guide our network to focus on blurred regions; and 3) a blur-aware patch cropping strategy to address the data imbalance problem. Extensive experiments prove the reliability of the ReLoBlur dataset, and demonstrate that LBAG achieves better performance than state-of-the-art global deblurring methods and that our proposed local blur-aware techniques are effective.
APA, Harvard, Vancouver, ISO, and other styles
10

Dongming, Li, Su Zhengbo, Su Wei, and Zhang Lijuan. "Research on Cross-Correlative Blur Length Estimation Algorithm in Motion Blur Image." Journal of Advanced Computational Intelligence and Intelligent Informatics 20, no. 1 (2016): 155–62. http://dx.doi.org/10.20965/jaciii.2016.p0155.

Full text
Abstract:
This paper proposes a motion blur length estimation method that is applied to motion blur image restoration. This method applies a cross-correlation algorithm to multi-frame motion-degraded images. In order to find the motion blur parameters, the Radon transform method is used to estimate the motion blur angle. We extract the gray value of pixels around the blur center, calculate the correlation to obtain the motion blur length, and use the Lucy-Richardson iterative algorithm to restore the degraded image. Experimental results show that this method can accurately estimate blur parameters, reduce noise, and obtain better restoration results. The method achieves good results on both artificially blurred images and natural images blurred by camera shake. Compared with the Wiener filtering algorithm, the Lucy-Richardson-based restoration requires less computation time and produces better restored results.
APA, Harvard, Vancouver, ISO, and other styles
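Several of the entries above use Lucy-Richardson deconvolution as the restoration step. Below is a minimal NumPy sketch of the classical algorithm, assuming a known PSF, circular boundary handling, and no explicit noise model; it is an illustration of the general technique, not the paper's implementation.

```python
import numpy as np

def fft_convolve(image, psf):
    """Circular 2-D convolution via the FFT; the PSF is zero-padded to
    the image shape and rolled so its center sits at the origin."""
    pad = np.zeros_like(image)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """Classical Richardson-Lucy deconvolution (multiplicative updates).

    Each iteration re-blurs the current estimate, compares it with the
    observed image, and back-projects the ratio with the flipped PSF.
    """
    estimate = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iterations):
        denom = fft_convolve(estimate, psf) + eps
        estimate = estimate * fft_convolve(blurred / denom, psf_flip)
    return estimate
```

With noisy inputs the iteration count acts as an implicit regularizer: too many iterations amplify noise, which is one reason the paper pairs the deconvolution with an explicit blur-parameter estimate.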
More sources

Dissertations / Theses on the topic "Motion blur"

1

Sieberth, Till. "Motion blur in digital images : analysis, detection and correction of motion blur in photogrammetry." Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/20212.

Full text
Abstract:
Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution, due to the low flight altitudes combined with a high resolution camera. UAV image flights are also cost-effective and have become attractive for many applications including change detection in small scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image-sets. To increase the quality of data processing an automated process is necessary, which must be both reliable and quick. This thesis demonstrates the negative effect that blurred images have on photogrammetric processing. It shows that small amounts of blur have serious impacts on target detection and slow down processing due to the requirement of human intervention. Larger blur can make an image completely unusable, and such images need to be excluded from processing. To exclude such images from large image datasets, an algorithm was developed. The newly developed method makes it possible to detect blur caused by linear camera displacement. The method is based on human detection of blur: humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing.
Creating a comparison image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), does not on its own provide an absolute number for judging whether an image is blurred. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. This algorithm enables the exclusion of blurred images and subsequently allows photogrammetric processing without them. However, it is also possible to use deblurring techniques to restore blurred images. Deblurring of images is a widely researched topic and is often based on the Wiener or Richardson-Lucy deconvolution, which require precise knowledge of both the blur path and extent. Even with knowledge of the blur kernel, the correction causes errors such as ringing, and the deblurred image appears muddy and not completely sharp. In the study reported here, overlapping images are used to support the deblurring process. An algorithm based on the Fourier transform is presented. This works well in flat areas, but the need for geometrically correct sharp images for deblurring may limit the application. Another method to enhance the image is the unsharp mask method, which improves images significantly and makes photogrammetric processing more successful. However, deblurring of images needs to focus on geometrically correct deblurring to assure geometrically correct measurements. Furthermore, a novel edge-shifting approach was developed which aims to perform geometrically correct deblurring. The idea of edge shifting appears promising but requires more advanced programming.
APA, Harvard, Vancouver, ISO, and other styles
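The SIEDS metric itself is not reproduced here, but its core idea, judging sharpness by comparing an image against a processed version of itself and interpreting the score only relative to the rest of the dataset, has a simple classical analogue in re-blur comparison. A hedged NumPy sketch with illustrative names of our own:

```python
import numpy as np

def reblur_score(image):
    """No-reference blurriness score in (0, 1].

    Re-blur the image with a fixed box kernel and measure how much
    horizontal gradient energy survives: a sharp image loses most of it
    (score stays low), while an already-blurred image loses little
    (score approaches 1). As with SIEDS, the value is only meaningful
    relative to other images from the same dataset.
    """
    kernel = np.ones(9) / 9.0
    reblurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)
    g_orig = np.abs(np.diff(image, axis=1))
    g_reblur = np.abs(np.diff(reblurred, axis=1))
    lost = np.maximum(g_orig - g_reblur, 0.0).sum()
    return 1.0 - lost / (g_orig.sum() + 1e-12)
```

Ranking a flight's images by such a score and rejecting outliers mirrors the thesis's dataset-relative use of SIEDS, though the actual metric operates on saturation-image edges rather than raw gradients.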
2

Cho, Taeg Sang. "Motion blur removal from photographs." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62385.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 135-143). One of the long-standing challenges in photography is motion blur. Blur artifacts are generated from relative motion between a camera and a scene during exposure. While blur can be reduced by using a shorter exposure, this comes at an unavoidable trade-off with increased noise. Therefore, it is desirable to remove blur computationally. To remove blur, we need to (i) estimate how the image is blurred (i.e. the blur kernel or the point-spread function) and (ii) restore a natural looking image through deconvolution. Blur kernel estimation is challenging because the algorithm needs to distinguish the correct image-blur pair from incorrect ones that can also adequately explain the blurred image. Deconvolution is also difficult because the algorithm needs to restore high frequency image contents attenuated by blur. In this dissertation, we address a few aspects of these challenges. We introduce an insight that a blur kernel can be estimated by analyzing edges in a blurred photograph. Edge profiles in a blurred image encode projections of the blur kernel, from which we can recover the blur using the inverse Radon transform. This method is computationally attractive and is well suited to images with many edges. Blurred edge profiles can also serve as additional cues for existing kernel estimation algorithms. We introduce a method to integrate this information into a maximum-a-posteriori kernel estimation framework, and show its benefits. Deconvolution algorithms restore information attenuated by blur using an image prior that exploits a heavy-tailed gradient profile of natural images. We show, however, that such a sparse prior does not accurately model textures, thereby degrading texture renditions in restored images.
To address this issue, we introduce a content-aware image prior that adapts its characteristics to local textures. The adapted image prior improves the quality of textures in restored images. Sometimes even the content-aware image prior may be insufficient for restoring rich textures. This issue can be addressed by matching the restored image's gradient distribution to its original image's gradient distribution, which is estimated directly from the blurred image. This new image deconvolution technique called iterative distribution reweighting (IDR) improves the visual realism of reconstructed images. Subject motion can also cause blur. Removing subject motion blur is especially challenging because the blur is often spatially variant. In this dissertation, we address a restricted class of subject motion blur: the subject moves at a constant velocity locally. We design a new computational camera that improves the local motion estimation and, at the same time, reduces the image information loss due to blur. By Taeg Sang Cho. Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
3

Rekleitis, Ioannis. "Visual motion estimation based on motion blur interpretation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ44103.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rekleitis, Ioannis. "Visual motion estimation based on motion blur interpretation." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20154.

Full text
Abstract:
When the relative velocity between the different objects in a scene and the camera is relatively large, compared with the camera's exposure time, the resulting image exhibits a distortion called motion blur. In the past, many algorithms have been proposed for estimating the relative velocity from one or, most of the time, more images. Motion blur is generally considered an extra source of noise and is eliminated, or is assumed nonexistent. Unlike most of these approaches, it is feasible to estimate the optical flow map using only the information encoded in the motion blur. This thesis presents an algorithm that estimates the velocity vector of an image patch using the motion blur only, in two steps. The information used for the estimation of the velocity vectors is extracted from the frequency domain, and the most computationally expensive operation is the Fast Fourier Transform that transforms the image from the spatial to the frequency domain. Consequently, the complexity of the algorithm is bounded by this operation at O(n log n). The first step uses the response of a family of steerable filters applied to the log of the power spectrum in order to calculate the orientation of the velocity vector. The second step uses a technique called cepstral analysis: the log power spectrum is treated as another signal, and we examine its inverse Fourier transform in order to estimate the magnitude of the velocity vector. Experiments have been conducted on artificially blurred images and on real-world data, and an error analysis of these results is also presented.
APA, Harvard, Vancouver, ISO, and other styles
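The cepstral-analysis step described in the abstract can be sketched in a few lines. This is an illustrative reimplementation of the general technique, not the thesis code: for a horizontal uniform blur, the averaged log power spectrum of the rows has periodic notches, and its inverse FFT (the cepstrum) shows a strong negative peak at a quefrency equal to the blur length.

```python
import numpy as np

def cepstral_blur_length(image, max_length=64):
    """Estimate the length of a horizontal uniform motion blur.

    The log power spectrum of a length-L uniform blur has periodic
    notches, so its inverse FFT (the cepstrum) carries a strong
    negative peak at quefrency L. Averaging the row log-spectra
    suppresses the contribution of the image content itself.
    """
    power = np.abs(np.fft.fft(image, axis=1)) ** 2
    mean_log = np.log(power + 1e-12).mean(axis=0)
    cepstrum = np.real(np.fft.ifft(mean_log))
    # Skip quefrencies 0 and 1, which are dominated by overall gain.
    return int(np.argmin(cepstrum[2:max_length])) + 2
```

The thesis generalizes this 1-D idea to arbitrary blur directions by first recovering the orientation with steerable filters, then reading the magnitude off the cepstrum along that direction.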
5

Wolf, Johanna. "Motion Blur with point-based rendering." Zurich : ETH, Eidgenössische Technische Hochschule, D-INFK, Institut für Visual Computing, 2008. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Meredith-Jones, Ryan. "Point-sampling algorithms for simulating motion blur." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0012/MQ53389.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Uzer, Ferit. "Camera Motion Blur And Its Effect On Feature Detectors." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612475/index.pdf.

Full text
Abstract:
Perception, and hence the use of visual sensors, is indispensable in mobile and autonomous robotics. Visual sensors such as cameras rigidly mounted on a robot frame are the most common usage scenario. In this case, the motion of the camera due to the motion of the moving platform, as well as the resulting shocks or vibrations, causes a number of distortions in video frame sequences. The two most important ones are the frame-to-frame changes of the line-of-sight (LOS) and the presence of motion blur in individual frames. The latter, motion blur, plays a particularly dominant role in determining the performance of many vision algorithms used in mobile robotics. It is caused by the relative motion between the vision sensor and the scene during the exposure time of the frame. Motion blur is clearly an undesirable phenomenon in computer vision, not only because it degrades the quality of images but also because it causes other feature extraction procedures to degrade or fail. Although there are many studies on feature-based tracking, navigation, and object recognition algorithms in the computer vision and robotics literature, there is no comprehensive work on the effects of motion blur on different image features and their extraction. In this thesis, a survey of existing models of motion blur and approaches to motion deblurring is presented. We review recent literature on motion blur and deblurring, and we focus our attention on motion blur induced degradation of a number of popular feature detectors. We investigate and characterize this degradation using video sequences captured by the vision system of a mobile legged robot platform. The Harris corner detector, the Canny edge detector and the Scale Invariant Feature Transform (SIFT) are chosen as the popular feature detectors most commonly used for mobile robotics applications.
The performance degradation of these feature detectors due to motion blur are categorized to analyze the effect of legged locomotion on feature performance for perception. These analysis results are obtained as a first step towards the stabilization and restoration of video sequences captured by our experimental legged robotic platform and towards the development of motion blur robust vision system.
APA, Harvard, Vancouver, ISO, and other styles
8

Eisen, Paul S. "Characterizing the perceived quality degradation of still-camera motion blur." Diss., This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-08232007-111933/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jones, Nathaniel Earl. "Real-time geometric motion blur for a deforming polygonal mesh." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/468.

Full text
Abstract:
Motion blur is one important method for increasing the visual quality of real-time applications. This is increasingly true in the area of interactive applications, where designers often seek to add graphical flair or realism to their programs. These applications often have animated characters with a polygonal mesh wrapped around an animated skeleton; and as the skeleton moves the mesh deforms with it. This thesis presents a method for adding a geometric motion blur to a deforming polygonal mesh. The scheme presented tracks an object's motion silhouette, and uses this to create a polygonal mesh. When this mesh is added to the scene, it gives the appearance of a motion blur on a single object or particular character. The method is generic enough to work on nearly any type of moving polygonal model. Examples are given that show how the method could be expanded and how changes could be made to improve its performance.
APA, Harvard, Vancouver, ISO, and other styles
10

Gibb, Andrew. "Dealing with time varying motion blur in image feature matching." Thesis, University of Bristol, 2018. http://hdl.handle.net/1983/994dc762-8be3-4e29-8a8b-3ef28dd88ed1.

Full text
Abstract:
Motion blur is present in many images and can be due to many causes: from shaky hand-held photographs, the panning of 24 frames-per-second feature film cameras, a broadcast camera following a sprinter, or a camera on an autonomous robot. Judicious choice of camera parameters, illumination, and object speed can mitigate motion blur in some circumstances, but often it is unavoidable, or even desirable. For example, in the particular case of feature film and broadcast video, some amount of motion blur is desired, as it aids the illusion of a moving object created by a succession of rapidly presented still images. For video analysis, however, motion blur remains an obstacle. Much of the work to date in visual analysis, and particularly in image matching, has not addressed motion blur. In cases where both images are similarly blurred, this is not problematic, as these images appear similar and can readily be identified as such. However, when the motion blur differs between frames, many existing approaches fail or offer significantly reduced performance. This thesis presents experiments that verify the model of motion blur as a rectangular filter, which relates un-blurred images to blurred ones. It then proposes a modification to phase correlation based on this rectangular-filter model of motion blur, which is shown to perform as well as the best existing methods from the literature. Finally, modifications to SIFT descriptor matching are proposed and tested. One of the methods increases the accuracy of correct matching of SIFT features by up to 60% for the case of matching a non-blurred image region to a blurred one.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Motion blur"

1

Zimmermann, Axel, Hans-Joachim Müller, and Andy Denzler. Andy Denzler: Blur motion paintings. Galerie von Braunbehrens, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Onedotzero (Firm), ed. Motion blur: onedotzero: graphic moving imagemakers. Laurence King, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Meredith-Jones, Ryan. Point-sampling algorithms for simulating motion blur. National Library of Canada, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Favaro, Paolo. 3-D shape estimation and image restoration: Exploiting defocus and motion blur. Springer, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Goss, Keith Michael. Multi-dimensional polygon-based rendering for motion blur and depth of field. Brunel University, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Flowers, Phyllis Jean (1945– ), ed. A blur of mass motion: reaching into the poetry written by a teenager as she battled manic depression: the poetry and writings of Erin Winona Flowers. Westview Pub., 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

British Film Institute, ed. Blue velvet. British Film Institute, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wiley, Sally D. Blue ice in motion: The story of Alaska's glaciers. Alaska Natural History Association, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nooijer, Menno de, and Françoise de Nooijer (artists, editors), Peter Delpeut (introduction), Michele Hutchison (translator), and Barends & Pijnappel (Firm), eds. Is heaven blue? Voetnoot, 2022.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Southern, Terry. Blue movie. Grove Press, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Motion blur"

1

Driemeyer, Thomas. "Motion Blur." In mental ray® Handbooks. Springer Vienna, 2001. http://dx.doi.org/10.1007/978-3-7091-3809-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kemp, Jonathan. "Motion blur." In Film on Video. Routledge, 2019. http://dx.doi.org/10.4324/9780429468872-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Joshi, Neel. "Motion Blur." In Computer Vision. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_512-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Joshi, Neel. "Motion Blur." In Computer Vision. Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Driemeyer, Thomas. "Motion Blur." In mental ray® Handbooks. Springer Vienna, 2000. http://dx.doi.org/10.1007/978-3-7091-3697-3_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Joshi, Neel. "Motion Blur." In Computer Vision. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Magnenat-Thalmann, Nadia, and Daniel Thalmann. "Antialiasing and motion blur." In Image Synthesis. Springer Japan, 1987. http://dx.doi.org/10.1007/978-4-431-68060-4_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kulla, Christopher, and Thiago Ize. "Motion Blur Corner Cases." In Ray Tracing Gems II. Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7185-8_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Upadhyay, Dhananjay K., Nikhil Kumar, and Neeta Kandpal. "Estimation of Motion Blur Parameter." In Springer Proceedings in Physics. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9259-1_79.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Goss, Keith M. "Polyhedral Rendering for Motion Blur." In Models and Techniques in Computer Animation. Springer Japan, 1993. http://dx.doi.org/10.1007/978-4-431-66911-1_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Motion blur"

1. Rao, M. Purnachandra, Sahana M. Prabhu, A. N. Rajagopalan, and Guna Seetharaman. "Camouflaging Motion Blur." In the 2014 Indian Conference. ACM Press, 2014. http://dx.doi.org/10.1145/2683483.2683568.

2. Dai, Shengyang, and Ying Wu. "Motion from blur." In 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2008. http://dx.doi.org/10.1109/cvpr.2008.4587582.

3. Sroubek, Filip, and Jan Kotera. "Motion Blur Prior." In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191316.

4. Paramanand, C., and A. N. Rajagopalan. "Motion blur for motion segmentation." In 2013 20th IEEE International Conference on Image Processing (ICIP). IEEE, 2013. http://dx.doi.org/10.1109/icip.2013.6738874.

5. Klyuvak, Andriy, Oksana Kliuva, and Ruslan Skrynkovskyy. "Partial Motion Blur Removal." In 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP). IEEE, 2018. http://dx.doi.org/10.1109/dsmp.2018.8478595.

6. Ritchie, Matt, Greg Modern, and Kenny Mitchell. "Split second motion blur." In ACM SIGGRAPH 2010 Talks. ACM Press, 2010. http://dx.doi.org/10.1145/1837026.1837048.

7. Gong, Dong, Jie Yang, Lingqiao Liu, et al. "From Motion Blur to Motion Flow: A Deep Learning Solution for Removing Heterogeneous Motion Blur." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.405.

8. Knezevic, Katarina, Emilija Mandic, Ranko Petrovic, and Branka Stojanovic. "Blur and Motion Blur Influence on Face Recognition Performance." In 2018 14th Symposium on Neural Networks and Applications (NEUREL). IEEE, 2018. http://dx.doi.org/10.1109/neurel.2018.8587028.

9. Zhong, Zhihang, Mingdeng Cao, Xiang Ji, Yinqiang Zheng, and Imari Sato. "Blur Interpolation Transformer for Real-World Motion from Blur." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.00553.

10. Sharan, Lavanya, Zhe Han Neo, Kenny Mitchell, and Jessica K. Hodgins. "Simulated motion blur does not improve player experience in racing game." In Motion. ACM Press, 2013. http://dx.doi.org/10.1145/2522628.2522653.


Reports on the topic "Motion blur"

1. Rajagopalan, Ambasamudram N. Framework for Processing Videos in the Presence of Spatially Varying Motion Blur. Defense Technical Information Center, 2016. http://dx.doi.org/10.21236/ada636878.

2. Bhatt, Parth, Curtis Edson, and Ann MacLean. Image Processing in Dense Forest Areas using Unmanned Aerial System (UAS). Michigan Technological University, 2022. http://dx.doi.org/10.37099/mtu.dc.michigantech-p/16366.

Abstract:
Imagery collected via Unmanned Aerial System (UAS) platforms has become popular in recent years due to improvements in Digital Single-Lens Reflex (DSLR) cameras (centimeter and sub-centimeter resolution), lower operating costs compared to human-piloted aircraft, and the ability to collect data over areas with limited ground access. Many different applications (e.g., forestry, agriculture, geology, archaeology) already utilize the advantages of UAS data. Although there are numerous UAS image-processing workflows, the approach can differ for each application. In this study, we developed a processing workflow for UAS imagery collected over a dense forest area (e.g., coniferous/deciduous forest and contiguous wetlands) that allows users to process large datasets with acceptable mosaicking and georeferencing errors. Imagery was acquired with near-infrared (NIR) and red, green, blue (RGB) cameras with no ground control points. The image quality of two different UAS collection platforms was observed. Agisoft Metashape, a photogrammetric suite that uses SfM (Structure from Motion) techniques, was used to process the imagery. The results showed that a UAS with a consumer-grade Global Navigation Satellite System (GNSS) onboard had better image alignment than a UAS with a lower-quality GNSS.
3. Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir, and Tom Porter. Automated imaging broiler chick sexing for gender-specific and efficient production. United States Department of Agriculture, 2014. http://dx.doi.org/10.32747/2014.7594391.bard.

Abstract:
Extending the previous two years of research results (Mizrach et al., 2012; Tao, 2011, 2012), the third year's efforts in both Maryland and Israel were directed towards the engineering of the system. The activities included robust chick handling and conveyor system development, optical system improvement, online dynamic motion imaging of chicks, multi-image-sequence optimal feather extraction and detection, and pattern recognition. Mechanical System Engineering: The third model of the mechanical chick handling system with a high-speed imaging system was built as shown in Fig. 1. This system has improved chick-holding cups and motion mechanisms that enable chicks to open their wings through the view section. The mechanical system has achieved a speed of 4 chicks per second, which exceeds the design specification of 3 chicks per second. In the center of the conveyor, a high-speed camera with a UV-sensitive optical system, shown in Fig. 2, was installed that captures chick images at multiple frames (45 images, system selectable) as the chick passes through the view area. Through intensive discussions and efforts, the PIs of Maryland and ARO created a protocol of joint hardware and software that uses sequential images of the chick in its falling motion to capture the opening wings and extract the optimal opening positions. This approach enables reliable feather feature extraction in dynamic motion and pattern recognition. Improving Chick Wing Deployment: The mechanical system for chick conveying, especially the section that causes chicks to deploy their wings wide open under the fast video camera and the UV light, was investigated during the third study year. As a natural behavior, chicks tend to deploy their wings as a means of balancing their body when a sudden change in vertical movement is applied. In the latest two years, this was achieved by causing the chicks to move in free fall, under earth gravity (g), along a short vertical distance. The chicks always tended to deploy their wings, but not always in a wide, horizontally open position. Such a position is required in order to get a successful image under the video camera. Besides, the cells with chicks bumped suddenly at the end of the free-fall path. That caused the chicks' legs to collapse inside the cells and the image of the wings to become blurred. To improve the movement and prevent the chicks' legs from collapsing, a slowing-down mechanism was designed and tested. This was done by installing a plastic block, printed with a predesigned variable slope (Fig. 3), at the end of the path of the falling cells (Fig. 4). The cells move down at a variable velocity according to the block slope and reach zero velocity at the end of the path. The slope was designed so that the deceleration becomes 0.8g instead of the free-fall gravity (g) present without the block. The tests showed better deployment and wider chick wing opening, as well as better balance along the movement. The design of additional block-slope sizes is under investigation. Slopes that create decelerations of 0.7g, 0.9g, and variable decelerations are being designed to improve the movement path and the images.
4. Moran, Nava, Richard Crain, and Wolf-Dieter Reiter. Regulation by Light of Plant Potassium Uptake through K Channels: Biochemical, Physiological and Biophysical Study. United States Department of Agriculture, 1995. http://dx.doi.org/10.32747/1995.7571356.bard.

Full text
Abstract:
The swelling of plant motor cells is regulated by various signals with almost unknown mediators. One of the obligatory steps in the signaling cascade is the activation of K+-influx channels, i.e., K+ channels activated by hyperpolarization (KH channels). We thus explored the regulation of these channels in our model system, motor cell protoplasts from Samanea saman, using patch clamp in the "whole-cell" configuration. (a) The most novel finding was that the activity of KH channels in situ varied with the time of day, in positive correlation with cell swelling: in Extensor cells KH channels were active in the earlier part of the day, while in Flexor cells only during the later part of the day; (b) high internal pH promoted the activity of these channels in Extensor cells, opposite to the behavior of the equivalent channels in guard cells, but in conformity with the predicted behavior of the putative KH channel cloned recently from S. saman; (c) high external K+ concentration increased KH channel currents in Flexor cells. BL depolarized the Flexor cells, as detected in cell-attached patch-clamp recording, using KD channels (the K+-efflux channels) as "voltage-sensing devices". A subsequent red-light (RL) pulse followed by darkness hyperpolarized the cell. We attribute these changes to the inhibition of the H+ pump by BL and its reactivation by RL, as they were abolished by an H+-pump inhibitor. BL also increased the activity of KD channels, in a voltage-independent manner, in all probability by an independent signaling pathway. Blue light (BL), which stimulates shrinking of Flexor cells, evoked the IP3 signaling cascade (detected directly by an IP3 binding assay), known to mobilize cytosolic Ca2+. Nevertheless, cytosolic Ca2+ did not activate the KD channel in excised, inside-out patches. In this study we established a close functional similarity of the KD channels between Flexor and Extensor cells. Thus the differences in their responses must stem from different links to signaling in the two cell types.
5. Rickels, Wilfried. Database and report on currently already existing or announced ocean NETs projects, including a world map of projects. OceanNets, 2023. http://dx.doi.org/10.3289/oceannets_d1.8_v3.

Abstract:
The Carbon Dioxide Removal (CDR) market is experiencing rapid development, with different regions adopting distinct approaches. In Europe, the progress is primarily driven top-down through the implementation of regulations aimed at integrating CDR into various climate instrument pillars within the EU. In contrast, the United States is witnessing a bottom-up growth trajectory, characterized by the emergence of start-ups, carbon registries, marketplaces, and insurance companies, all playing a role in the expansion of the CDR sector. This surge in CDR-related businesses has been further catalyzed by substantial subsidies, particularly through the recent adjustments made to the 45Q tax credit system. The amendments were introduced as part of the "Inflation Reduction Act" (IRA) and the "Bipartisan Infrastructure Law" (BIL). Under these modifications, significant tax credits are offered for carbon capture and utilization at point sources, with subsequent storage (CCS). Notably, the tax credits have increased to 60 USD/tCO2 for carbon capture and utilization and storage at point sources, and to 85 USD/tCO2 for direct air capture and storage. The tax credits go even higher, amounting to 130 and 180 USD/tCO2, respectively, for utilization and storage if the carbon is directly removed from the air. In addition to these measures, the IRA and BIL also allocate substantial funding for forestry and sequestration projects, carbon transport infrastructure, and carbon removal hubs to test and develop technologies. Simultaneously, some top-down initiatives have been set in motion in the US, exemplified by the introduction of the Carbon Dioxide Removal Market Development Act as part of California's Cap-and-Trade Program. This act mandates emitting entities to offset a certain percentage of their emissions through CDR in subsequent years, culminating in full compensation of emissions with CDR by 2045. Moreover, the act emphasizes the promotion of domestic development by requiring that at least 50% of the negative emissions credits used by an emitting entity originate from CDR processes that directly mitigate climate impacts within the state. Against this backdrop, it comes as no surprise that the CDR start-up scene is predominantly dominated by US companies, with ocean-based removal companies accounting for approximately 10 percent of the market. However, despite their presence, ocean-based CDR projects are currently limited, with the majority focused on blue carbon projects, particularly mangrove restoration, and only a few exploring other ocean-based CDR methods. The land-based portion of the CDR market appears to be effectively addressing accounting, verification, and registry aspects, primarily due to market demand or existing regulations. Nevertheless, the development of such bottom-up approaches remains less likely for open-access schemes like ocean-based CDR initiatives.
6. Christopher, David A., and Avihai Danon. Plant Adaptation to Light Stress: Genetic Regulatory Mechanisms. United States Department of Agriculture, 2004. http://dx.doi.org/10.32747/2004.7586534.bard.

Abstract:
Original Objectives: 1. Purify and biochemically characterize RB60 orthologs in higher plant chloroplasts; 2. Clone the gene(s) encoding plant RB60 orthologs and determine their structure and expression; 3. Manipulate the expression of RB60; 4. Assay the effects of altered RB60 expression on thylakoid biogenesis and photosynthetic function in plants exposed to different light conditions. In addition, we also examined the gene structure and expression of RB60 orthologs in the non-vascular plant Physcomitrella patens and cloned the poly(A)-binding protein orthologue (43 kDa RB47-like protein). This protein is believed to be a partner that interacts with RB60 to bind to the psbA 5' UTR. Thus, obtaining a comprehensive view of RB60 function requires analysis of its biochemical partners such as RB43. Background & Achievements: High levels of sunlight reduce photosynthesis in plants by damaging the photosystem II reaction center (PSII) subunits, such as D1 (encoded by the chloroplast psbA gene). When the rate of D1 synthesis is less than the rate of photodamage, photoinhibition occurs and plant growth is decreased. Plants use light-activated translation and enhanced psbA mRNA stability to maintain D1 synthesis and replace the photodamaged D1. Despite the importance to photosynthetic capacity, these mechanisms are poorly understood in plants. One intriguing model, derived from the algal chloroplast system Chlamydomonas, implicates three proteins (RB60, RB47, RB38) that bind to the psbA mRNA 5' untranslated leader (5' UTR) in the light to activate translation or enhance mRNA stability. RB60 is the key enzyme, protein disulfide isomerase (PDI), that regulates the psbA RNA-binding proteins (RBs) by way of light-mediated redox potentials generated by the photosystems. However, proteins with these functions have not been described in higher plants. We provided compelling evidence for the existence of RB60, RB47 and RB38 orthologs in the vascular plant Arabidopsis. Using gel mobility shift, RNase protection and UV-crosslinking assays, we have shown that a dithiol redox mechanism resembling a PDI (RB60) activity regulates the interaction of 43- and 30-kDa proteins with a thermolabile stem-loop in the 5' UTR of the psbA mRNA from Arabidopsis. We discovered that, in Arabidopsis, the PDI gene family consists of 11 members that differ in polypeptide length from 361 to 566 amino acids, presence of signal peptides, KDEL motifs, and the number and positions of thioredoxin domains. PDIs catalyze the reversible formation and isomerization of disulfide bonds necessary for the proper folding, assembly, activity, and secretion of numerous enzymes and structural proteins. PDIs have also evolved novel cellular redox functions, as single enzymes and as subunits of protein complexes in organelles. We provide evidence that at least one PDI is localized to the chloroplast. We have used PDI-specific polyclonal and monoclonal antisera to characterize the PDI (55 kDa) in the chloroplast, which is unevenly distributed between the stroma and the pellet (containing membranes, DNA, polysomes, starch), being three-fold more abundant in the pellet phase. PDI-55 levels increase with light intensity, and it assembles into a high-molecular-weight complex of ~230 kDa as determined on native blue gels. In vitro translation of all 11 different PDIs followed by microsomal membrane processing reactions was used to differentiate among PDIs localized in the endoplasmic reticulum or other organelles. These results will provide insights into the redox regulatory mechanisms involved in adaptation of the photosynthetic apparatus to light stress. Elucidating the genetic mechanisms and factors regulating chloroplast photosynthetic genes is important for developing strategies to improve photosynthetic efficiency, crop productivity and adaptation to high-light environments.