Academic literature on the topic 'Defocus blur estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Defocus blur estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Defocus blur estimation"

1. Ishihara, Shin, Antonin Sulc, and Imari Sato. "Depth estimation using spectrally varying defocus blur." Journal of the Optical Society of America A 38, no. 8 (2021): 1140. http://dx.doi.org/10.1364/josaa.422059.
2. Karaali, Ali, and Claudio Rosito Jung. "Edge-Based Defocus Blur Estimation With Adaptive Scale Selection." IEEE Transactions on Image Processing 27, no. 3 (2018): 1126–37. http://dx.doi.org/10.1109/tip.2017.2771563.
3. Chang, Chia-Feng, Jiunn-Lin Wu, and Ting-Yu Tsai. "A Single Image Deblurring Algorithm for Nonuniform Motion Blur Using Uniform Defocus Map Estimation." Mathematical Problems in Engineering 2017 (2017): 1–14. http://dx.doi.org/10.1155/2017/6089650.

Abstract:
One of the most common artifacts in digital photography is motion blur. When capturing an image under dim light with a handheld camera, the tendency of the photographer's hand to shake causes the image to blur. In response to this problem, image deblurring has become an active topic in computational photography and image processing in recent years. From the view of signal processing, image deblurring can be reduced to a deconvolution problem if the kernel function of the motion blur is assumed to be shift invariant. However, the kernel function is not always shift invariant in real cases; for example, in-plane rotation of a camera or a moving object can blur different parts of an image according to different kernel functions. An image that is degraded by multiple blur kernels is called a nonuniform blur image. In this paper, we propose a novel single image deblurring algorithm for nonuniform motion blur images that are blurred by moving objects. First, a uniform defocus map method is proposed to measure the amounts and directions of motion blur. These blurred regions are then used to estimate point spread functions simultaneously. Finally, a fast deconvolution algorithm is used to restore the nonuniform blur image. We expect that the proposed method can achieve satisfactory deblurring of a single nonuniform blur image.
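As the abstract notes, the shift-invariant case reduces to classical deconvolution. For readers who want to see the baseline such methods build on, here is a minimal Wiener-deconvolution sketch in Python (NumPy/SciPy); it assumes a known Gaussian kernel and illustrates the general technique, not code from the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

def centered_kernel_fft(kernel, shape):
    """Zero-pad the kernel to the image size and center it at the origin,
    so the deconvolved result is not spatially shifted."""
    padded = np.zeros(shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    """Wiener filter: F = conj(H) * G / (|H|^2 + NSR), where NSR is the
    assumed noise-to-signal power ratio acting as a regularizer."""
    H = centered_kernel_fft(kernel, blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

def gaussian_kernel(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

# Toy usage: blur a random scene, then restore it (borders differ slightly
# because the FFT assumes circular convolution).
scene = np.random.rand(128, 128)
blurred = fftconvolve(scene, gaussian_kernel(), mode="same")
restored = wiener_deconvolve(blurred, gaussian_kernel())
```

The nonuniform case the paper targets cannot be handled this way directly, which is why it first segments the image into regions with locally uniform kernels.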
4. Yang, Haoyuan, Xiuqin Su, and Songmao Chen. "Blind Image Deconvolution Algorithm Based on Sparse Optimization with an Adaptive Blur Kernel Estimation." Applied Sciences 10, no. 7 (2020): 2437. http://dx.doi.org/10.3390/app10072437.

Abstract:
Image blurs are a major source of degradation in an imaging system. There are various blur types, such as motion blur and defocus blur, which reduce image quality significantly. Therefore, it is essential to develop methods for recovering approximated latent images from blurry ones to increase the performance of the imaging system. In this paper, an image blur removal technique based on sparse optimization is proposed. Most existing methods use different image priors to estimate the blur kernel but are unable to fully exploit local image information. The proposed method adopts an image prior based on nonzero measurement in the image gradient domain and introduces an analytical solution, which converges quickly without additional searching iterations during the optimization. First, a blur kernel is accurately estimated from a single input image with an alternating scheme and a half-quadratic optimization algorithm. Subsequently, the latent sharp image is revealed by a non-blind deconvolution algorithm with the hyper-Laplacian distribution-based priors. Additionally, we analyze and discuss its solutions for different prior parameters. According to the tests we conducted, our method outperforms similar methods and could be suitable for dealing with image blurs in real-life applications.
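For context, blind deblurring objectives of this family are usually written as MAP estimation with a sparse gradient prior; a generic form (our paraphrase, with the exponent below 1 giving the hyper-Laplacian prior the abstract mentions; the paper's exact data term and weights may differ) is

```latex
\min_{x,\,k}\ \|x \otimes k - y\|_2^2
  \;+\; \lambda \sum_i \left( |(\partial_h x)_i|^{\alpha} + |(\partial_v x)_i|^{\alpha} \right)
  \;+\; \gamma \|k\|_2^2, \qquad 0 < \alpha < 1,
```

where y is the blurry input, x the latent sharp image, and k the blur kernel. Half-quadratic optimization alternates between a quadratic (FFT-solvable) subproblem in x and independent per-pixel subproblems in auxiliary gradient variables, which is what makes analytical solutions for particular values of the exponent valuable.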
5. Lin, Huei-Yung, Chin-Chen Chang, and Xin-Han Chou. "No-reference objective image quality assessment using defocus blur estimation." Journal of the Chinese Institute of Engineers 40, no. 4 (2017): 341–46. http://dx.doi.org/10.1080/02533839.2017.1314193.
6. Zhang, Xinxin, Ronggang Wang, Xiubao Jiang, Wenmin Wang, and Wen Gao. "Spatially variant defocus blur map estimation and deblurring from a single image." Journal of Visual Communication and Image Representation 35 (February 2016): 257–64. http://dx.doi.org/10.1016/j.jvcir.2016.01.002.
7. Pan, Rongjiang. "Defocus blur estimation in calibrated multi-view images for 3D archaeological documentation." Digital Applications in Archaeology and Cultural Heritage 14 (September 2019): e00109. http://dx.doi.org/10.1016/j.daach.2019.e00109.
8. Zia, Ali, Jun Zhou, and Yongsheng Gao. "Exploring Chromatic Aberration and Defocus Blur for Relative Depth Estimation From Monocular Hyperspectral Image." IEEE Transactions on Image Processing 30 (2021): 4357–70. http://dx.doi.org/10.1109/tip.2021.3071682.
9. Deschênes, F., D. Ziou, and P. Fuchs. "Improved estimation of defocus blur and spatial shifts in spatial domain: a homotopy-based approach." Pattern Recognition 36, no. 9 (2003): 2105–25. http://dx.doi.org/10.1016/s0031-3203(03)00040-2.
10. Deschênes, F., D. Ziou, and P. Fuchs. "An unified approach for a simultaneous and cooperative estimation of defocus blur and spatial shifts." Image and Vision Computing 22, no. 1 (2004): 35–57. http://dx.doi.org/10.1016/j.imavis.2003.08.003.
More sources

Dissertations / Theses on the topic "Defocus blur estimation"

1. Karaali, Ali. "Spatially varying defocus blur estimation and applications." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157544.

Abstract:
This dissertation presents two different defocus blur estimation methods for still images. Both methods assume a Gaussian Point Spread Function (PSF) and explore the ratio of gradient magnitudes of reblurred versions of the image computed at edge locations with different scales, which provides a closed-form mathematical formulation for the local blur assuming continuous-time signals. The first approach computes 1D profiles along edge points orthogonal to the local contour, and evaluates the location of the edge (maximum of the derivative) to adaptively select the number of reblurring scales. Considering the time consumption of exploring 1D oriented edge profiles, a second method was proposed based on 2D multiscale image gradients, with local reblurring parameters selected based on the agreement of an edge detector computed at several scales. Given an initial estimate of the blur scale at edge locations provided by either of these two methods, a correction step that accounts for the discretization of the continuous formulation is also proposed. A novel local filtering method that smooths the refined estimates along the image contours is also proposed, and a fast joint-domain filter is explored to propagate blur information to the whole image, generating the full blur map. Experimental results on synthetic and real images show that the proposed methods give promising results for defocus blur estimation, with a good trade-off between running time and accuracy when compared to state-of-the-art defocus blur estimation methods. To deal with blurry video sequences, temporal consistency was also included in the proposed model. More precisely, Kalman Filters were applied to generate smooth temporal estimates for each pixel when the local appearance of the video sequence does not vary much, while allowing sharp transitions during drastic local appearance changes, which might relate to occlusions/disocclusions. Finally, this dissertation also shows applications of the proposed methods for image and video blur estimation. A new image retargeting method is proposed for photos taken by a shallow depth-of-field (DoF) camera. The method includes defocus blur information in the seam carving framework, aiming to preserve in-focus objects with better visual quality. Assuming that the in-focus pixels relate to the regions of interest of a blurry image, the proposed retargeting method starts with a cropping step, which removes the unimportant (blurry) parts of the image; the seam carving method is then applied with a novel energy function that prioritizes in-focus regions. Experimental results show that the proposed blur-aware retargeting method preserves in-focus objects better than other well-known retargeting methods. The dissertation also explores the proposed blur estimation method in the context of image and video deblurring, and results were compared with several other blur estimation methods. The obtained results show that metrics typically used to evaluate blur estimation methods (e.g., Mean Absolute Error) might not be correlated with the quality of deblurred-image metrics, such as Peak Signal-to-Noise Ratio.
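The gradient-ratio idea at the core of both methods admits a compact closed form: an ideal step edge blurred by a Gaussian PSF of unknown scale sigma and then reblurred with a known scale s_i has gradient magnitude at the edge proportional to 1/sqrt(sigma^2 + s_i^2), so the ratio of two reblurred gradients determines sigma. A minimal 1D sketch under those assumptions (illustrative only; the thesis's adaptive scale selection, discretization correction, and filtering steps are omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def blur_at_edge(signal, edge_idx, s1=1.0, s2=2.0):
    """Estimate the unknown Gaussian blur scale sigma at a step edge from the
    gradient-magnitude ratio of two reblurred versions (scales s1 < s2):
        R = g1/g2 = sqrt((sigma^2 + s2^2) / (sigma^2 + s1^2))
        =>  sigma^2 = (s2^2 - R^2 * s1^2) / (R^2 - 1)
    """
    g1 = np.abs(np.gradient(gaussian_filter1d(signal, s1)))[edge_idx]
    g2 = np.abs(np.gradient(gaussian_filter1d(signal, s2)))[edge_idx]
    R = g1 / g2                                   # > 1 since s1 < s2
    sigma_sq = (s2 ** 2 - R ** 2 * s1 ** 2) / (R ** 2 - 1)
    return np.sqrt(max(sigma_sq, 0.0))

# Toy check: a step edge blurred with sigma = 1.5 should be recovered
# (up to a small discretization bias -- the kind of error the thesis's
# correction step is designed to attenuate).
x = np.zeros(201)
x[100:] = 1.0
blurred = gaussian_filter1d(x, 1.5)
print(blur_at_edge(blurred, edge_idx=100))        # approx. 1.5
```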
2. Pinheiro de Carvalho, Marcela. "Deep Depth from Defocus: Neural Networks for Monocular Depth Estimation." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS609.

Abstract:
Depth estimation from a single image is a key instrument for several applications, from robotics to virtual reality. Successful deep learning approaches in computer vision tasks such as object recognition and classification have also benefited the domain of depth estimation. In this thesis, we develop methods for monocular depth estimation with deep neural networks by exploring different cues: defocus blur and semantics. We conduct several experiments to understand the contribution of each cue in terms of generalization and model performance. At first, we propose an efficient convolutional neural network for depth estimation along with a conditional generative adversarial framework. Our method achieves performances among the best on standard datasets for depth estimation. Then, we propose to explore defocus blur cues, an optical information deeply related to depth. We show that deep models are able to implicitly learn and use this information to improve performance and overcome known limitations of classical depth-from-defocus. We also build a new dataset with real focused and defocused images that we use to validate our approach. Finally, we explore the use of semantic information, which brings rich contextual information while learned jointly with depth in a multi-task approach. We validate our approaches with several datasets containing indoor, outdoor, and aerial images.
3. Lertrusdachakul, Intuon. "A novel 3D recovery method by dynamic (de)focused projection." PhD thesis, Université de Bourgogne, 2011. http://tel.archives-ouvertes.fr/tel-00695321.

Abstract:
This thesis presents a novel 3D recovery method based on structured light. The method unifies depth from focus (DFF) and depth from defocus (DFD) techniques with the use of a dynamic (de)focused projection. With this approach, the image acquisition system is specifically constructed to keep the whole object sharp in all of the captured images. Therefore, only the projected patterns experience different defocused deformations according to the object's depths. When the projected patterns are out of focus, their Point Spread Function (PSF) is assumed to follow a Gaussian distribution. The final depth is computed by analyzing the relationship between the sets of PSFs obtained from different blurs and the variation of the object's depths. Our new depth estimation can be employed as a stand-alone strategy. It is free of occlusion and correspondence issues. Moreover, it handles textureless and partially reflective surfaces. The experimental results on real objects demonstrate the effective performance of our approach, providing reliable depth estimation and competitive time consumption. It uses fewer input images than DFF, and unlike DFD, it ensures that the PSF is locally unique.
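The depth-blur link that (de)focus techniques like this exploit comes from thin-lens geometry: for a lens of focal length f and aperture diameter A focused at distance S_f, a point at distance S produces a blur circle of diameter (a standard result; the Gaussian PSF scale is then commonly taken as sigma proportional to c with a calibration constant)

```latex
c \;=\; A \,\frac{|S - S_f|}{S}\,\frac{f}{S_f - f},
```

so once the pattern's PSF spread is measured at a pixel, depth follows from the known camera and projector constants.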
4. Lertrusdachakul, Intuon. "A novel 3D recovery method by dynamic (de)focused projection." Thesis, Dijon, 2011. http://www.theses.fr/2011DIJOS057/document.

Abstract:
This thesis presents a novel 3D recovery method based on structured light. The method unifies depth from focus (DFF) and depth from defocus (DFD) techniques with the use of a dynamic (de)focused projection. With this approach, the image acquisition system is specifically constructed to keep the whole object sharp in all of the captured images. Therefore, only the projected patterns experience different defocused deformations according to the object's depths. When the projected patterns are out of focus, their Point Spread Function (PSF) is assumed to follow a Gaussian distribution. The final depth is computed by analyzing the relationship between the sets of PSFs obtained from different blurs and the variation of the object's depths. Our new depth estimation can be employed as a stand-alone strategy. It is free of occlusion and correspondence issues. Moreover, it handles textureless and partially reflective surfaces. The experimental results on real objects demonstrate the effective performance of our approach, providing reliable depth estimation and competitive time consumption. It uses fewer input images than DFF, and unlike DFD, it ensures that the PSF is locally unique.
5. Chou, Xinhan (周昕翰). "Defocus Blur Identification for Depth Estimation and Image Quality Assessment." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/51786532470766991315.

Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Electrical Engineering, ROC academic year 101 (2012–13). In this thesis, we present a defocus blur identification technique based on histogram analysis of an image. The image defocus process is formulated by incorporating the non-linear camera response and an intensity-dependent noise model. The histogram matching between the synthesized and real defocused regions is then carried out with intensity-dependent filtering. By iteratively changing the point-spread function parameters, the best blur extent is identified from the histogram comparison. The presented technique is first applied to depth measurement using the defocus information. It is also used for image quality assessment applications, specifically those associated with optical defocus blur. We have performed experiments on real-scene images for both applications. The results demonstrate the robustness and feasibility of the proposed technique.
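The iterative identification loop described above can be sketched as a simple one-dimensional search: synthesize candidate defocused versions of a reference region over a range of PSF scales and keep the scale whose intensity histogram best matches the observed region. A minimal sketch under a Gaussian-PSF assumption (the thesis's non-linear camera response and intensity-dependent noise model are omitted; function names are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def histogram_distance(a, b, bins=64):
    """Chi-square distance between intensity histograms of two patches in [0, 1]."""
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0), density=True)
    return np.sum((ha - hb) ** 2 / (ha + hb + 1e-8))

def identify_blur(sharp_region, defocused_region,
                  sigmas=np.linspace(0.5, 5.0, 46)):
    """Return the PSF scale whose synthesized blur best matches the observation."""
    dists = [histogram_distance(gaussian_filter(sharp_region, s), defocused_region)
             for s in sigmas]
    return sigmas[int(np.argmin(dists))]
```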
6. Jamleh, Hani Ousamah Morad (詹霖). "Shallow Depth Map Estimation from Image Defocus Blur Point Spread Function Information." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/22213002179549490348.

Abstract:
Doctoral dissertation, National Taiwan University, Graduate Institute of Electronics Engineering, ROC academic year 102 (2013–14). The aim of this research is to address the influence of both the limited aperture size of the camera's optical imaging system and defocus aberration on output images, in order to measure useful information such as defocus and depth through the MTF (Modulation Transfer Function); we further analyze the existing defocus levels by measuring the size of blur kernels. One goal of our study is to produce shallow-depth-of-field photos with a blurry background. Photographers normally need a camera such as an SLR (single-lens reflex), carefully choosing the best position with respect to the object and adjusting the lens's effective focal length or aperture size, to obtain the artistic effect desired in many types of photographs (e.g., portraits); this is not available to users who prefer low-cost compact point-and-shoot cameras for their ease of use and convenience. Nowadays, TFT-LCDs (thin-film-transistor liquid-crystal displays) are getting larger, and as a result it becomes harder to inspect defects, which usually requires a human visual examiner to judge their severity on the final product. These defects, so-called mura (from the Japanese), are defined as visual blemishes with non-uniform shapes and boundaries, and they must be detected and inspected in order to characterize the LCD's quality. This research proposes two main contributions. First, given only two images taken under different camera parameters, we measure a reliable defocus map based on scale-space analysis, then propagate the defocus measures at edges to the entire image using a matting process, yielding a refined dense defocus map; this map is used in applications such as amplifying the existing blurriness to produce shallow-depth-of-field photos from all-in-focus images, and it also helps extract the foreground object's shape and isolate it from the background. Second, we experimentally detect many types of mura defects on LCD panels with low-complexity, effective post-processing imaging techniques. In practice, we use computational photography techniques to amplify defocus levels and to detect low-contrast defects such as mura. Our techniques allow average photographers to capture more appealing photos, and LCD manufacturers to increase their engineers' efficiency and performance. We show that this study can enable cameras and automated vision systems to embed useful computation with little user intervention.
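The blur-amplification step in the first contribution can be sketched per pixel: precompute the image blurred at several scales and, at each pixel, pick the layer closest to the amplified defocus value. A rough grayscale sketch assuming the dense defocus map is already available (the thesis's matting-based refinement and layer blending are omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def amplify_defocus(image, defocus_map, gain=3.0, n_layers=8):
    """Synthesize a shallow-DoF look by amplifying per-pixel defocus blur.
    image: 2D grayscale array; defocus_map: per-pixel blur scale (same shape)."""
    target = gain * defocus_map                  # amplified blur scale per pixel
    sigmas = np.linspace(0.0, target.max(), n_layers)
    layers = np.stack([gaussian_filter(image, s) if s > 0 else image
                       for s in sigmas])
    # index of the precomputed layer closest to each pixel's target scale
    idx = np.abs(sigmas[:, None, None] - target[None]).argmin(axis=0)
    return np.take_along_axis(layers, idx[None], axis=0)[0]
```

Hard switching between layers causes visible seams at layer boundaries, which is why practical pipelines blend adjacent layers instead.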
7. Chowdhury, Prodipto. "Estimation of Defocus Blur in Virtual Environments Comparing Graph Cuts and Convolutional Neural Network." Thesis, 2018. http://hdl.handle.net/1805/17925.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI). Depth estimation is one of the most important problems in computer vision. It has attracted a lot of attention because it has applications in many areas, such as robotics, VR and AR, and self-driving cars. Using the defocus blur of a camera lens is one of the methods of depth estimation. In this thesis, we have researched this technique in virtual environments, and virtual datasets have been created for this purpose. We have applied graph cuts and a convolutional neural network (DfD-Net) to estimate depth from defocus blur using a natural (Middlebury) and a virtual (Maya) dataset. Graph cuts showed similar performance for both datasets in terms of NMAE and NRMSE; with regard to SSIM, however, its performance is 4% better for Middlebury than for Maya. We have trained DfD-Net using the natural dataset, the virtual dataset, and both combined; the network trained on the virtual dataset performed best for both datasets. The performance of graph cuts and DfD-Net has been compared: graph cuts is 7% better than DfD-Net in terms of SSIM for Middlebury images, while for Maya images DfD-Net outperforms graph cuts by 2%. With regard to NRMSE, the two show similar performance for Maya images; for Middlebury images, graph cuts is 1.8% better. The algorithms show no difference in performance in terms of NMAE. DfD-Net generates depth maps 500 times faster than graph cuts for Maya images and 200 times faster for Middlebury images.
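For reference, the normalized error metrics quoted here are straightforward to compute; a sketch of NMAE and NRMSE as commonly defined for depth maps (normalization conventions vary, so the thesis's exact definitions may differ):

```python
import numpy as np

def nmae(pred, gt):
    """Normalized mean absolute error between predicted and ground-truth depth."""
    return np.mean(np.abs(pred - gt)) / (gt.max() - gt.min())

def nrmse(pred, gt):
    """Normalized root-mean-square error."""
    return np.sqrt(np.mean((pred - gt) ** 2)) / (gt.max() - gt.min())
```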
8. Chowdhury, Prodipto. "Estimation of Depth from Defocus Blur in Virtual Environments Comparing Graph Cuts and Convolutional Neural Network." Thesis, 2019.

Abstract:
Depth estimation is one of the most important problems in computer vision. It has attracted a lot of attention because it has applications in many areas, such as robotics, VR and AR, and self-driving cars. Using the defocus blur of a camera lens is one of the methods of depth estimation. In this thesis, we have researched this technique in virtual environments, and virtual datasets have been created for this purpose. We have applied graph cuts and a convolutional neural network (DfD-Net) to estimate depth from defocus blur using a natural (Middlebury) and a virtual (Maya) dataset. Graph cuts showed similar performance for both datasets in terms of NMAE and NRMSE; with regard to SSIM, however, its performance is 4% better for Middlebury than for Maya. We have trained DfD-Net using the natural dataset, the virtual dataset, and both combined; the network trained on the virtual dataset performed best for both datasets. The performance of graph cuts and DfD-Net has been compared: graph cuts is 7% better than DfD-Net in terms of SSIM for Middlebury images, while for Maya images DfD-Net outperforms graph cuts by 2%. With regard to NRMSE, the two show similar performance for Maya images; for Middlebury images, graph cuts is 1.8% better. The algorithms show no difference in performance in terms of NMAE. DfD-Net generates depth maps 500 times faster than graph cuts for Maya images and 200 times faster for Middlebury images.

Books on the topic "Defocus blur estimation"

1. Favaro, Paolo. 3-D shape estimation and image restoration: Exploiting defocus and motion blur. Springer, 2007.
2. Soatto, Stefano, and Paolo Favaro. 3-D Shape Estimation and Image Restoration: Exploiting Defocus and Motion-Blur. Springer, 2013.
3. Soatto, Stefano, and Paolo Favaro. 3-D Shape Estimation and Image Restoration: Exploiting Defocus and Motion-Blur. Springer, 2006.
4. Soatto, Stefano, and Paolo Favaro. 3-D Shape Estimation and Image Restoration. Springer, 2008.

Book chapters on the topic "Defocus blur estimation"

1. Carvalho, Marcela, Bertrand Le Saux, Pauline Trouvé-Peloux, Andrés Almansa, and Frédéric Champagnat. "Deep Depth from Defocus: How Can Defocus Blur Improve 3D Estimation Using Dense Neural Networks?" In Lecture Notes in Computer Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11009-3_18.
2. Gajjar, Ruchi, Tanish Zaveri, and Aastha Vaniawala. "Estimation of defocus blur radius using convolutional neural network." In Technologies for Sustainable Development. CRC Press, 2020. http://dx.doi.org/10.1201/9780429321573-45.

Conference papers on the topic "Defocus blur estimation"

1. Shi, Jianping, Li Xu, and Jiaya Jia. "Just noticeable defocus blur detection and estimation." In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2015. http://dx.doi.org/10.1109/cvpr.2015.7298665.
2. Pi, Futao, Yi Zhang, Gang Lu, and Baochuan Pang. "Defocus blur estimation from multi-scale gradients." In Fifth International Conference on Machine Vision (ICMV 12), edited by Yulin Wang, Liansheng Tan, and Jianhong Zhou. SPIE, 2013. http://dx.doi.org/10.1117/12.2012432.
3. Chen, Ching-Hui, Hui Zhou, and Timo Ahonen. "Blur-Aware Disparity Estimation from Defocus Stereo Images." In 2015 IEEE International Conference on Computer Vision (ICCV). IEEE, 2015. http://dx.doi.org/10.1109/iccv.2015.104.
4. Karaali, Ali, and Claudio Rosito Jung. "Adaptive scale selection for multiresolution defocus blur estimation." In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7025932.
5. Mahmoudpour, Saeed, and Manbae Kim. "Superpixel-based depth map estimation using defocus blur." In 2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016. http://dx.doi.org/10.1109/icip.2016.7532832.
6. Lee, Jongsu, Ahmed S. Fathi, and Sangseob Song. "Defocus blur estimation using a Cellular Neural Network." In 2010 12th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA 2010). IEEE, 2010. http://dx.doi.org/10.1109/cnna.2010.5430249.
7. Rajabzadeh, Tayebeh, Abedin Vahedian, and Hamidreza Pourreza. "Static Object Depth Estimation Using Defocus Blur Levels Features." In 2010 6th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM). IEEE, 2010. http://dx.doi.org/10.1109/wicom.2010.5600643.
8. Karaali, Ali, Claudio Rosito Jung, and Francois Pitie. "Temporal Consistency for Still Image Based Defocus Blur Estimation Methods." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451778.
9. Gajjar, Ruchi, and Tanish Zaveri. "Defocus blur parameter estimation using polynomial expression and signature based methods." In 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN). IEEE, 2017. http://dx.doi.org/10.1109/spin.2017.8049918.
10. Taketomi, Yuzo, Hiroshi Ikeoka, and Takayuki Hamamoto. "Depth estimation based on defocus blur using a single image taken by a tilted lens optics camera." In 2013 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS). IEEE, 2013. http://dx.doi.org/10.1109/ispacs.2013.6704583.