Selected scientific literature on the topic "Depth of field fusion"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other relevant scholarly sources on the topic "Depth of field fusion".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Journal articles on the topic "Depth of field fusion"

1

汪, 嘉欣. "A Fast Multi-Exposure Fusion Algorithm for Ultra Depth of Field Fusion." Modeling and Simulation 13, no. 03 (2024): 3797–806. http://dx.doi.org/10.12677/mos.2024.133346.

Full text of the source
2

Chen, Zhaoyu, Qiankun Liu, Ke Xu, and Xiaoyang Liu. "Weighted Fusion Method of Marine Gravity Field Model Based on Water Depth Segmentation." Remote Sensing 16, no. 21 (2024): 4107. http://dx.doi.org/10.3390/rs16214107.

Full text of the source
Abstract:
Among the marine gravity field models derived from satellite altimetry, the Scripps Institution of Oceanography (SIO) series and Denmark Technical University (DTU) series models are the most representative and are often used to integrate global gravity field models, which were inverted by the deflection of vertical method and sea surface height method, respectively. The fusion method based on the offshore distance used in the EGM2008 model is just model stitching, which cannot realize the true fusion of the two types of marine gravity field models. In the paper, a new fusion method based on water depth segmentation is proposed, which established the Precision–Depth relationship of each model in each water depth segment in the investigated area, then constructed the FUSION model by weighted fusion based on the precision predicted from the Precision–Depth relationship at each grid in the whole region. The application in the South China Sea shows that the FUSION model built by the new fusion method has better accuracy than SIO28 and DTU17, especially in shallow water and offshore areas. Within 20 km offshore, the RMS of the FUSION model is 5.10 mGal, which is 8% and 4% better than original models, respectively. Within 100 m of shallow water, the accuracy of the FUSION model is 4.01 mGal, which is 14% and 12% higher than the original models, respectively. A further analysis shows that the fusion model is in better agreement with the seabed topography than original models. The new fusion method can blend the effective information of original models to provide a higher-precision marine gravity field.
3

Wang, Shuzhen, Haili Zhao, and Wenbo Jing. "Fast all-focus image reconstruction method based on light field imaging." ITM Web of Conferences 45 (2022): 01030. http://dx.doi.org/10.1051/itmconf/20224501030.

Full text of the source
Abstract:
To achieve high-quality imaging of all focal planes with large depth of field information, a fast all-focus image reconstruction technology based on light field imaging is proposed: combining light field imaging to collect field of view information, and using light field reconstruction to obtain a multi-focus image source set, using the improved NSML image fusion method performs image fusion to quickly obtain an all-focus image with a large depth of field. Experiments have proved that this method greatly reduces the time consumed in the image fusion process by simplifying the calculation process of NSML, and improves the efficiency of image fusion. This method not only achieves excellent fusion image quality, but also improves the real-time performance of the algorithm.
4

Chen, Jiaxin, Shuo Zhang, and Youfang Lin. "Attention-based Multi-Level Fusion Network for Light Field Depth Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (2021): 1009–17. http://dx.doi.org/10.1609/aaai.v35i2.16185.

Full text of the source
Abstract:
Depth estimation from Light Field (LF) images is a crucial basis for LF related applications. Since multiple views with abundant information are available, how to effectively fuse features of these views is a key point for accurate LF depth estimation. In this paper, we propose a novel attention-based multi-level fusion network. Combined with the four-branch structure, we design intra-branch fusion strategy and inter-branch fusion strategy to hierarchically fuse effective features from different views. By introducing the attention mechanism, features of views with less occlusions and richer textures are selected inside and between these branches to provide more effective information for depth estimation. The depth maps are finally estimated after further aggregation. Experimental results shows the proposed method achieves state-of-the-art performance in both quantitative and qualitative evaluation, which also ranks first in the commonly used HCI 4D Light Field Benchmark.
5

Piao, Yongri, Miao Zhang, Xiaohui Wang, and Peihua Li. "Extended depth of field integral imaging using multi-focus fusion." Optics Communications 411 (March 2018): 8–14. http://dx.doi.org/10.1016/j.optcom.2017.10.081.

Full text of the source
6

Fu, Bangshao, Xunbo Yu, Xin Gao, et al. "Depth-of-field enhancement in light field display based on fusion of voxel information on the depth plane." Optics and Lasers in Engineering 183 (December 2024): 108543. http://dx.doi.org/10.1016/j.optlaseng.2024.108543.

Full text of the source
7

Oucherif, Sabrine Djedjiga, Mohamad Motasem Nawaf, Jean-Marc Boï, et al. "Enhancing Facial Expression Recognition through Light Field Cameras." Sensors 24, no. 17 (2024): 5724. http://dx.doi.org/10.3390/s24175724.

Full text of the source
Abstract:
In this paper, we study facial expression recognition (FER) using three modalities obtained from a light field camera: sub-aperture (SA), depth map, and all-in-focus (AiF) images. Our objective is to construct a more comprehensive and effective FER system by investigating multimodal fusion strategies. For this purpose, we employ EfficientNetV2-S, pre-trained on AffectNet, as our primary convolutional neural network. This model, combined with a BiGRU, is used to process SA images. We evaluate various fusion techniques at both decision and feature levels to assess their effectiveness in enhancing FER accuracy. Our findings show that the model using SA images surpasses state-of-the-art performance, achieving 88.13% ± 7.42% accuracy under the subject-specific evaluation protocol and 91.88% ± 3.25% under the subject-independent evaluation protocol. These results highlight our model’s potential in enhancing FER accuracy and robustness, outperforming existing methods. Furthermore, our multimodal fusion approach, integrating SA, AiF, and depth images, demonstrates substantial improvements over unimodal models. The decision-level fusion strategy, particularly using average weights, proved most effective, achieving 90.13% ± 4.95% accuracy under the subject-specific evaluation protocol and 93.33% ± 4.92% under the subject-independent evaluation protocol. This approach leverages the complementary strengths of each modality, resulting in a more comprehensive and accurate FER system.
8

Pu, Can, Runzi Song, Radim Tylecek, Nanbo Li, and Robert Fisher. "SDF-MAN: Semi-Supervised Disparity Fusion with Multi-Scale Adversarial Networks." Remote Sensing 11, no. 5 (2019): 487. http://dx.doi.org/10.3390/rs11050487.

Full text of the source
Abstract:
Refining raw disparity maps from different algorithms to exploit their complementary advantages is still challenging. Uncertainty estimation and complex disparity relationships among pixels limit the accuracy and robustness of existing methods and there is no standard method for fusion of different kinds of depth data. In this paper, we introduce a new method to fuse disparity maps from different sources, while incorporating supplementary information (intensity, gradient, etc.) into a refiner network to better refine raw disparity inputs. A discriminator network classifies disparities at different receptive fields and scales. Assuming a Markov Random Field for the refined disparity map produces better estimates of the true disparity distribution. Both fully supervised and semi-supervised versions of the algorithm are proposed. The approach includes a more robust loss function to inpaint invalid disparity values and requires much less labeled data to train in the semi-supervised learning mode. The algorithm can be generalized to fuse depths from different kinds of depth sources. Experiments explored different fusion opportunities: stereo-monocular fusion, stereo-ToF fusion and stereo-stereo fusion. The experiments show the superiority of the proposed algorithm compared with the most recent algorithms on public synthetic datasets (Scene Flow, SYNTH3, our synthetic garden dataset) and real datasets (Kitti2015 dataset and Trimbot2020 Garden dataset).
9

Gao, Yuxuan, Haiwei Zhang, Zhihong Chen, Lifang Xue, Yinping Miao, and Jiamin Fu. "Enhanced light field depth estimation through occlusion refinement and feature fusion." Optics and Lasers in Engineering 184 (January 2025): 108655. http://dx.doi.org/10.1016/j.optlaseng.2024.108655.

Full text of the source
10

De, Ishita, Bhabatosh Chanda, and Buddhajyoti Chattopadhyay. "Enhancing effective depth-of-field by image fusion using mathematical morphology." Image and Vision Computing 24, no. 12 (2006): 1278–87. http://dx.doi.org/10.1016/j.imavis.2006.04.005.

Full text of the source
More sources

Theses / dissertations on the topic "Depth of field fusion"

1

Duan, Jun Wei. "New regional multifocus image fusion techniques for extending depth of field." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3951602.

Full text of the source
2

Hua, Xiaoben, and Yuxia Yang. "A Fusion Model For Enhancement of Range Images." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2203.

Full text of the source
Abstract:
In this thesis, we would like to present a new way to enhance the “depth map” image, which is called the fusion of depth images. The goal of our thesis is to try to enhance the “depth images” through a fusion of different classification methods. For that, we will use three similar but different methodologies, the Graph-Cut, Super-Pixel and Principal Component Analysis algorithms, to solve the enhancement and output of our result. After that, we will compare the effect of the enhancement of our result with the original depth images. This result indicates the effectiveness of our methodology.
3

Ocampo, Blandon Cristian Felipe. "Patch-Based image fusion for computational photography." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0020.

Full text of the source
Abstract:
The most common computational techniques to deal with the limited high dynamic range and reduced depth of field of conventional cameras are based on the fusion of images acquired with different settings. These approaches require aligned images and motionless scenes; otherwise, ghost artifacts and irregular structures can arise after the fusion. The goal of this thesis is to develop patch-based techniques in order to deal with motion and misalignment for image fusion, particularly in the case of variable illumination and blur. In the first part of this work, we present a methodology for the fusion of bracketed exposure images for dynamic scenes. Our method combines a carefully crafted contrast normalization, a fast non-local combination of patches and different regularization steps. This yields an efficient way of producing contrasted and well-exposed images from hand-held captures of dynamic scenes, even in difficult cases (moving objects, non-planar scenes, optical deformations, etc.). In a second part, we propose a multifocus image fusion method that also deals with hand-held acquisition conditions and moving objects. At the core of our methodology, we propose a patch-based algorithm that corrects local geometric deformations by relying on both color and gradient orientations. Our methods were evaluated on common and new datasets created for the purpose of this work. From the experiments we conclude that our methods are consistently more robust than alternative methods to geometric distortions and illumination variations or blur. As a byproduct of our study, we also analyze the capacity of the PatchMatch algorithm to reconstruct images in the presence of blur and illumination changes, and propose different strategies to improve such reconstructions.
4

Ramirez, Hernandez Pavel. "Extended depth of field." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9941.

Full text of the source
Abstract:
In this thesis the extension of the depth of field of optical systems is investigated. The problem of achieving extended depth of field (EDF) while preserving the transverse resolution is also addressed. A new expression for the transport of intensity equation in the prolate spheroidal coordinates system is derived, with the aim of investigating the phase retrieval problem with applications to EDF. A framework for the optimisation of optical systems with EDF is also introduced, where the main motivation is to find an appropriate scenario that will allow a convex optimisation solution leading to global optima. The relevance in such approach is that it does not depend on the optimisation algorithms since each local optimum is a global one. The multi-objective optimisation framework for optical systems is also discussed, where the main focus is the optimisation of pupil plane masks. The solution for the multi-objective optimisation problem is presented not as a single mask but as a set of masks. Convex frameworks for this problem are further investigated and it is shown that the convex optimisation of pupil plane masks is possible, providing global optima to the optimisation problems for optical systems. Seven masks are provided as examples of the convex optimisation solutions for optical systems, in particular 5 pupil plane masks that achieve EDF by factors of 2, 2.8, 2.9, 4 and 4.3, including two pupil masks that besides of extending the depth of field, are super-resolving in the transverse planes. These are shown as examples of solutions to particular optimisation problems in optical systems, where convexity properties have been given to the original problems to allow a convex optimisation, leading to optimised masks with a global nature in the optimisation scenario.
5

Sikdar, Ankita. "Depth based Sensor Fusion in Object Detection and Tracking." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1515075130647622.

Full text of the source
6

Villarruel, Christina R. "Computer graphics and human depth perception with gaze-contingent depth of field /." Connect to online version, 2006. http://ada.mtholyoke.edu/setr/websrc/pdfs/www/2006/175.pdf.

Full text of the source
7

Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find the full text of the source
8

Botcherby, Edward J. "Aberration free extended depth of field microscopy." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:7ad8bc83-6740-459f-8c48-76b048c89978.

Full text of the source
Abstract:
In recent years, the confocal and two photon microscopes have become ubiquitous tools in life science laboratories. The reason for this is that both these systems can acquire three dimensional image data from biological specimens. Specifically, this is done by acquiring a series of two-dimensional images from a set of equally spaced planes within the specimen. The resulting image stack can be manipulated and displayed on a computer to reveal a wealth of information. These systems can also be used in time lapse studies to monitor the dynamical behaviour of specimens by recording a number of image stacks at a sequence of time points. The time resolution in this situation is, however, limited by the maximum speed at which each constituent image stack can be acquired. Various techniques have emerged to speed up image acquisition and in most practical implementations a single, in-focus, image can be acquired very quickly. However, the real bottleneck in three dimensional imaging is the process of refocusing the system to image different planes. This is commonly done by physically changing the distance between the specimen and imaging lens, which is a relatively slow process. It is clear with the ever-increasing need to image biologically relevant specimens quickly that the speed limitation imposed by the refocusing process must be overcome. This thesis concerns the acquisition of data from a range of specimen depths without requiring the specimen to be moved. A new technique is demonstrated for two photon microscopy that enables data from a whole range of specimen depths to be acquired simultaneously so that a single two dimensional scan records extended depth of field image data directly. This circumvents the need to acquire a full three dimensional image stack and hence leads to a significant improvement in the temporal resolution for acquiring such data by more than an order of magnitude. 
In the remainder of this thesis, a new microscope architecture is presented that enables scanning to be carried out in three dimensions at high speed without moving the objective lens or specimen. Aberrations introduced by the objective lens are compensated by the introduction of an equal and opposite aberration with a second lens within the system enabling diffraction limited performance over a large range of specimen depths. Focusing is achieved by moving a very small mirror, allowing axial scan rates of several kHz; an improvement of some two orders of magnitude. This approach is extremely general and can be applied to any form of optical microscope with the very great advantage that the specimen is not disturbed. This technique is developed theoretically and experimental results are shown that demonstrate its potential application to a broad range of sectioning methods in microscopy.
9

Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.

Full text of the source
Abstract:
In this work we study a depth prediction problem where we provide a narrow field of view depth image and a wide field of view RGB image to a deep network tasked with predicting the depth for the entire RGB image. We show that by providing a narrow field of view depth image, we improve results for the area outside the provided depth compared to an earlier approach only utilizing a single RGB image for depth prediction. We also show that larger depth maps provide a greater advantage than smaller ones and that the accuracy of the model decreases with the distance from the provided depth. Further, we investigate several architectures as well as study the effect of adding noise and lowering the resolution of the provided depth image. Our results show that models provided low resolution noisy data perform on par with the models provided unaltered depth.
10

Luraas, Knut. "Clinical aspects of Critical Flicker Fusion perimetry : an in-depth analysis." Thesis, Cardiff University, 2012. http://orca.cf.ac.uk/39684/.

Full text of the source
Abstract:
The thesis evaluated, in three studies, the clinical potential of Critical Flicker Fusion perimetry (CFFP) undertaken using the Octopus 311 perimeter. The influence of the learning effect on the outcome of CFFP was evaluated, in each eye at each of five visits each separated by one week, for 28 normal individuals naïve to perimetry, 10 individuals with ocular hypertension (OHT) and 11 with open angle glaucoma (OAG) all of whom were experienced in Standard Automated perimetry (SAP). An improvement occurred in the height, rather than in the shape, of the visual field and was largest for those with OAG. The normal individuals reached optimum performance at the third visit and those with OHT or with OAG at the fourth or fifth visits. The influence of ocular media opacity was investigated in 22 individuals with age-related cataract who were naïve to both SAP and CFFP. All individuals underwent both CFFP and SAP in each eye at each of four visits each separated by one week. At the third and fourth visit, glare disability (GD) was measured with 100% and 10% contrast EDTRS LogMAR visual acuity charts in the presence, and absence, of three levels of glare using the Brightness Acuity Tester. The visual field for CFF improved in height, only. Little correlation was present between the various measures of GD and the visual field, largely due to the narrow range of cataract severity. The influence of optical defocus for both CFFP and SAP was investigated, in one designated eye at each of two visits, in 16 normal individuals all of whom had taken part in the first study. Sensitivity for SAP declined with increase in defocus whilst that for CFFP increased. The latter was attributed to the influence of the Granit-Harper Law arising from the increased size of the defocused stimulus.
More sources

Books on the topic "Depth of field fusion"

1

Heyen, William. Depth of field: Poems. Carnegie Mellon University Press, 2005.

Find the full text of the source
2

Slattery, Dennis Patrick, and Lionel Corbett, eds. Depth psychology: Meditations from the field. Daimon, 2000.

Find the full text of the source
3

Cooper, Donal, Marika Leino, Henry Moore Institute (Leeds, England), and Victoria and Albert Museum, eds. Depth of field: Relief sculpture in Renaissance Italy. Peter Lang, 2007.

Find the full text of the source
4

Buch, Neeraj. Precast concrete panel systems for full-depth pavement repairs: Field trials. Office of Infrastructure, Office of Pavement Technology, Federal Highway Administration, U.S. Department of Transportation, 2007.

Find the full text of the source
5

Cocks, Geoffrey, James Diedrick, and Glenn Perusek, eds. Depth of field: Stanley Kubrick, film, and the uses of history. University of Wisconsin Press, 2006.

Find the full text of the source
6

Ruotoistenmäki, Tapio. Estimation of depth to potential field sources using the Fourier amplitude spectrum. Geologian tutkimuskeskus, 1987.

Find the full text of the source
7

Goss, Keith Michael. Multi-dimensional polygon-based rendering for motion blur and depth of field. Brunel University, 1986.

Find the full text of the source
8

Peñaranda, Victor Jose, Photobank Philippines Inc., and NCCP/Human Rights Desk, eds. Depth of field: Photographs of poverty, repression, and struggle in the Philippines. Photobank Philippines, 1987.

Find the full text of the source
9

Rantamäki, Karin. Particle-in-cell simulations of the near-field of a lower hybrid grill. VTT Technical Research Centre of Finland, 2003.

Find the full text of the source
10

Timmins 2003 Field Conference: Ore deposits at depth, challenges and opportunities. Technical sessions abstract volume and field trip guide. Canadian Institute of Mining and Metallurgy, 2003.

Find the full text of the source
More sources

Book chapters on the topic "Depth of field fusion"

1

Zhang, Yukun, Yongri Piao, Xinxin Ji, and Miao Zhang. "Dynamic Fusion Network for Light Field Depth Estimation." In Pattern Recognition and Computer Vision. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88007-1_1.

Full text of the source
2

Liu, Xinshi, Dongmei Fu, Chunhong Wu, and Ze Si. "The Depth Estimation Method Based on Double-Cues Fusion for Light Field Images." In Proceedings of the 11th International Conference on Modelling, Identification and Control (ICMIC2019). Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0474-7_67.

Full text of the source
3

Zheng, Kexun, Feifei Gan, Daiyao Zhao, Xiao Chen, Xianggang Liu, and Ning Zhang. "Research on Identification of Deep Leakage Channels in Karst Pumped Storage Reservoirs Based on Multi Field Data Fusion." In Lecture Notes in Civil Engineering. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-9184-2_21.

Full text of the source
Abstract:
The most prominent engineering geological problem of pumped storage power station reservoirs in karst areas is karst leakage, the development of karst leakage channels has a significant impact on the selection of reservoir locations, layout of engineering buildings, design of anti-seepage measures, and engineering costs. Therefore, the survey and evaluation of reservoir leakage channels are the foundation for the construction of pumped storage power station reservoirs in karst areas. The field analysis method plays an important role in karst leakage survey. Traditional karst groundwater field analysis methods, based on the representative indicators of each field measured and determined by experience, fail to fully reflect the temporal and spatial change information of each field indicator, and the data cannot be fully utilized and compared for verification. The multi field data fusion analysis method for karst groundwater proposed in this article comprehensively considers the relationship between measured field indicators and leakage sources, natural conditions, adjacent spaces, and different time field indicators, and obtains the characteristic values of the tracer index, background index, gradient index, and time series index of each field, and overlay calculation of single field comprehensive eigenvalues and multi field composite eigenvalues, which can realize the fusion of multiple fields and multiple indicators, amplify the abnormal location signal of seepage, and delineate the location of centralized seepage, so as to quantitatively determine the location information of the seepage channel of karst groundwater. This method is applied to the survey of karst leakage in the lower reservoir of a pumped storage power station in Guizhou Province, field data fusion analysis shows that there is an abnormal seepage field in the anti-seepage curtain line of the site, and there is good evidence for the temperature and conductivity field data.
There is a deep karst leakage channel in the reservoir; The burial depth of the channel is more than 170 m below the normal water level. The research results can provide support for subsequent anti-seepage methods and engineering treatments, as well as relevant engineering experience for other projects.
4

Gooch, Jan W. "Depth of Field." In Encyclopedic Dictionary of Polymers. Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-6247-8_3432.

Full text of the source
5

Atchison, David A., and George Smith. "Depth-of-Field." In Optics of the Human Eye, 2nd ed. CRC Press, 2023. http://dx.doi.org/10.1201/9781003128601-24.

Full text of the source
6

Kemp, Jonathan. "Depth of field." In Film on Video. Routledge, 2019. http://dx.doi.org/10.4324/9780429468872-6.

7

Ravitz, Jeff, and James L. Moody. "Depth of Field." In Lighting for Televised Live Events. Routledge, 2021. http://dx.doi.org/10.4324/9780429288982-11.

8

Cai, Ziyun, Yang Long, Xiao-Yuan Jing, and Ling Shao. "Adaptive Visual-Depth Fusion Transfer." In Computer Vision – ACCV 2018. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_4.

9

Turaev, Vladimir, and Alexis Virelizier. "Fusion categories." In Monoidal Categories and Topological Field Theory. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-49834-8_4.

10

Blaker, Alfred A. "Enough Depth, Enough Sharpness." In Applied Depth of Field. Routledge, 2024. http://dx.doi.org/10.4324/9781003565192-1.


Conference papers on the topic "Depth of field fusion"

1

Ma, Jinshan, and Changxiang Wang. "Image dehazing algorithm of transmittance fusion based on light field depth information." In 4th International Conference on Signal Image Processing and Communication, edited by Xianye Ben and Lei Chen. SPIE, 2024. http://dx.doi.org/10.1117/12.3041001.

2

Tan, Feng, Huiping Deng, Sen Xiang, and Jin Wu. "Light field depth estimation based on fusion of multi-scale semantic and geometric information." In 2024 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2024. https://doi.org/10.1109/vcip63160.2024.10849895.

3

Qiao, Lingfeng, Guodong Liu, Cheng Lu, and Binghui Lu. "Research on Target Localization Technology based on Depth of field fusion of generative adversarial networks." In 2024 IEEE International Conference on Mechatronics and Automation (ICMA). IEEE, 2024. http://dx.doi.org/10.1109/icma61710.2024.10633036.

4

Liu, Qiankun, Zhaoyu Chen, and Ke Xu. "Marine Gravity Field Model Fusion Method Based on Water Depth: A Case Study of the South China Sea." In IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024. http://dx.doi.org/10.1109/igarss53475.2024.10642576.

5

Tondo, Gledson Rodrigo, Guido Morgenthal, and Charles Riley. "Portable depth sensing with the iPhone time-of-flight LiDAR." In IABSE Congress, San José 2024: Beyond Structural Engineering in a Changing World. International Association for Bridge and Structural Engineering (IABSE), 2024. https://doi.org/10.2749/sanjose.2024.0937.

Abstract:
Modern smartphones offer diverse features, including ample storage, wireless data transfer, and various sensors, making them valuable for structural data collection. This study investigates the iPhone's LiDAR system for depth data collection, defining its field of view and assessing its performance for static and dynamic targets. We analyse limitations such as phone-to-target distance and noise properties. Measurement comparisons with a laser displacement transducer are conducted under different conditions to characterise the sensor's properties. Discussions on the results include insights into Apple's AI-based sensor fusion framework, which enhances data resolution but potentially compromises accuracy in dynamic measurements. We demonstrate the system's practicality through modal analysis of a steel cantilever, revealing potential for bridge inspection and autonomous structural diagnostics via non-contact vibration sensing.
6

Paakkonen, Scott T., Samuel F. Lockwood, Daniel H. Pope, Valerie G. Horner, Edgar A. Morris, and Daniel P. Werner. "The Role of Coatings and Cathodic Protection in Microbiologically Influenced Corrosion." In CORROSION 1993. NACE International, 1993. https://doi.org/10.5006/c1993-93293.

Abstract:
A field study was performed to assess the influence of local soil conditions (biological and chemical) on the ability to achieve "adequate" cathodic protection and reduce corrosion, including microbiologically influenced corrosion (MIC). The performance of four coatings was also assessed. Two sites were chosen based on inherent differences in soil corrosiveness. Site A was adjacent to a pipeline site at which MIC was implicated in the corrosion process leading to a failure. Site B was a site adjacent to the same pipeline (one-quarter mile from Site A) that had shown significantly less corrosion. The test matrix consisted of four coating groups [fusion bonded epoxy (FBE), polyethylene-backed tape (PBT), coal tar enamel (CTE), and bare] and three levels of cathodic protection (native potential, -0.85 volts, and -1.2 volts vs. Cu/CuSO4) at Sites A and B. Intentional holidays were created in the coatings of the coupons, which were placed at pipe depth and cathodically maintained for seven months. The results showed that levels of chemical species important in the MIC process were higher in corrosion products than in the nearby soil. Differences in pitting corrosion were shown to be due to an interaction between local soil conditions and cathodic protection level. A threefold increase in mean maximum pit depth was observed on native-potential coupons at Site A compared to Site B. Increased levels of cathodic protection reduced the mean maximum pit depth at both sites; however, pits at Site A were significantly deeper than those at Site B at both -0.85 volts and -1.2 volts vs. Cu/CuSO4. With the exception of iron, no significant chemical differences were observed between soils from the two sites. Levels of sulfate-reducing bacteria (SRB) were higher in soils from Site A than Site B, and also higher in corrosion products from Site A coupons than from Site B coupons. High levels of acid-producing bacteria (APB) and aerobes were found in soil and corrosion product samples from both sites. All coatings performed well, though some differences in current requirements were observed between coating groups.
7

Chai, Ruimin, and Mengjiao Du. "Multiscale feature fusion-based monocular depth estimation." In 4th International Conference on Electronic Information Engineering and Data Processing (EIEDP 2025), edited by Azlan Bin Mohd Zain and Lei Chen. SPIE, 2025. https://doi.org/10.1117/12.3067068.

8

Li, Han, Yukai Ma, Yaqing Gu, Kewei Hu, Yong Liu, and Xingxing Zuo. "RadarCam-Depth: Radar-Camera Fusion for Depth Estimation with Learned Metric Scale." In 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024. http://dx.doi.org/10.1109/icra57147.2024.10610929.

9

Song, Tieshuai, Bin Yang, Jun Wang, Guidong He, Zhao Dong, and Fengjun Zhong. "mmWave Radar and Image Fusion for Depth Completion: a Two-Stage Fusion Network." In 2024 27th International Conference on Information Fusion (FUSION). IEEE, 2024. http://dx.doi.org/10.23919/fusion59988.2024.10706480.

10

Hilmarsen, Henrik, Nicholas Dalhaug, Trym Anthonsen Nygård, Edmund Førland Brekke, Rudolf Mester, and Annette Stahl. "Maritime Tracking-By-Detection with Object Mask Depth Retrieval Through Stereo Vision and Lidar*." In 2024 27th International Conference on Information Fusion (FUSION). IEEE, 2024. http://dx.doi.org/10.23919/fusion59988.2024.10706307.


Reports by organizations on the topic "Depth of field fusion"

1

McLean, William E. ANVIS Objective Lens Depth of Field. Defense Technical Information Center, 1996. http://dx.doi.org/10.21236/ada306571.

2

Al-Mutawaly, Nafia, Hubert de Bruin, and Raymond D. Findlay. Magnetic Nerve Stimulation: Field Focality and Depth of Penetration. Defense Technical Information Center, 2001. http://dx.doi.org/10.21236/ada411028.

3

Peng, Y. K. M. Spherical torus, compact fusion at low field. Office of Scientific and Technical Information (OSTI), 1985. http://dx.doi.org/10.2172/6040602.

4

Paul, A. C., and V. K. Neil. Fixed Field Alternating Gradient recirculator for heavy ion fusion. Office of Scientific and Technical Information (OSTI), 1991. http://dx.doi.org/10.2172/5828376.

5

Cathey, W. T., Benjamin Braker, and Sherif Sherif. Analysis and Design Tools for Passive Ranging and Reduced-Depth-of-Field Imaging. Defense Technical Information Center, 2003. http://dx.doi.org/10.21236/ada417814.

6

Kramer, G. J., R. Nazikian, and E. Valeo. Correlation Reflectometry for Turbulence and Magnetic Field Measurements in Fusion Plasmas. Office of Scientific and Technical Information (OSTI), 2002. http://dx.doi.org/10.2172/808282.

7

Claycomb, William R., Roy Maxion, Jason Clark, et al. Deep Focus: Increasing User "Depth of Field" to Improve Threat Detection (Oxford Workshop Poster). Defense Technical Information Center, 2014. http://dx.doi.org/10.21236/ada610980.

8

Boots, Byron, Jacob Sacks, Kevin Choi, et al. Downhole Sensing and Event-Driven Sensor Fusion for Depth-of-Cut Based Autonomous Fault Response and Drilling Optimization. Office of Scientific and Technical Information (OSTI), 2023. http://dx.doi.org/10.2172/2430188.

9

Grabowski, Theodore C. Directed Energy HPM, PP, & PPS Efforts: Magnetized Target Fusion - Field Reversed Configuration. Defense Technical Information Center, 2006. http://dx.doi.org/10.21236/ada460910.

10

Hasegawa, Akira, and Liu Chen. A D-He³ fusion reactor based on a dipole magnetic field. Office of Scientific and Technical Information (OSTI), 1989. http://dx.doi.org/10.2172/5819503.
