Academic literature on the topic 'Multi-Light Image Collections'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-Light Image Collections.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multi-Light Image Collections"

1

Fattal, Raanan, Maneesh Agrawala, and Szymon Rusinkiewicz. "Multiscale shape and detail enhancement from multi-light image collections." ACM Transactions on Graphics 26, no. 3 (2007): 51. http://dx.doi.org/10.1145/1276377.1276441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pintus, Ruggero, Alberto Jaspe Villanueva, Antonio Zorcolo, Markus Hadwiger, and Enrico Gobbetti. "A practical and efficient model for intensity calibration of multi-light image collections." Visual Computer 37, no. 9-11 (2021): 2755–67. http://dx.doi.org/10.1007/s00371-021-02172-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pintus, R., T. G. Dulecha, I. Ciortan, E. Gobbetti, and A. Giachetti. "State‐of‐the‐art in Multi‐Light Image Collections for Surface Visualization and Analysis." Computer Graphics Forum 38, no. 3 (2019): 909–34. http://dx.doi.org/10.1111/cgf.13732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ströbel, Bernhard, Sebastian Schmelzle, Nico Blüthgen, and Michael Heethoff. "An automated device for the digitization and 3D modelling of insects, combining extended-depth-of-field and all-side multi-view imaging." ZooKeys 759 (May 17, 2018): 1–27. http://dx.doi.org/10.3897/zookeys.759.24584.

Full text
Abstract:
Digitization of natural history collections is a major challenge in archiving biodiversity. In recent years, several approaches have emerged, allowing either automated digitization, extended depth of field (EDOF) or multi-view imaging of insects. Here, we present DISC3D: a new digitization device for pinned insects and other small objects that combines all these aspects. A PC and a microcontroller board control the device. It features a sample holder on a motorized two-axis gimbal, allowing the specimens to be imaged from virtually any view. Ambient, mostly reflection-free illumination is ascertained by two LED-stripes circularly installed in two hemispherical white-coated domes (front-light and back-light). The device is equipped with an industrial camera and a compact macro lens, mounted on a motorized macro rail. EDOF images are calculated from an image stack using a novel calibrated scaling algorithm that meets the requirements of the pinhole camera model (a unique central perspective). The images can be used to generate a calibrated and real color texturized 3D model by ‘structure from motion’ with a visibility consistent mesh generation. Such models are ideal for obtaining morphometric measurement data in 1D, 2D and 3D, thereby opening new opportunities for trait-based research in taxonomy, phylogeny, eco-physiology, and functional ecology.
APA, Harvard, Vancouver, ISO, and other styles
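The extended-depth-of-field step described in the abstract above can be illustrated with a toy focus-stacking routine: for each pixel, keep the value from the stack slice with the highest local gradient energy. This is a generic sketch with illustrative function names and a simple sharpness measure of our own choosing, not the calibrated scaling algorithm the authors describe:

```python
import numpy as np

def local_sharpness(img, k=1):
    # Gradient-energy sharpness: squared finite differences,
    # box-summed over a (2k+1)^2 neighbourhood.
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = np.diff(img, axis=1)
    gy[1:, :] = np.diff(img, axis=0)
    energy = gx**2 + gy**2
    # box filter by summing shifted copies of an edge-padded array
    p = np.pad(energy, k, mode='edge')
    out = np.zeros_like(energy)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out += p[dy:dy + energy.shape[0], dx:dx + energy.shape[1]]
    return out

def fuse_focus_stack(stack):
    # stack: (n, H, W) array of differently focused exposures of one scene.
    sharp = np.stack([local_sharpness(s) for s in stack])
    best = np.argmax(sharp, axis=0)  # per-pixel index of the sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A real pipeline would additionally compensate for the magnification change between focus steps (the paper's calibrated scaling) before fusing.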
5

Bai, Nan, Anran Yang, Hao Chen, and Chun Du. "SatGS: Remote Sensing Novel View Synthesis Using Multi-Temporal Satellite Images with Appearance-Adaptive 3DGS." Remote Sensing 17, no. 9 (2025): 1609. https://doi.org/10.3390/rs17091609.

Full text
Abstract:
Novel view synthesis of remote sensing scenes from satellite images is a meaningful but challenging task. Due to the wide temporal span of image acquisition, satellite image collections often exhibit significant appearance variations, such as seasonal changes and shadow movements, as well as transient objects, making it difficult to reconstruct the original scene accurately. Previous work has noted that a large amount of image variation in satellite images is caused by changing light conditions. To address this, researchers have proposed incorporating the direction of solar rays into neural radiance fields (NeRF) to model the amount of sunlight reaching each point in the scene. However, this approach fails to effectively account for seasonal variations and suffers from a long training time and slow rendering speeds due to the need to evaluate numerous samples from the radiance field for each pixel. To achieve fast, efficient, and high-quality novel view synthesis for multi-temporal satellite scenes, we propose SatGS, a novel method that leverages 3D Gaussian points for scene reconstruction with an appearance-adaptive adjustment strategy. This strategy enables our model to adaptively adjust the seasonal appearance features and shadow regions of the rendered images based on the appearance characteristics of the training images and solar angles. Additionally, the impact of transient objects is mitigated through the use of visibility maps and uncertainty optimization. Experiments conducted on WorldView-3 images demonstrate that SatGS not only renders superior image quality compared to existing state-of-the-art methods but also surpasses them in rendering speed, showcasing its potential for practical applications in remote sensing.
APA, Harvard, Vancouver, ISO, and other styles
6

Salili-James, Arianna, Ben Scott, Laurence Livermore, et al. "AI-Accelerated Digitisation of Insect Collections: The next generation of Angled Label Image Capture Equipment (ALICE)." Biodiversity Information Science and Standards 7 (September 15, 2023): e112742. https://doi.org/10.3897/biss.7.112742.

Full text
Abstract:
The digitisation of natural science specimens is a shared ambition of many of the largest collections, but the scale of these collections, estimated at at least 1.1 billion specimens (Johnson et al. 2023), continues to challenge even the most resource-rich organisations. The Natural History Museum, London (NHM) has been pioneering work to accelerate the digitisation of its 80 million specimens. Since the inception of the NHM Digital Collection Programme in 2014, more than 5.5 million specimen records have been made digitally accessible. This has enabled the museum to deliver a tenfold increase in digitisation compared to when rates were first measured by the NHM in 2008. Even with this investment, it will take circa 150 years to digitise the remaining collections, leading the museum to pursue technology-led solutions alongside increased funding to deliver the next increase in digitisation rate. Insects comprise approximately half of all described species and, at the NHM, represent more than one-third (c. 30 million specimens) of the museum's overall collection. Their most common preservation method, attached to a pin alongside a series of labels with metadata, makes insect specimens challenging to digitise. Early Artificial Intelligence (AI)-led innovations (Price et al. 2018) resulted in the development of ALICE, the museum's Angled Label Image Capture Equipment, in which a pinned specimen is placed inside a multi-camera setup that captures a series of partial views of the specimen and its labels. Centred around the pin, these images can be digitally combined and reconstructed, using the accompanying ALICE software, to provide a clean image of each label. To do this, a Convolutional Neural Network (CNN) model is incorporated to locate all labels within the images. This is followed by various image-processing tools to transform the labels into a two-dimensional viewpoint, align the associated label images, and merge them into one label.
This allows users to extract label data from the processed label images manually or computationally, e.g., using Optical Character Recognition (OCR) tools (Salili-James et al. 2022). With the ALICE setup, a user might image an average of 800 digitised specimens per day and, exceptionally, up to 1,300. This compares with an average of 250 specimens or fewer daily using more traditional methods, which involve separating the labels and photographing them off the pin. Despite this, the original version of ALICE was only suited to a small subset of the collection: when the specimen is very large, there are too many labels, or the labels are too close together, ALICE fails (Dupont and Price 2019). Using a combination of updated AI processing tools, we hereby present ALICE version 2. This new version provides faster rates, improved software accuracy, and a more streamlined pipeline. It includes the following updates. Hardware: after conducting various tests, we have optimised the camera setup; further hardware updates include a Light-Emitting Diode (LED) ring light, as well as modifications to the camera mounting. Software: our latest software incorporates machine learning and other computer vision tools to segment labels from ALICE images and stitch them together more quickly and with a higher level of accuracy, significantly reducing the image-processing failure rate; these processed label images can be combined with the latest OCR tools for automatic transcription and data segmentation. Buildkit: we aim to provide a toolkit that any individual or institution can incorporate into their digitisation pipeline, including hardware instructions, an extensive guide detailing the pipeline, and new software code accessible via GitHub. We provide test data and workflows to demonstrate the potential of ALICE version 2 as an effective, accessible, and cost-saving solution to digitising pinned insect specimens.
We also describe potential modifications, enabling it to work with other types of specimens.
APA, Harvard, Vancouver, ISO, and other styles
7

Hereld, Mark, and Nicola Ferrier. "LightningBug ONE: An experiment in high-throughput digitization of pinned insects." Biodiversity Information Science and Standards 3 (June 19, 2019): e37228. https://doi.org/10.3897/biss.3.37228.

Full text
Abstract:
Digital technology presents us with new and compelling opportunities for discovery when focused on the world's natural history collections. The outstanding barrier to applying existing and forthcoming computational methods for large-scale study of this important resource is that it is (largely) not yet in the digital realm. Without development of new and much faster methods for digitizing objects in these collections, it will be a long time before these data are available in digital form. For example, methods that are currently employed for capturing, cataloguing, and indexing pinned insect specimen data will require many tens of years or more to process collections with millions of dry specimens, and so we need to develop a much faster pipeline. In this paper we describe a capture system capable of collecting and archiving the imagery necessary to digitize a collection of circa 4.5 million specimens in one or two years of production operation. To minimize the time required to digitize each specimen, we have proposed (Hereld et al. 2017) developing multi-camera systems to capture the pinned insect and its accompanying labels from many angles in a single exposure. Using a sampling (21 randomly drawn drawers, totalling 5178 insects) of the 4.5 million specimens in the collection at the Field Museum of Natural History, we estimated that a large fraction of that collection (97.6% +/- 2.2%) consists of pinned insects with labels that are visible from one angle or another without requiring adjustment or removal of elements on the pin. In this situation a multi-camera system with enough angular coverage could provide imagery for reconstructing virtual labels from fragmentary views taken from different directions. Agarwal et al. (2018) demonstrated a method for combining these multiple views into a virtual label that could be transcribed by automated optical character recognition software.
We have now designed, built, and tested a prototype snapshot 3D digitization station to allow rapid capture of multi-view imagery for automated capture of pinned insect specimens and labels. It consists of twelve very small and light 8-megapixel cameras (Fig. 1), each controlled by a small dedicated computer. The cameras are arrayed around the target volume, six on each side of the sample feed path. Their positions and orientations are fixed by a 3D-printed scaffolding designed for the purpose. The twelve camera controllers and a master computer are connected to a dedicated high-speed data network over which all of the coordinating control signals and returning images and metadata are passed. The system is integrated with a high-performance object store that includes a database for metadata and the archived images comprising each snapshot. The system is designed so that it can be readily extended to include additional or different sensors. The station is meant to be fed with specimens by a conveyor belt whose motion is coordinated with the exposure of the multi-view snapshots. In order to test the performance of the system we added a recirculating specimen feeder designed expressly for this experiment. With it integrated into the system in place of a conventional conveyor belt, we are able to provide a continuous stream of targets for the digitization system to facilitate long tests of its performance and robustness. We demonstrated the ability to capture data at a peak rate of 1400 specimens per hour and an average rate of 1000 specimens per hour over the course of a sustained 6-hour run. The dataset (Hereld and Ferrier 2018) collected in this experiment provides fodder for the further development of algorithms for the offline reconstruction and automatic transcription of the label contents.
APA, Harvard, Vancouver, ISO, and other styles
8

Jin, Naiyun, Tingting Hu, Lei Shu, et al. "A Crop Growth Information Collection System Based on a Solar Insecticidal Lamp." Electronics 14, no. 2 (2025): 370. https://doi.org/10.3390/electronics14020370.

Full text
Abstract:
To overcome the challenges during the crop growth process, e.g., pest infestation, inadequate environmental monitoring, and poor intelligence, this study proposes a crop growth information collection system based on a solar insecticidal lamp. The system comprises two primary modules: (1) an environmental information collection module, and (2) a multi-view image collection module. The environmental information collection module acquires crucial parameters, e.g., temperature, relative humidity, light intensity, soil conductivity, nitrogen, phosphorus, potassium content, and pH, by means of various sensors. Simultaneously, the multi-view image collection module employs three industrial cameras to capture images of the crop from the top, left, and right perspectives. The system is developed on the ESP32-S3 platform. WiFi-Mesh wireless communication technology is adopted to achieve high-frequency, real-time data transmission. Additionally, visualization software has been developed for real-time data display, data storage, and dynamic curve plotting. Field verification indicates that the proposed system effectively meets the requirements of pest control and crop growth information collection, which provides substantial support for the advancement of smart agriculture.
APA, Harvard, Vancouver, ISO, and other styles
9

Malinovski, I., I. B. Couceiro, A. D. Alvarenga, et al. "Digital enhancement of OCT images toward better diagnostic of precursor cancer lesions in the uterine colon." Journal of Physics: Conference Series 2606, no. 1 (2023): 012021. http://dx.doi.org/10.1088/1742-6596/2606/1/012021.

Full text
Abstract:
We present a combination of image processing operations which improve the quality of OCT images. The sequence is: (1) Gaussian blur, (2) Enhance Local Contrast (CLAHE), (3) FFT-based band-pass filter, (4) Background Subtraction, (5) adjustment of the image contrast with light saturation (Enhance Contrast), and finally (6) reassignment of the palette so as to enhance the bright part of the images. The multi-step treatment reduces pixel-to-pixel noise, improves the homogeneity of the contrast, suppresses vertical strips, reduces cosmetic distractions, equalizes contrast across a collection of images, and uses the palette to emphasise areas of greater intensity. The enhanced images show a significant improvement in their quality, with details that do not appear in the original OCT image. The processing of these images and the comparison with untreated OCT images are shown throughout the paper.
APA, Harvard, Vancouver, ISO, and other styles
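The numbered enhancement sequence in the abstract above can be approximated in a few lines of NumPy. The sketch below covers steps (1), (3)/(4) (a radial band-pass that removes the slow background in the same pass), and (5); CLAHE and the palette reassignment are omitted for brevity, and all names, kernel sizes, and cutoffs are illustrative assumptions rather than the authors' settings:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    # (1) separable Gaussian blur with reflect padding
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    p = np.pad(img, ((0, 0), (r, r)), mode='reflect')
    img = np.stack([np.convolve(row, k, 'valid') for row in p])
    p = np.pad(img, ((r, r), (0, 0)), mode='reflect')
    return np.stack([np.convolve(col, k, 'valid') for col in p.T]).T

def bandpass(img, lo=2, hi=20):
    # (3)+(4) keep radial frequencies in [lo, hi): suppresses the slowly
    # varying background and residual high-frequency noise in one step
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[-h//2:h - h//2, -w//2:w - w//2]
    rad = np.hypot(yy, xx)
    F *= (rad >= lo) & (rad < hi)
    return np.fft.ifft2(np.fft.ifftshift(F)).real

def enhance(img):
    img = gaussian_blur(img.astype(float), 1.0)
    img = bandpass(img)
    # (5) contrast stretch with 1% saturation at both ends
    lo, hi = np.percentile(img, [1, 99])
    return np.clip((img - lo) / (hi - lo + 1e-9) * 255.0, 0.0, 255.0)
```

For the omitted CLAHE step, scikit-image's `exposure.equalize_adapthist` is a common off-the-shelf choice.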
More sources

Dissertations / Theses on the topic "Multi-Light Image Collections"

1

Dulecha, Tinsae Gebrechristos. "Surface analysis and visualization from multi-light image collections." Doctoral thesis, 2021. http://hdl.handle.net/11562/1043402.

Full text
Abstract:
Multi-Light Image Collections (MLICs) are stacks of photos of a scene acquired from a fixed viewpoint under varying illumination; they provide large amounts of visual and geometric information. Over the last decades, a wide variety of methods have been devised to extract information from MLICs, with demonstrated use in different application domains to support daily activities. In this thesis, we present methods that leverage MLICs for surface analysis and visualization. First, we provide background information: acquisition setups, light calibration, and application areas where MLICs have been successfully used to support research and daily analysis work. Next, we discuss the use of MLICs for surface visualization and analysis and the tools available to support that analysis: methods that support direct exploration of the captured MLIC, methods that generate relightable models from an MLIC, non-photorealistic visualization methods that rely on MLICs, and methods that estimate normal maps from MLICs, along with the visualization tools used for MLIC analysis. In chapter 3 we propose novel benchmark datasets (RealRTI, SynthRTI and SynthPS) that can be used to evaluate algorithms that rely on MLICs, and we discuss available benchmarks for the validation of photometric algorithms that can also be used to validate other MLIC-based algorithms. In chapter 4, we evaluate the performance of different photometric stereo algorithms using SynthPS for cultural heritage applications; RealRTI and SynthRTI have been used to evaluate the performance of (Neural)RTI methods. Then, in chapter 5, we present a neural-network-based RTI method, NeuralRTI, a framework for pixel-based encoding and relighting of RTI data.
Using a simple autoencoder architecture, we show that it is possible to obtain a highly compressed representation that better preserves the original information and provides increased quality of virtual images relit from novel directions, particularly in the case of challenging glossy materials. Finally, in chapter 6, we present a method for detecting cracks on the surface of paintings from multi-light image acquisitions, which can also be applied to single images, and conclude our presentation.
APA, Harvard, Vancouver, ISO, and other styles
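NeuralRTI, mentioned in the thesis abstract above, replaces a classical per-pixel fit with a learned autoencoder. For orientation, here is a sketch of the classical baseline it is typically compared against: a Polynomial Texture Map (PTM), where each pixel's intensity is modelled as a 6-term biquadratic polynomial of the light direction, fitted by least squares over the MLIC stack and evaluated for a novel light. Function names are illustrative, not from the thesis:

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """Fit a 6-term Polynomial Texture Map per pixel.
    images: (n, H, W) grayscale MLIC stack; light_dirs: (n, 3) unit vectors."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # biquadratic basis in the projected light direction (lu, lv)
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    n, h, w = images.shape
    coeffs, *_ = np.linalg.lstsq(A, images.reshape(n, -1), rcond=None)
    return coeffs.reshape(6, h, w)

def relight(coeffs, light_dir):
    # Evaluate the fitted polynomial for a novel light direction.
    lu, lv = light_dir[0], light_dir[1]
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return np.tensordot(basis, coeffs, axes=1)
```

The NeuralRTI idea is to swap this fixed polynomial basis for a per-pixel code learned by an autoencoder, which handles glossy materials that a low-order polynomial smooths over.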

Conference papers on the topic "Multi-Light Image Collections"

1

Fattal, Raanan, Maneesh Agrawala, and Szymon Rusinkiewicz. "Multiscale shape and detail enhancement from multi-light image collections." In ACM SIGGRAPH 2007 papers. ACM Press, 2007. http://dx.doi.org/10.1145/1275808.1276441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Maxey, L. C., and B. J. Hilson. "A Deterministic Method for Aligning Multiple Optical Waveguides to a Paraboloidal Collector." In ASME 2003 International Solar Energy Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/isec2003-44017.

Full text
Abstract:
For solar lighting systems employing fiber optic waveguides [1, 2] to conduct the collected light, paraboloidal mirrors are the preferred reflector choice. To achieve optimum performance in systems with relatively small collection apertures, both the quality of the mirror and the quality of optical system alignment must be well controlled. In systems employing multiple waveguides with a single paraboloid, the focus of the paraboloid must be segmented into several separate focal points directed into individual fibers. Each waveguide entrance aperture must be accurately co-located with its designated focal point so that the image that is formed on the fiber will have the fewest possible aberrations and thus, the smallest possible focused spot size. Two methods for aligning individual optical waveguides in a multi-aperture paraboloidal collection system are described. The first method employs a commercially available collimation tester to incrementally improve the alignment. The second, a deterministic method, employs a cube corner retro-reflector and an easily constructed imaging system to reliably align the fibers to their respective segments of the parent paraboloid. The image of the focused spot formed by the light that is returned from the retro-reflector reveals alignment information that is easily interpreted to enable pitch, yaw and focus errors to be systematically removed. This ensures that the alignment of the system is optimized to reduce aberrations prior to final adjustment of the system “on-sun”.
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Chen, Nan Li, and Xiaoyu Xiao. "Inclusive design strategies based on a human factors perspective: integrating urban color in the regional cultural diversity of multi-ethnic cities." In 16th International Conference on Applied Human Factors and Ergonomics (AHFE 2025). AHFE International, 2025. https://doi.org/10.54941/ahfe1006619.

Full text
Abstract:
The national culture of a city is a symbol of its unique identity, which profoundly affects the residents' sense of identity and belonging. Colour, as a visual expression of urban cultural landscape, not only shapes the visual image of the city, but also guides the residents' emotional experience and behavioural patterns in an invisible way. With the rapid development of cities, the phenomenon of serious loss of urban characteristics and one-sidedness of a thousand cities has become more and more prominent. This poses a challenge to the psychological feelings of urban residents and the diversity of social culture. Through the collection and research of ethnic colours in Hohhot City, it is found that how to make the city colours can reflect the cultural characteristics of multi-ethnic integration, and at the same time meet the psychological needs and aesthetic preferences of the residents, is a problem that needs to be solved by the current urban colour planning. This thesis aims to study the development profile of Hohhot and extract the current status of urban colour through literature, questionnaire survey, field research, etc., and quantitatively analyse the colour information by using the NCS colour system and the self-developed Java NCS colour analysis data program to determine the overall positioning of urban colour in Hohhot. In the planning process, we fully consider the human factor, from the residents' emotional experience, cognitive habits and behavioural patterns, and combine the urban colour planning with the city's functional layout, architectural style, landscape design, etc., to form a colour planning system from macro to micro, and from general to specific. Through the research method and based on the research of multi-ethnic urban architectural planning, the regional urban colour characteristics of Hohhot are finally summarized. 
The study analyzed the minority architectural colours in the main urban area of Hohhot and obtained the colour chart of the ethnic city, composed of greenish grey, light grey, white and other colours: from the grey landscape area in the western part, with low luminance and low purity, to the light grey landscape area in the eastern part, with high luminance and low purity, forming a rhythmic change of the urban colours in luminance and purity. These colours not only meet the residents' aesthetic preferences, but also give them a sense of tranquility, comfort and belonging psychologically. Urban colour planning from the perspective of human factors can not only shape the unique image of the city, but also improve the residents' quality of life and the city's social and cultural diversity. In future urban development, more attention should be paid to combining national cultural characteristics with modern elements, to achieve the organic integration and innovative development of traditional national culture and modern urban culture, and to provide urban residents with a more colourful and culturally rich urban colour environment.
APA, Harvard, Vancouver, ISO, and other styles
4

Farahani, Saba A., Jae Yun Lee, Hunjae Kim, and Yoonjin Won. "Predictive Machine Learning Models for LiDAR Sensor Reliability in Autonomous Vehicles." In ASME 2024 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems. American Society of Mechanical Engineers, 2024. http://dx.doi.org/10.1115/ipack2024-141038.

Full text
Abstract:
The emergence of autonomous vehicles marks a transformative moment in the transportation sector, significantly propelled by the integration of Light Detection and Ranging (LiDAR) technology. LiDAR revolutionizes how vehicles perceive their environment, emitting laser beams to measure distances to objects and creating highly accurate three-dimensional maps. This innovation is pivotal for enhancing the operational efficiency and safety of autonomous vehicles by providing instantaneous and detailed environmental mappings. However, the efficacy of LiDAR sensors is compromised by environmental factors such as dust, dirt, snow, and rain, which can severely affect their accuracy and, consequently, the safety of the vehicles they guide. Despite the importance of this issue, the research field has shown a notable lack of investigation into predicting LiDAR sensor contamination and its reliability. This gap underscores a critical need for dedicated research efforts to ensure the reliability and safety of autonomous driving technologies, making it a pressing challenge for researchers and developers alike. Therefore, it is imperative to develop a novel way to assess LiDAR sensor reliability, such as contamination level. In response to the urgent need to address LiDAR sensor contamination, our team has initiated a comprehensive data collection journey, spanning from California's diverse climates to Michigan's variable weather conditions. The dataset contains multi-level features such as contamination levels on multiple sensors, environmental factors, and sensor images across varying weather conditions and geographical locales. The dataset is a groundbreaking contribution to the field, playing a vital role in developing our machine learning models and offering novel insights that could dramatically advance research and practical applications.
In this work, our framework employs machine learning methods across two distinct but complementary strategies: sensor data analysis and image data analysis. The first strategy indirectly predicts contamination levels by leveraging environmental variables, such as temperature, humidity, precipitation, and vehicle speed. This indirect method represents a novel approach to assessing sensor contamination, offering a significant advantage by allowing contamination risks to be predicted before they impact sensor performance. The second strategy directly predicts contamination levels by visually assessing LiDAR sensors under various weather conditions. The integrated strategy performs better at predicting contamination levels, which helps ensure safer navigation in complex environments. Together, by introducing a novel dataset and predictive machine learning models, our framework shows the potential to address a critical gap in existing research.
APA, Harvard, Vancouver, ISO, and other styles
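The first strategy in the abstract above, predicting contamination from environmental variables, is a standard supervised-learning setup. As a hedged illustration only (the authors' models, features, and data are not specified here), a minimal logistic-regression classifier over such variables could look like this:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    # Plain batch gradient descent on the logistic loss.
    # X: (n, d) feature matrix, e.g. [temperature, humidity, ...]; y: (n,) 0/1.
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient step
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)
```

In practice one would use a mature library (e.g. scikit-learn) and richer models, but the sketch shows the shape of the indirect prediction task.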

Reports on the topic "Multi-Light Image Collections"

1

Letcher, Theodore, Julie Parno, Zoe Courville, Lauren Farnsworth, and Jason Olivier. A generalized photon-tracking approach to simulate spectral snow albedo and transmittance using X-ray microtomography and geometric optics. Engineer Research and Development Center (U.S.), 2023. http://dx.doi.org/10.21079/11681/47122.

Full text
Abstract:
A majority of snow radiative transfer models (RTMs) treat snow as a collection of idealized grains rather than an organized ice–air matrix. Here we present a generalized multi-layer photon-tracking RTM that simulates light reflectance and transmittance of snow based on X-ray microtomography images, treating snow as a coherent 3D structure rather than a collection of grains. The model uses a blended approach to expand ray-tracing techniques applied to sub-1 cm3 snow samples to snowpacks of arbitrary depths. While this framework has many potential applications, this study’s effort is focused on simulating reflectance and transmittance in the visible and near infrared (NIR) through thin snowpacks as this is relevant for surface energy balance and remote sensing applications. We demonstrate that this framework fits well within the context of previous work and capably reproduces many known optical properties of a snow surface, including the dependence of spectral reflectance on the snow specific surface area and incident zenith angle as well as the surface bidirectional reflectance distribution function (BRDF). To evaluate the model, we compare it against reflectance data collected with a spectroradiometer at a field site in east-central Vermont. In this experiment, painted panels were inserted at various depths beneath the snow to emulate thin snow. The model compares remarkably well against the reflectance measured with a spectroradiometer, with an average RMSE of 0.03 in the 400–1600 nm range. Sensitivity simulations using this model indicate that snow transmittance is greatest in the visible wavelengths, limiting light penetration to the top 6 cm of the snowpack for fine-grain snow but increasing to 12 cm for coarse-grain snow. These results suggest that the 5% transmission depth in snow can vary by over 6 cm according to the snow type.
APA, Harvard, Vancouver, ISO, and other styles
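The photon-tracking idea behind the report above can be conveyed with a deliberately simplified 1-D Monte Carlo slab experiment: photons take exponential free paths, scatter isotropically with a given single-scattering albedo, and are tallied as reflected or transmitted. This toy sketch (all parameters illustrative) ignores the 3-D microtomography geometry that is the report's actual contribution:

```python
import numpy as np

def slab_monte_carlo(depth, mfp=1.0, ssa=0.99, n=4000, seed=0):
    """1-D photon random walk through a scattering slab.
    depth: slab thickness; mfp: mean free path between scattering events;
    ssa: single-scattering albedo (survival probability per event).
    Returns (reflectance, transmittance) as photon fractions."""
    rng = np.random.default_rng(seed)
    refl = trans = 0
    for _ in range(n):
        z, mu = 0.0, 1.0                      # enter at top, heading down
        while True:
            z += mu * rng.exponential(mfp)    # exponential free path
            if z < 0.0:
                refl += 1                     # escaped back out of the top
                break
            if z > depth:
                trans += 1                    # escaped out of the bottom
                break
            if rng.random() > ssa:
                break                         # absorbed inside the slab
            mu = rng.uniform(-1.0, 1.0)       # isotropic re-scatter
    return refl / n, trans / n
```

Even this crude model reproduces the qualitative behaviour the abstract describes: thin slabs transmit most of the incident light, while thick ones become strongly reflective.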