A ready-made bibliography on the topic "Low light enhancement"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Consult the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Low light enhancement".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant parameters are available in the metadata.

Journal articles on the topic "Low light enhancement"

1

Hao, Shijie, Xu Han, Yanrong Guo, and Meng Wang. "Decoupled Low-Light Image Enhancement." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (2022): 1–19. http://dx.doi.org/10.1145/3498341.

Full text source
Abstract:
The visual quality of photographs taken under imperfect lighting conditions can be degraded by multiple factors, e.g., low lightness, imaging noise, and color distortion. Current low-light image enhancement models focus on improving low lightness only, or simply treat all the degradation factors as a whole, leading to sub-optimal results. In this article, we propose to decouple the enhancement model into two sequential stages. The first stage focuses on improving scene visibility based on a pixel-wise non-linear mapping. The second stage focuses on improving appearance fidelity by suppressing the remaining degradation factors. The decoupled model facilitates enhancement in two respects. On the one hand, the whole low-light enhancement task can be divided into two easier subtasks. The first aims only to enhance visibility; it also helps to bridge the large intensity gap between low-light and normal-light images. The second subtask can then be described as local appearance adjustment. On the other hand, since the parameter matrix learned in the first stage is aware of the lightness distribution and the scene structure, it can be incorporated into the second stage as complementary information. In the experiments, our model demonstrates state-of-the-art performance in both qualitative and quantitative comparisons with other low-light image enhancement models. In addition, ablation studies validate the effectiveness of our model in multiple aspects, such as model structure and loss function.
APA, Harvard, Vancouver, ISO, and other styles
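The two-stage decoupling described in the abstract can be illustrated with a minimal NumPy sketch. The paper learns both stages; here a fixed gamma curve stands in for the stage-1 pixel-wise non-linear mapping and a mean filter stands in for the stage-2 appearance refinement, so the function names and parameters below are illustrative only.

```python
import numpy as np

def stage1_visibility(img, gamma=0.45):
    # Stage-1 stand-in: pixel-wise non-linear mapping that lifts low
    # intensities (the paper learns this mapping; a fixed gamma curve
    # is used here purely for illustration).
    return np.clip(img, 0.0, 1.0) ** gamma

def stage2_refine(img, kernel=3):
    # Stage-2 stand-in: crude local appearance adjustment via mean
    # filtering (the paper uses a learned network conditioned on the
    # stage-1 parameter matrix).
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(kernel):
        for dx in range(kernel):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (kernel * kernel)

low = np.full((8, 8), 0.04)                     # uniformly dark patch
bright = stage2_refine(stage1_visibility(low))  # roughly 0.23 everywhere
```

The key property mirrored here is that stage 1 closes most of the intensity gap, leaving stage 2 only a local adjustment problem.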
2

SANTHIYA, S., S. NANDHINI, M. MOGANA PRIYA, and K. SELVA BHUVANESWARI. "LOW-LIGHT IMAGE ENHANCEMENT USING INVERTED ATMOSPHERIC LIGHT." i-manager’s Journal on Software Engineering 15, no. 4 (2021): 8. http://dx.doi.org/10.26634/jse.15.4.18142.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Park, Seonhee, Kiyeon Kim, Soohwan Yu, and Joonki Paik. "Contrast Enhancement for Low-light Image Enhancement: A Survey." IEIE Transactions on Smart Processing & Computing 7, no. 1 (2018): 36–48. http://dx.doi.org/10.5573/ieiespc.2018.7.1.036.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Kang, Zhihao Xv, Zhe Yang, Lian Liu, Xinyu Li, and Xiaopeng Hu. "Continuous detail enhancement framework for low-light image enhancement." Displays 88 (July 2025): 103040. https://doi.org/10.1016/j.displa.2025.103040.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Dabas, Megha. "Low Light Image Enhancement Using Python." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 12 (2024): 1–8. https://doi.org/10.55041/ijsrem39588.

Full text source
Abstract:
The poor signal-to-noise ratio (SNR) in low-light photos frequently results in significant sensor noise. Moreover, the noise is non-Gaussian and signal-dependent. We propose a novel denoising technique that tackles this issue by combining weighted total variation (TV) regularization with a Poisson noise model. The weighted TV regularization effectively eliminates noise while preserving details, whereas the Poisson noise model captures the nature of the noise. Our approach achieves better NIQE scores than state-of-the-art techniques. Keywords: Cooperative Intelligent Transport Systems (C-ITS), Convolutional Neural Network (CNN), Image Collection, CNN+Pyramid Model, Signal-to-Noise Ratio (SNR), Total Variation (TV), Image Segmentation, Histogram Analysis, Image Processing, Image Filtering, Image Enhancement
APA, Harvard, Vancouver, ISO, and other styles
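Total-variation regularized denoising of the kind the abstract combines with a Poisson noise model can be sketched with plain gradient descent. This is a simplified stand-in (unweighted TV, Gaussian fidelity term) rather than the paper's weighted-TV/Poisson formulation; `lam`, `step`, and the iteration count are illustrative choices.

```python
import numpy as np

def tv_denoise(noisy, lam=0.15, n_iter=80, step=0.2):
    # Gradient-descent minimization of |u - noisy|^2 / 2 + lam * TV(u).
    u = noisy.copy()
    for _ in range(n_iter):
        # forward differences of the current estimate
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2) + 1e-8
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient (TV subgradient)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * ((u - noisy) - lam * div)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                         # bright square on dark background
noisy = clean + rng.normal(0.0, 0.3, clean.shape)
denoised = tv_denoise(noisy)
```

TV's appeal for low-light denoising, as the abstract notes, is exactly this trade-off: flat regions are smoothed while strong edges survive.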
6

Journal, IJSREM, Dr S. Babu, Dr R. Rajmohan, et al. "MONOCHROME AUGMENTED LOW-LIGHT IMAGE ENHANCEMENT." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 10 (2024): 1–8. http://dx.doi.org/10.55041/ijsrem37853.

Full text source
Abstract:
Low-light short-exposure photography is challenging but important for capturing temporally dynamic scenes while avoiding unwanted effects such as ghosting, motion blur, camera shake, and image artifacts. Monochrome-augmented low-light image enhancement aims to obtain improved low-light short-exposure images by using an additional monochrome sensor and its data. Monochrome images typically possess a higher SNR (signal-to-noise ratio) and better luma information, since they avoid the attenuation introduced by the Bayer filter. The objective here is to develop a deep-learning-based approach to enhance low-light short-exposure images from the main sensor by using an additional low-resolution monochrome sensor.
APA, Harvard, Vancouver, ISO, and other styles
7

Xie, Junyi, Hao Bian, Yuanhang Wu, Yu Zhao, Linmin Shan, and Shijie Hao. "Semantically-guided low-light image enhancement." Pattern Recognition Letters 138 (October 2020): 308–14. http://dx.doi.org/10.1016/j.patrec.2020.07.041.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Zhou, Chu, Minggui Teng, Youwei Lyu, Si Li, Chao Xu, and Boxin Shi. "Polarization-Aware Low-Light Image Enhancement." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (2023): 3742–50. http://dx.doi.org/10.1609/aaai.v37i3.25486.

Full text source
Abstract:
Polarization-based vision algorithms have found uses in various applications, since polarization provides additional physical constraints. However, in low-light conditions their performance is severely degraded, since the captured polarized images can be noisy, leading to noticeable degradation in the degree of polarization (DoP) and the angle of polarization (AoP). Existing low-light image enhancement methods cannot handle polarized images well, since they operate in the intensity domain without effectively exploiting the information provided by polarization. In this paper, we propose a Stokes-domain enhancement pipeline along with a dual-branch neural network to handle the problem in a polarization-aware manner. Two application scenarios (reflection removal and shape from polarization) are presented to show how our enhancement can improve their results.
APA, Harvard, Vancouver, ISO, and other styles
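The degree and angle of polarization mentioned in the abstract follow directly from the linear Stokes parameters, which a typical division-of-focal-plane sensor measures via polarizer angles 0°, 45°, 90°, and 135°. A minimal sketch of those standard formulas (not the paper's enhancement network):

```python
import numpy as np

def stokes_from_polarized(i0, i45, i90, i135):
    # Linear Stokes parameters from intensities measured behind a
    # polarizer at 0, 45, 90, and 135 degrees.
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dop_aop(s0, s1, s2):
    # Degree and angle of (linear) polarization.
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)
    aop = 0.5 * np.arctan2(s2, s1)
    return dop, aop

# Light fully polarized along 0 degrees: DoP = 1, AoP = 0.
s0, s1, s2 = stokes_from_polarized(1.0, 0.5, 0.0, 0.5)
dop, aop = dop_aop(s0, s1, s2)
```

Because DoP and AoP are ratios of noisy differences, sensor noise is amplified in them, which is why the paper argues for denoising in the Stokes domain rather than the intensity domain.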
9

Liang, Xiwen, and Xiaoyan Chen. "Enhancement methodology for low light image." Proceedings of International Conference on Artificial Life and Robotics 28 (February 9, 2023): 12–19. http://dx.doi.org/10.5954/icarob.2023.ps3.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
10

Zhai, Guangtao, Wei Sun, Xiongkuo Min, and Jiantao Zhou. "Perceptual Quality Assessment of Low-light Image Enhancement." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (2021): 1–24. http://dx.doi.org/10.1145/3457905.

Full text source
Abstract:
Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions such as structure damage, color shift, and noise into the enhanced images. Despite various LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, which are generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and stack-based high dynamic range (HDR) image as a reference and evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that distortions introduced in low-light enhancement are significantly different from distortions considered in traditional image IQA databases that are well-studied, and the current state-of-the-art FR IQA models are also not suitable for evaluating their quality. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index by evaluating the image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preserving, which have captured the most key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms the state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. 
To the best of our knowledge, this article is the first of its kind comprehensive low-light image enhancement quality assessment study.
APA, Harvard, Vancouver, ISO, and other styles
More sources
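As a toy illustration of the full-reference idea, two of LIEQA's four aspects (luminance enhancement and structure preservation) can be approximated with very simple statistics; the actual LIEQA index is far more elaborate, and the function below is purely illustrative.

```python
import numpy as np

def simple_fr_scores(enhanced, reference):
    # Toy luminance score: closeness of global mean brightness.
    lum = 1.0 - abs(float(enhanced.mean()) - float(reference.mean()))
    # Toy structure score: correlation of horizontal gradients.
    gx_e = np.diff(enhanced, axis=1)
    gx_r = np.diff(reference, axis=1)
    struct = float((gx_e * gx_r).sum() /
                   (np.sqrt((gx_e**2).sum() * (gx_r**2).sum()) + 1e-8))
    return lum, struct

# Stand-in for the MEF/HDR reference image the paper evaluates against.
reference = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
lum, struct = simple_fr_scores(reference.copy(), reference)
```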

Doctoral dissertations on the topic "Low light enhancement"

1

Dalasari, Venkata Gopi Krishna, and Sri Krishna Jayanty. "Low Light Video Enhancement along with Objective and Subjective Quality Assessment." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13500.

Full text source
Abstract:
Enhancing low-light videos has been quite a challenge over the years. A video taken in low light always suffers from low dynamic range and high noise. This master's thesis presents contributions within the field of low-light video enhancement. Three models with different tone-mapping algorithms are proposed for enhancing extremely low-light, low-quality video. For temporal noise removal, a motion-compensated Kalman structure is presented. The dynamic range of the low-light video is stretched using three different methods. In Model 1, dynamic range is increased by adjusting the RGB histograms using gamma correction with a modified version of adaptive clipping thresholds. In Model 2, a shape-preserving dynamic-range stretch of the RGB histogram is applied using SMQT. In Model 3, contrast enhancement is done using CLAHE. In the final stage, the residual noise is removed using an efficient non-local means (NLM) filter. The performance of the models is compared on various objective VQA metrics such as NIQE, GCF, and SSIM. To evaluate the actual performance of the models, subjective tests were conducted, given the large number of applications that target humans as the end users of the video. The performance of the three models is compared on a total of ten input videos taken in extremely low-light environments. A total of 25 human observers subjectively evaluated the performance of the three models based on the parameters contrast, visibility, visual pleasantness, amount of noise, and overall quality. A detailed statistical evaluation of the relative performance of the three models is also provided.
APA, Harvard, Vancouver, ISO, and other styles
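Model 1's ingredients (histogram clipping followed by gamma correction) can be sketched in a few lines. The thesis derives its clipping thresholds adaptively; fixed percentiles are used here as an illustrative stand-in.

```python
import numpy as np

def stretch_and_gamma(channel, low_pct=1.0, high_pct=99.0, gamma=0.5):
    # Clip at percentile thresholds, stretch to [0, 1], then apply
    # gamma correction (the thesis derives the clipping thresholds
    # adaptively; fixed percentiles are used here).
    lo, hi = np.percentile(channel, [low_pct, high_pct])
    stretched = np.clip((channel - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
    return stretched ** gamma

dark = np.linspace(0.0, 0.2, 101)   # a dark, low-dynamic-range ramp
lifted = stretch_and_gamma(dark)
```

Applied per RGB channel, this both widens the dynamic range and brightens the midtones, which is the core of the thesis's Model 1.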
2

CASULA, ESTER ANNA RITA. "Low mass dimuon production with the ALICE muon spectrometer." Doctoral thesis, Università degli Studi di Cagliari, 2014. http://hdl.handle.net/11584/266451.

Full text source
Abstract:
Low-mass vector meson (ρ, ω, Φ) production provides key information on the hot and dense state of strongly interacting matter produced in high-energy heavy-ion collisions (the Quark-Gluon Plasma). Strangeness enhancement is one of the possible signatures of Quark-Gluon Plasma formation and can be accessed through the measurement of Φ meson production with respect to the ρ and ω mesons, while the measurement of the Φ nuclear modification factor provides a powerful tool to probe the production dynamics and the hadronization process in relativistic heavy-ion collisions. Vector mesons can be detected through their decays into muon pairs with the ALICE muon spectrometer. This thesis presents the results of the measurement of the Φ differential cross section, as a function of transverse momentum, in pp collisions at √s = 2.76 TeV; the measurement of the Φ yield and of the nuclear modification factor RpA at forward and backward rapidity, as a function of transverse momentum, in p-Pb collisions at √sNN = 5.02 TeV; and the measurement of the Φ/(ρ+ω) ratio, as well as of the Φ nuclear modification factors RAA and RCP, as a function of the number of participating nucleons, in Pb-Pb collisions at √sNN = 2.76 TeV.
APA, Harvard, Vancouver, ISO, and other styles
3

Landin, Roman. "Object Detection with Deep Convolutional Neural Networks in Images with Various Lighting Conditions and Limited Resolution." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300055.

Full text source
Abstract:
Computer vision is a key component of any autonomous system. Real-world computer vision applications rely on proper and accurate detection and classification of objects. A detection algorithm that does not guarantee reasonable detection accuracy is not applicable in real-time scenarios where safety is the main objective. Factors that impact detection accuracy are illumination conditions and image resolution; both contribute to the degradation of objects and lead to low classification and detection accuracy. Recent development of Convolutional Neural Network (CNN) based algorithms offers possibilities for low-light (LL) image enhancement and super-resolution (SR) image generation, making it possible to combine such models in order to improve image quality and increase detection accuracy. This thesis evaluates different CNN models for SR generation and LL enhancement by comparing generated images against ground-truth images. To quantify the impact of each model on detection accuracy, a detection procedure was evaluated on the generated images. Experimental results on images selected from the NightOwls and Caltech Pedestrian datasets showed that super-resolution image generation and low-light image enhancement improve detection accuracy by a substantial margin. Additionally, a cascade of SR generation and LL enhancement was shown to further boost detection accuracy. However, the main drawback of such cascades is the increased computation time, which limits their applicability in a range of real-time applications.
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Szu-Chieh, and 王思傑. "Extreme Low Light Image Enhancement with Generative Adversarial Networks." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/cz8pqb.

Full text source
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, 2019. Taking photos under low-light environments is always a challenge for current imaging pipelines: image noise and artifacts corrupt the image. Given the recent success of deep learning, it may seem straightforward to train a deep convolutional network to enhance such images and restore the underlying clean image. However, the large number of parameters in deep models may require a large amount of data to train. For the low-light image enhancement task, paired data requires a short-exposure image and a long-exposure image to be taken with perfect alignment, which may not be achievable in every scene, thus limiting the choice of possible scenes for capturing paired data and increasing the effort of collecting training data. Also, data-driven solutions tend to replace the entire camera pipeline and cannot be easily integrated into existing pipelines. Therefore, we propose to handle the task with a two-stage pipeline consisting of an imperfect denoise network and a bias-correction net, BC-UNet. Our method only requires noisy bursts of short-exposure images and unpaired long-exposure images, relaxing the effort of collecting training data. Also, our method works in the raw domain and can easily be integrated into the existing camera pipeline. Our method achieves improvements comparable to other methods under the same settings.
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Hsueh-I., and 陳學儀. "Deep Burst Low Light Image Enhancement with Alignment, Denoising and Blending." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/sfk685.

Full text source
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Networking and Multimedia, 2018. Taking photos under low-light environments is always a challenge for most cameras. In this thesis, we propose a neural-network pipeline for processing burst short-exposure raw data. Our method comprises alignment, denoising, and blending. First, we use FlowNet 2.0 to predict the optical flow between burst images and align them. Then we feed the aligned burst raw data into a DenoiseUNet, which includes a denoising part and a color part, to generate an RGB image. Finally, we use a MaskUNet to generate a mask that can identify misalignment. We blend the outputs from the single raw image and from the burst raw images using the mask. Our method shows that using burst inputs yields a significant improvement over a single input.
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Chih-Ming, and 陳知名. "FPGA-based Real-time Low-Light Image Enhancement for Side-Mirror System." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/75th2h.

Full text source
Abstract:
Master's thesis, National Taipei University of Technology, Department of Electronic Engineering, 2018. In recent years, cameras and displays have been widely used on vehicles. Because the camera's wide angle exceeds the field of view of the mirror, traditional side mirrors are gradually being replaced by camera-and-display systems. When driving at night, images captured in low-light conditions suffer from low visibility, putting drivers and pedestrians in danger. This thesis designs a PCB circuit connecting two motor-control modules and the side-mirror control lines to an FPGA, and presents a high-speed method for enhancing low-light images. The proposed brightness-enhancement algorithm operates in YUV space using a non-linear transfer function and is implemented in hardware on a Xilinx ZedBoard to meet a 25 fps requirement. Software execution time and the LOE (lightness-order error) of the brightness-enhanced image are used for performance evaluation. Compared with other enhancement algorithms, the proposed method reduces execution time and LOE by up to 97.5% and 77%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
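The YUV-domain approach the abstract describes, lifting luma with a non-linear transfer while leaving chroma untouched, can be sketched in floating point (the thesis's exact transfer function and fixed-point FPGA implementation are not reproduced here).

```python
import numpy as np

def enhance_yuv(rgb, gamma=0.5):
    # BT.601 RGB -> YUV, non-linear transfer on luma only, YUV -> RGB.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    y = np.clip(y, 0.0, 1.0) ** gamma        # lift dark luma, keep chroma
    r2 = y + 1.140 * v
    g2 = y - 0.395 * u - 0.581 * v
    b2 = y + 2.032 * u
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)

dark_gray = np.full((4, 4, 3), 0.04)
brightened = enhance_yuv(dark_gray)          # lifted to roughly 0.2 gray
```

Working on luma alone is what makes such designs cheap enough for real-time hardware: only one channel passes through the non-linear curve, and color balance is preserved automatically.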
7

Chen, Yi-Jun, and 陳怡均. "The Study on Video Enhancement in the Low-Light Environment by Spatio-Temporal Filtering." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/75377479563860423821.

Full text source
Abstract:
Master's thesis, National Kaohsiung University of Applied Sciences, Graduate Institute of Electronic and Information Engineering, 2008. As science and technology develop, digital media increasingly take on diverse structures and forms, and people depend more and more on digital media to acquire information and knowledge. In practical video systems, the source image is easily corrupted by noise during acquisition, especially in a low-light environment. Noise causes significant degradation of video quality because it remains as large residual errors and results in poor compression efficiency. Noise reduction is therefore an important research area in video processing. In this thesis, we propose a video noise-reduction method for low-light environments. The basic idea is to analyze the characteristics of the image, divide it into four kinds of regions, and filter each region differently. The main advantage of the technique is that it reduces the noise variance in smooth areas while retaining the sharpness of edges at object boundaries. Experimental results show that our method is superior to previous methods in both objective evaluation and subjective human evaluation. The filter achieves higher video quality, improves the compression ratio, and supports real-time processing.
APA, Harvard, Vancouver, ISO, and other styles
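The temporal half of such spatio-temporal filtering can be sketched as motion-gated frame averaging: average a pixel over time only where the scene appears static. The thesis's four-region classification is finer-grained; the single threshold below is illustrative.

```python
import numpy as np

def temporal_filter(frames, motion_thresh=0.3):
    # Average each pixel across the burst only where all frames agree
    # with the current (last) frame, i.e. the pixel is static;
    # elsewhere keep the current frame to avoid ghosting.
    frames = np.asarray(frames, dtype=float)
    current = frames[-1]
    static = np.abs(frames - current).max(axis=0) < motion_thresh
    return np.where(static, frames.mean(axis=0), current)

rng = np.random.default_rng(1)
frames = 0.5 + rng.normal(0.0, 0.05, (8, 16, 16))  # static scene + sensor noise
filtered = temporal_filter(frames)
```

Averaging N static frames reduces the noise standard deviation by roughly a factor of √N, which is why temporal filtering is so effective on low-light video.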
8

Malik, Sameer. "Low Light Image Restoration: Models, Algorithms and Learning with Limited Data." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6120.

Full text source
Abstract:
The ability to capture high quality images under low-light conditions is an important feature of most hand-held devices and surveillance cameras. Images captured under such conditions often suffer from multiple distortions such as poor contrast, low brightness, color-cast and severe noise. While adjusting camera hardware settings such as aperture width, ISO level and exposure time can improve the contrast and brightness levels in the captured image, they often introduce artifacts including shallow depth-of-field, noise and motion blur. Thus, it is important to study image processing approaches to improve the quality of low-light images. In this thesis, we study the problem of low-light image restoration. In particular, we study the design of low-light image restoration algorithms based on statistical models, deep learning architectures and learning approaches when only limited labelled training data is available. In our statistical model approach, the low-light natural image in the band pass domain is modelled by statistically relating a Gaussian scale mixture model for the pristine image, with the low-light image, through a detail loss coefficient and Gaussian noise. The detail loss coefficient in turn is statistically described using a posterior distribution with respect to its estimate based on a prior contrast enhancement algorithm. We then design our low-light enhancement and denoising (LLEAD) method by computing the minimum mean squared error estimate of the pristine image band pass coefficients. We create the Indian Institute of Science low-light image dataset of well-lit and low-light image pairs to learn the model parameters and evaluate our enhancement method. We show through extensive experiments on multiple datasets that our method helps better enhance the contrast while simultaneously controlling the noise when compared to other classical joint contrast enhancement and denoising methods. 
Deep convolutional neural networks (CNNs) based on residual learning and end-to-end multiscale learning have been successful in achieving state-of-the-art performance in image restoration. However, their application to joint contrast enhancement and denoising under low-light conditions is challenging owing to the complex nature of the distortion process, involving both loss of details and noise. We address this challenge through two lines of approach, one which exploits the statistics of natural images and the other which exploits the structure of the distortion process. We first propose a multiscale learning approach by learning the subbands obtained in a Laplacian pyramid decomposition. We refer to our framework as the low-light restoration network (LLRNet). Our approach consists of a bank of CNNs, where each CNN is trained to explicitly predict a different subband of the Laplacian pyramid of the well-exposed image. We show through extensive experiments on multiple datasets that our approach produces better quality restored images when compared to other low-light restoration methods. In our second line of approach, we learn a distortion model that relates a noisy low-light and ground truth image pair. The low-light image is modeled to suffer from contrast distortion and additive noise. We model the loss of contrast through a parametric function, which enables the estimation of the underlying noise. We then use a pair of CNN models to learn the noise and the parameters of a function to achieve contrast enhancement. This contrast enhancement function is modeled as a linear combination of multiple gamma transforms. We show through extensive evaluations that our low-light Image Model for Enhancement Network (LLIMENet) achieves superior restoration performance when compared to other methods on several publicly available datasets. 
While CNN models are fairly successful in low-light image restoration, such approaches require a large number of paired low-light and ground truth image pairs for training. Thus, we study the problem of semi-supervised learning for low-light image restoration when limited low-light images have ground truth labels. Our main contributions in this work are twofold. We first deploy an ensemble of low-light restoration networks to restore the unlabeled images and generate a set of potential pseudo-labels. We model the contrast distortions in the labeled set to generate different sets of training data and create the ensemble of networks. We then design a contrastive self-supervised learning based image quality measure to obtain the pseudo-label among the images restored by the ensemble. We show that training the restoration network with the pseudo-labels allows us to achieve excellent restoration performance even with very few labeled pairs. Our extensive experiments on multiple datasets show the superior performance of our semi-supervised low-light image restoration compared to other approaches. Finally, we study an even more constrained problem setting when only very few labelled image pairs are available for training. To address this challenge, we augment the available labelled data with large number of low-light and ground-truth image pairs through a CNN based model that generates low-light images. In particular, we introduce a contrast distortion auto-encoder framework that learns to disentangle the contrast distortion and content features from a low-light image. The contrast distortion features from a low-light image are then fused with the content features from another pristine image to create a low-light version of the pristine image. We achieve the disentanglement of distortion from image content through the novel use of a contrastive loss to constrain the training. We then use the generated data to train low-light restoration models. 
We evaluate our data generation method in the 5-shot and 10-shot labelled data settings to show the effectiveness of our models.
APA, Harvard, Vancouver, ISO, and other styles
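The contrast-enhancement function that LLIMENet models as a linear combination of gamma transforms can be sketched directly; the thesis learns the combination weights with a CNN, whereas fixed example weights and gamma exponents are used here.

```python
import numpy as np

def gamma_mixture(x, weights, gammas=(0.3, 0.6, 1.0, 1.8)):
    # Contrast enhancement as a convex combination of gamma curves;
    # LLIMENet predicts the weights, fixed example values are used here.
    x = np.clip(x, 0.0, 1.0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * x**g for wi, g in zip(w, gammas))

lifted = gamma_mixture(np.array([0.05, 0.1, 0.5]),
                       weights=[1.0, 1.0, 1.0, 1.0])
```

Because each basis curve maps [0, 1] to [0, 1] and the weights are convex, the combined transfer function is guaranteed to stay in range while remaining flexible enough to fit a wide family of contrast curves.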

Books on the topic "Low light enhancement"

1

Hong, M. H. Laser applications in nanotechnology. Edited by A. V. Narlikar and Y. Y. Fu. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199533060.013.24.

Full text source
Abstract:
This article discusses a variety of laser applications in nanotechnology. The laser has proven to be a mature and reliable manufacturing tool, with applications in modern industries ranging from surface cleaning to thin-film deposition. Laser nanoengineering has several advantages over electron-beam and focused ion beam processing. For example, it is a low-cost, high-speed process in air, vacuum, or chemical environments, and it also offers flexible integration control. This article considers laser nanotechnology in the following areas: pulsed laser ablation for nanomaterials synthesis; laser nanoprocessing to make nanobumps for disk-media nanotribology and to anneal ultrashort PN junctions; surface nanopatterning with near-field and light-enhancement effects; and large-area parallel laser nanopatterning by laser interference lithography and laser irradiation through a microlens array. Based on these applications, the article argues that the laser will continue to be one of the most promising nanoengineering tools in next-generation manufacturing.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Low light enhancement"

1

Wang, Haodian, Yang Wang, Yang Cao, and Zheng-Jun Zha. "Fusion-Based Low-Light Image Enhancement." In MultiMedia Modeling. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27077-2_10.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, William Y., Lisa Liu, and Pingping Cai. "Adversarially Regularized Low-Light Image Enhancement." In MultiMedia Modeling. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53305-1_18.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Deyang, Zhengqu Li, Xin Zheng, Jian Ma, and Yuming Fang. "Low-Light Light-Field Image Enhancement With Geometry Consistency." In Lecture Notes in Computer Science. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-8685-5_32.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Fotiadou, Konstantina, Grigorios Tsagkatakis, and Panagiotis Tsakalides. "Low Light Image Enhancement via Sparse Representations." In Lecture Notes in Computer Science. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11758-4_10.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Hershkovitch Neiterman, Evgeny, Michael Klyuchka, and Gil Ben-Artzi. "Adaptive Enhancement of Extreme Low-Light Images." In Advanced Concepts for Intelligent Vision Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45382-3_2.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Vinh, Truong Quang, Tran Quang Duy, and Nguyen Quang Luc. "Low-Light Image Enhancement Using Quaternion CNN." In Intelligence of Things: Technologies and Applications. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-46573-4_23.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
7

He, Wenchao, and Yutao Liu. "Low-Light Image Enhancement via Unsupervised Learning." In Artificial Intelligence. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-8850-1_19.

8. Kavya, Avvaru Greeshma, Uruguti Aparna, and Pallikonda Sarah Suhasini. "Enhancement of Low-Light Images Using CNN." In Emerging Research in Computing, Information, Communication and Applications. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1342-5_1.

9. Zhang, Zhixiang, and Shan Jiang. "Curve Enhancement: A No-Reference Method for Low-Light Image Enhancement." In Communications in Computer and Information Science. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8148-9_3.

10. Song, Juan, Liang Zhang, Peiyi Shen, Xilu Peng, and Guangming Zhu. "Single Low-Light Image Enhancement Using Luminance Map." In Communications in Computer and Information Science. Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3005-5_9.


Conference abstracts on "Low light enhancement"

1. Florin, Toadere. "Low light images enhancement." In 2024 Advanced Topics on Measurement and Simulation (ATOMS). IEEE, 2024. https://doi.org/10.1109/atoms60779.2024.10921532.

2. Zhang, Gengchen, Yulun Zhang, Xin Yuan, and Ying Fu. "Binarized Low-Light Raw Video Enhancement." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02433.

3. Saini, Saurabh, and P. J. Narayanan. "Specularity Factorization for Low-Light Enhancement." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.00009.

4. Kapoor, Shalini, Yojna Arora, Nidhi Bansal, Khushwant Virdi, Firdous Sadaf M. Ismail, and Suraj Malik. "Low Light Image Enhancement: A Special Click." In 2025 2nd International Conference on Computational Intelligence, Communication Technology and Networking (CICTN). IEEE, 2025. https://doi.org/10.1109/cictn64563.2025.10932613.

5. Zheng, Zhihao, and Mooi Choo Chuah. "Latent Disentanglement for Low Light Image Enhancement." In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2024. https://doi.org/10.1109/iros58592.2024.10802761.

6. Chen, Ethan, Robail Yasrab, and Pramit Saha. "Using MIRNet for Low Light Image Enhancement." In 12th International Conference on Bioimaging. SCITEPRESS - Science and Technology Publications, 2025. https://doi.org/10.5220/0013099600003911.

7. Sun, Yao, and Qingfeng Hu. "Low-Light Image Enhancement Based on Light Effect Region Suppression." In 2025 7th International Conference on Software Engineering and Computer Science (CSECS). IEEE, 2025. https://doi.org/10.1109/csecs64665.2025.11009235.

8. Liu, Ziyang, Tianjiao Zeng, Qifan He, Xu Zhan, Wensi Zhang, and XiaoLing Zhang. "Low-light lensless image enhancement via diffusion model." In Holography, Diffractive Optics, and Applications XIV, edited by Changhe Zhou, Liangcai Cao, Ting-Chung Poon, and Hiroshi Yoshikawa. SPIE, 2024. http://dx.doi.org/10.1117/12.3035654.

9. Gao, Yin, Hao Li, Chao Yan, et al. "Low-Light Image Enhancement via Camera Response Function." In 2024 9th International Conference on Robotics and Automation Engineering (ICRAE). IEEE, 2024. https://doi.org/10.1109/icrae64368.2024.10851690.

10. Wang, Zeyu, and Yan Qi. "Low-light Image Enhancement Algorithm Based on Transformer." In 2024 2nd International Conference on Signal Processing and Intelligent Computing (SPIC). IEEE, 2024. http://dx.doi.org/10.1109/spic62469.2024.10691514.


Organizational reports on "Low light enhancement"

1. Birkmire, Robert, Juejun Hu, and Kathleen Richardson. Beyond the Lambertian limit: Novel low-symmetry gratings for ultimate light trapping enhancement in next-generation photovoltaics. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1419008.

2. Norelli, John L., Moshe Flaishman, Herb Aldwinckle, and David Gidoni. Regulated expression of site-specific DNA recombination for precision genetic engineering of apple. United States Department of Agriculture, 2005. http://dx.doi.org/10.32747/2005.7587214.bard.

Abstract:
Objectives: The original objectives of this project were to: 1) evaluate inducible promoters for the expression of recombinase in apple (USDA-ARS); 2) develop alternative selectable markers for use in apple to facilitate the positive selection of gene excision by recombinase (Cornell University); 3) compare the activity of three different recombinase systems (Cre/lox, FLP/FRT, and R/RS)in apple using a rapid transient assay (ARO); and 4) evaluate the use of recombinase systems in apple using the best promoters, selectable markers and recombinase systems identified in 1, 2 and 3 above (Collaboratively). Objective 2 was revised from the development alternative selectable markers, to the development of a marker-free selection system for apple. This change in approach was taken due to the inefficiency of the alternative markers initially evaluated in apple, phosphomannose-isomerase and 2-deoxyglucose-6-phosphate phosphatase, and the regulatory advantages of a marker-free system. Objective 3 was revised to focus primarily on the FLP/FRT recombinase system, due to the initial success obtained with this recombinase system. Based upon cooperation between researchers (see Achievements below), research to evaluate the use of the FLP recombinase system under light-inducible expression in apple was then conducted at the ARO (Objective 4). Background: Genomic research and genetic engineering have tremendous potential to enhance crop performance, improve food quality and increase farm profits. However, implementing the knowledge of genomics through genetically engineered fruit crops has many hurdles to be overcome before it can become a reality in the orchard. Among the most important hurdles are consumer concerns regarding the safety of transgenics and the impact this may have on marketing. The goal of this project was to develop plant transformation technologies to mitigate these concerns. 
Major achievements: Our results indicate activity of the FLP/FRT site-specific recombination system for the first time in apple, and additionally, we show light-inducible activation of the recombinase in trees. Initial selection of apple transformation events is conducted under dark conditions, and tissue cultures are then moved to light conditions to promote marker excision and plant development. As trees are perennial and cross-fertilization is not practical, the light-induced FLP-mediated recombination approach shown here provides an alternative to previously reported chemically induced recombinase approaches. In addition, a method was developed to transform apple without the use of herbicide or antibiotic resistance marker genes (marker free). Both light- and chemically inducible promoters were developed to allow controlled gene expression in fruit crops. Implications: The research supported by this grant has demonstrated the feasibility of "marker excision" and "marker free" transformation technologies in apple. The use of these safer technologies for the genetic enhancement of apple varieties and rootstocks for various traits will serve to mitigate many of the consumer and environmental concerns facing the commercialization of these improved varieties.
3. Technical Guidelines to Facilitate the Implementation of Security Council Resolution 2370 (2017) and Related International Standards and Good Practices on Preventing Terrorists from Acquiring Weapons. UNIDIR, 2022. http://dx.doi.org/10.37559/caap/22/pacav/03.

Abstract:
Terrorist acquisition of different types of weapons, including Small Arms and Light Weapons (SALW), their corresponding ammunition, improvised explosive device (IED) components, and uncrewed aerial systems (UAS) and components, poses a global threat to international peace and security. Preventing such acquisitions by terrorists presents States and the international community as well as communities of practitioners with a set of complex and multifaceted challenges. In March 2022, the UN Counter-Terrorism Committee Executive Directorate (CTED), United Nations Counter-Terrorism Centre (UNCCT) of the UN Office of Counter-Terrorism (UNOCT) and UNIDIR launched the “Technical guidelines to facilitate the implementation of Security Council resolution 2370 (2017) and related international standards and good practices on preventing terrorists from acquiring weapons”. The technical guidelines have been developed under a joint project implemented by CTED, working on behalf of the UN Global Counter-Terrorism Coordination Compact Working Group on Border Management and Law Enforcement relating to Counter-Terrorism, funded by UNCCT and co-implemented by UNCCT and UNIDIR. With the adoption by the Security Council of its resolution 2370 (2017), the Council reaffirmed its previous decision in resolution 1373 (2001) that all States should refrain from providing any form of support to those involved in terrorist acts, including by eliminating the supply of weapons – including SALW, military equipment, UAS and their components, and IED components – to those involved in terrorist acts. The Security Council urged Member States to act cooperatively to prevent terrorists from acquiring weapons and called upon them to become party to related international and regional instruments. Resolution 2370 is the first Security Council resolution specifically dedicated to preventing terrorists from acquiring weapons. 
The technical guidelines have been developed as part of a broader project that seeks to facilitate and support the implementation of resolution 2370 (2017), relevant subsequent resolutions, good practices, and international standards. The technical guidelines aim at contributing to the enhancement of Member States’ legislative, strategic, and operational capacities to prevent, detect and counter the acquisition, illicit trafficking and use of different weapons, systems, and components. These technical guidelines are non-binding and should be considered a living working reference document. They are also expected to form a basis for dialogue at different levels, including among regional and national stakeholders in their efforts to assess, develop, review, and refine regional and national measures to prevent terrorist acquisition of weapons. Following roll-out, application and use, the document will be subject to modifications, revisions, and updates, based on feedback received from States and the technical communities of practice.