
Journal articles on the topic "Low light enhancement"



Consult the top 50 journal articles on the topic "Low light enhancement".




1

Hao, Shijie, Xu Han, Yanrong Guo, and Meng Wang. "Decoupled Low-Light Image Enhancement." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (2022): 1–19. http://dx.doi.org/10.1145/3498341.

Abstract:
The visual quality of photographs taken under imperfect lightness conditions can be degenerated by multiple factors, e.g., low lightness, imaging noise, color distortion, and so on. Current low-light image enhancement models focus on the improvement of low lightness only, or simply deal with all the degeneration factors as a whole, therefore leading to sub-optimal results. In this article, we propose to decouple the enhancement model into two sequential stages. The first stage focuses on improving the scene visibility based on a pixel-wise non-linear mapping. The second stage focuses on improving the appearance fidelity by suppressing the rest degeneration factors. The decoupled model facilitates the enhancement in two aspects. On the one hand, the whole low-light enhancement can be divided into two easier subtasks. The first one only aims to enhance the visibility. It also helps to bridge the large intensity gap between the low-light and normal-light images. In this way, the second subtask can be described as the local appearance adjustment. On the other hand, since the parameter matrix learned from the first stage is aware of the lightness distribution and the scene structure, it can be incorporated into the second stage as the complementary information. In the experiments, our model demonstrates the state-of-the-art performance in both qualitative and quantitative comparisons, compared with other low-light image enhancement models. In addition, the ablation studies also validate the effectiveness of our model in multiple aspects, such as model structure and loss function.
2

Santhiya, S., S. Nandhini, M. Mogana Priya, and K. Selva Bhuvaneswari. "Low-Light Image Enhancement Using Inverted Atmospheric Light." i-manager’s Journal on Software Engineering 15, no. 4 (2021): 8. http://dx.doi.org/10.26634/jse.15.4.18142.

3

Park, Seonhee, Kiyeon Kim, Soohwan Yu, and Joonki Paik. "Contrast Enhancement for Low-light Image Enhancement: A Survey." IEIE Transactions on Smart Processing & Computing 7, no. 1 (2018): 36–48. http://dx.doi.org/10.5573/ieiespc.2018.7.1.036.

4

Liu, Kang, Zhihao Xv, Zhe Yang, Lian Liu, Xinyu Li, and Xiaopeng Hu. "Continuous detail enhancement framework for low-light image enhancement." Displays 88 (July 2025): 103040. https://doi.org/10.1016/j.displa.2025.103040.

5

Dabas, Megha. "Low Light Image Enhancement Using Python." International Journal of Scientific Research in Engineering and Management 08, no. 12 (2024): 1–8. https://doi.org/10.55041/ijsrem39588.

Abstract:
The poor signal-to-noise ratio (SNR) in low-light photos frequently results in significant sensor noise. Moreover, the noise is non-Gaussian and signal-dependent. We propose a novel denoising technique to tackle the issue by combining weighted total variation (TV) regularization with a Poisson noise model. The weighted TV regularization effectively eliminates noise while preserving details, whereas the Poisson noise model retains the nature of the noise. Our suggested strategy achieves better NIQE scores than the most advanced techniques. Keywords: Cooperative Intelligent Transport Systems (C-ITS), Convolutional Neural Network (CNN), Image Collection, CNN+Pyramid Model, Signal-to-Noise Ratio (SNR), Total Variation (TV), Image Segmentation, Histogram Analysis, Image Processing, Image Filtering, Image Enhancement
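The weighted-TV-plus-Poisson recipe outlined in the abstract above can be illustrated with a minimal gradient-descent sketch. This is not the paper's implementation: the function name, step sizes, smoothed TV term, and the plain Gaussian test noise are all illustrative assumptions.

```python
import numpy as np

def weighted_tv_denoise(img, weight=0.1, n_iter=100, step=0.1):
    """Gradient-descent sketch of TV-regularized denoising.

    Approximately minimizes ||u - img||^2 + weight * TV(u), with a
    small eps smoothing the TV gradient. `img` is 2-D, in [0, 1].
    """
    u = img.copy()
    eps = 1e-6  # keeps the TV gradient finite on flat regions
    for _ in range(n_iter):
        # forward differences (discrete image gradients)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        # divergence of the normalized gradient field (TV descent direction)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # data term pulls toward the noisy input, TV term smooths
        u -= step * ((u - img) - weight * div)
    return np.clip(u, 0.0, 1.0)

# Toy usage: a noisy step edge, the case where TV shines.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0
noisy = np.clip(clean + 0.1 * rng.standard_normal(clean.shape), 0, 1)
den = weighted_tv_denoise(noisy)
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((den - clean) ** 2)
```

On piecewise-constant content the TV term removes noise while keeping the step edge, which is the detail-preserving behavior the abstract claims; the actual paper replaces the quadratic data term with a Poisson-noise-aware one.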
6

Babu, S., R. Rajmohan, et al. "Monochrome Augmented Low-Light Image Enhancement." International Journal of Scientific Research in Engineering and Management 08, no. 10 (2024): 1–8. http://dx.doi.org/10.55041/ijsrem37853.

Abstract:
Low-light short-exposure photography is challenging but important for capturing temporally dynamic scenes while avoiding unwanted effects such as ghosting, motion blur, camera shake, and image artifacts. Monochrome augmented low-light image enhancement aims to improve low-light short-exposure images by using an additional monochrome sensor and its data. Monochrome images typically possess a higher SNR (Signal-to-Noise Ratio) and better luma information, since they avoid the attenuation caused by the Bayer filter. The objective here is to develop a deep-learning-based approach to enhance low-light short-exposure images from the main sensor by using an additional low-resolution monochrome sensor.
7

Xie, Junyi, Hao Bian, Yuanhang Wu, Yu Zhao, Linmin Shan, and Shijie Hao. "Semantically-guided low-light image enhancement." Pattern Recognition Letters 138 (October 2020): 308–14. http://dx.doi.org/10.1016/j.patrec.2020.07.041.

8

Zhou, Chu, Minggui Teng, Youwei Lyu, Si Li, Chao Xu, and Boxin Shi. "Polarization-Aware Low-Light Image Enhancement." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (2023): 3742–50. http://dx.doi.org/10.1609/aaai.v37i3.25486.

Abstract:
Polarization-based vision algorithms have found uses in various applications since polarization provides additional physical constraints. However, in low-light conditions, their performance would be severely degenerated since the captured polarized images could be noisy, leading to noticeable degradation in the degree of polarization (DoP) and the angle of polarization (AoP). Existing low-light image enhancement methods cannot handle the polarized images well since they operate in the intensity domain, without effectively exploiting the information provided by polarization. In this paper, we propose a Stokes-domain enhancement pipeline along with a dual-branch neural network to handle the problem in a polarization-aware manner. Two application scenarios (reflection removal and shape from polarization) are presented to show how our enhancement can improve their results.
9

Liang, Xiwen, and Xiaoyan Chen. "Enhancement methodology for low light image." Proceedings of International Conference on Artificial Life and Robotics 28 (February 9, 2023): 12–19. http://dx.doi.org/10.5954/icarob.2023.ps3.

10

Zhai, Guangtao, Wei Sun, Xiongkuo Min, and Jiantao Zhou. "Perceptual Quality Assessment of Low-light Image Enhancement." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (2021): 1–24. http://dx.doi.org/10.1145/3457905.

Abstract:
Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions such as structure damage, color shift, and noise into the enhanced images. Despite various LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, which are generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and stack-based high dynamic range (HDR) image as a reference and evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that distortions introduced in low-light enhancement are significantly different from distortions considered in traditional image IQA databases that are well-studied, and the current state-of-the-art FR IQA models are also not suitable for evaluating their quality. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index by evaluating the image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preserving, which have captured the most key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms the state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. 
To the best of our knowledge, this article is the first of its kind comprehensive low-light image enhancement quality assessment study.
11

Gu, Wenjuan, Xin Li, Yuhanke Hu, Junxiang Peng, and Xiaobao Liu. "DT-Retinex: low-light enhancement network based on diffuse denoising and light enhancement." Digital Signal Processing 166 (November 2025): 105416. https://doi.org/10.1016/j.dsp.2025.105416.

12

Liu, Weiqiang, Peng Zhao, Xiangying Song, and Bo Zhang. "A Survey of Low-light Image Enhancement." Frontiers in Computing and Intelligent Systems 1, no. 3 (2022): 88–92. http://dx.doi.org/10.54097/fcis.v1i3.2242.

Abstract:
As the requirements of computer vision have risen, enhancement of low-light images has become an important research topic. Traditional low-light image enhancement algorithms can improve image brightness and detail visibility to varying degrees, but because they rest on strict mathematical derivations, such methods face bottlenecks and struggle to surpass their limits. With the development of deep learning and the emergence of large-scale datasets, deep-learning-based low-light image enhancement has become the mainstream trend. This paper first classifies traditional low-light enhancement algorithms and summarizes their improvement process; it then introduces deep-learning-based enhancement methods, organizing them by network structure and by the tasks each network component suits; finally, it presents the experimental databases and the evaluation criteria for enhanced images. Based on this discussion and the current state of the field, the paper points out the limitations of existing techniques and predicts development trends.
13

Kojima, Seiichi, Noriaki Suetake, and Eiji Uchino. "A Contrast Enhancement of Low-light Image Suppressing Over-enhancement." Japanese Journal of Ergonomics 56, Supplement (2020): 2B3–03. http://dx.doi.org/10.5100/jje.56.2b3-03.

14

Rajeswari, B. "Night Time Image Enhancement." International Journal of Scientific Research in Engineering and Management 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem29951.

Abstract:
Night time image enhancement plays a crucial role in various applications such as surveillance, autonomous driving, and photography. However, capturing high-quality images in low-light conditions remains challenging due to limited visibility and increased noise levels. In this project, we propose a novel approach for enhancing nighttime images using MIRNet, a state-of-the-art deep learning architecture specifically designed for low-light image enhancement tasks. We collect a dataset of low-light images paired with their corresponding well-exposed counterparts and train the MIRNet model to learn the mapping between the two modalities. The architecture of MIRNet incorporates convolutional layers with residual connections to effectively capture low-light image features and generate visually pleasing enhancements. We evaluate the performance of our approach on a diverse range of nighttime scenes and compare the results against existing methods. Our experiments demonstrate that MIRNet produces superior results in enhancing nighttime images, significantly improving visibility, reducing noise, and preserving image details. The proposed approach holds promise for real-world applications where high-quality nighttime imagery is essential for decision-making and visual analysis. Keywords: Night time image enhancement, MIRNet, Deep learning, Low-light imaging, Image Processing, Convolutional neural networks (CNNs), Residual connections, Supervised learning, Dataset preparation, Image quality improvement, Noise reduction, Visibility enhancement, Surveillance, Autonomous driving, Photography.
15

Liu, Lin, Junfeng An, Jianzhuang Liu, et al. "Low-Light Video Enhancement with Synthetic Event Guidance." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (2023): 1692–700. http://dx.doi.org/10.1609/aaai.v37i2.25257.

Abstract:
Low-light video enhancement (LLVE) is an important yet challenging task with many applications such as photographing and autonomous driving. Unlike single image low-light enhancement, most LLVE methods utilize temporal information from adjacent frames to restore the color and remove the noise of the target frame. However, these algorithms, based on the framework of multi-frame alignment and enhancement, may produce multi-frame fusion artifacts when encountering extreme low light or fast motion. In this paper, inspired by the low latency and high dynamic range of events, we use synthetic events from multiple frames to guide the enhancement and restoration of low-light videos. Our method contains three stages: 1) event synthesis and enhancement, 2) event and image fusion, and 3) low-light enhancement. In this framework, we design two novel modules (event-image fusion transform and event-guided dual branch) for the second and third stages, respectively. Extensive experiments show that our method outperforms existing low-light video or single image enhancement approaches on both synthetic and real LLVE datasets. Our code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/LLVE-SEG.
16

Zhou, Han, Wei Dong, Xiaohong Liu, Yulun Zhang, Guangtao Zhai, and Jun Chen. "Low-Light Image Enhancement via Generative Perceptual Priors." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 10 (2025): 10752–60. https://doi.org/10.1609/aaai.v39i10.33168.

Abstract:
Although significant progress has been made in enhancing visibility, retrieving texture details, and mitigating noise in Low-Light (LL) images, the challenge persists in applying current Low-Light Image Enhancement (LLIE) methods to real-world scenarios, primarily due to the diverse illumination conditions encountered. Furthermore, the quest for generating enhancements that are visually realistic and attractive remains an underexplored realm. In response to these challenges, we present a novel LLIE framework with the guidance of Generative Perceptual Priors (GPP-LLIE) derived from vision-language models (VLMs). Specifically, we first propose a pipeline that guides VLMs to assess multiple visual attributes of the LL image and quantify the assessment to output the global and local perceptual priors. Subsequently, to incorporate these generative perceptual priors to benefit LLIE, we introduce a transformer-based backbone in the diffusion process, and develop a new layer normalization (GPP-LN) and an attention mechanism (LPP-Attn) guided by global and local perceptual priors. Extensive experiments demonstrate that our model outperforms current SOTA methods on paired LL datasets and exhibits superior generalization on real-world data.
17

Liu, Jiaying, Dejia Xu, Wenhan Yang, Minhao Fan, and Haofeng Huang. "Benchmarking Low-Light Image Enhancement and Beyond." International Journal of Computer Vision 129, no. 4 (2021): 1153–84. http://dx.doi.org/10.1007/s11263-020-01418-8.

18

Wang, Yufei, Renjie Wan, Wenhan Yang, Haoliang Li, Lap-Pui Chau, and Alex Kot. "Low-Light Image Enhancement with Normalizing Flow." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (2022): 2604–12. http://dx.doi.org/10.1609/aaai.v36i3.20162.

Abstract:
To enhance low-light images to normally-exposed ones is highly ill-posed, namely that the mapping relationship between them is one-to-many. Previous works based on the pixel-wise reconstruction losses and deterministic processes fail to capture the complex conditional distribution of normally exposed images, which results in improper brightness, residual noise, and artifacts. In this paper, we investigate to model this one-to-many relationship via a proposed normalizing flow model. An invertible network that takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution. In this way, the conditional distribution of the normally exposed images can be well modeled, and the enhancement process, i.e., the other inference direction of the invertible network, is equivalent to being constrained by a loss function that better describes the manifold structure of natural images during the training. The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and artifact, and richer colors.
19

Liang, Hong, Ankang Yu, Mingwen Shao, and Yuru Tian. "Multi-Feature Guided Low-Light Image Enhancement." Applied Sciences 11, no. 11 (2021): 5055. http://dx.doi.org/10.3390/app11115055.

Abstract:
Due to the characteristics of low signal-to-noise ratio and low contrast, low-light images will have problems such as color distortion, low visibility, and accompanying noise, which will cause the accuracy of the target detection problem to drop or even miss the detection target. However, recalibrating the dataset for this type of image will face problems such as increased cost or reduced model robustness. To solve this kind of problem, we propose a low-light image enhancement model based on deep learning. In this paper, the feature extraction is guided by the illumination map and noise map, and then the neural network is trained to predict the local affine model coefficients in the bilateral space. Through these methods, our network can effectively denoise and enhance images. We have conducted extensive experiments on the LOL datasets, and the results show that, compared with traditional image enhancement algorithms, the model is superior to traditional methods in image quality and speed.
20

Huang, Haofeng, Wenhan Yang, Yueyu Hu, Jiaying Liu, and Ling-Yu Duan. "Towards Low Light Enhancement With RAW Images." IEEE Transactions on Image Processing 31 (2022): 1391–405. http://dx.doi.org/10.1109/tip.2022.3140610.

21

Wang, Li-Wen, Zhi-Song Liu, Wan-Chi Siu, and Daniel P. K. Lun. "Lightening Network for Low-Light Image Enhancement." IEEE Transactions on Image Processing 29 (2020): 7984–96. http://dx.doi.org/10.1109/tip.2020.3008396.

22

Ma, Long, Tengyu Ma, and Risheng Liu. "The review of low-light image enhancement." Journal of Image and Graphics 27, no. 5 (2022): 1392–409. http://dx.doi.org/10.11834/jig.210852.

23

Li, Jinfeng. "Low-light image enhancement with contrast regularization." Frontiers in Computing and Intelligent Systems 1, no. 3 (2022): 25–28. http://dx.doi.org/10.54097/fcis.v1i3.2022.

Abstract:
Because the processing of existing low-light images undergoes multiple sampling processing, there is serious information degradation, and only clear images are used as positive samples to guide network training, low-light image enhancement processing is still a challenging and unsettled problem. Therefore, a multi-scale contrast learning low-light image enhancement network is proposed. First, the image generates rich features through the input module, and then the features are imported into a multi-scale enhancement network with dense residual blocks, using positive and negative samples to guide the network training, and finally using the refinement module to enrich the image details. Experimental results on the dataset show that this method can reduce noise and artifacts in low-light images, and can improve contrast and brightness, demonstrating its advantages.
24

Patil, Akshay, Tejas Chaudhari, Ketan Deo, Kalpesh Sonawane, and Rupali Bora. "Low Light Image Enhancement for Dark Images." International Journal of Data Science and Analysis 6, no. 4 (2020): 99. http://dx.doi.org/10.11648/j.ijdsa.20200604.11.

25

Jin, Yutao, Xiaoyan Chen, and Xiwen Liang. "A lightweight low-light image enhancement network." Proceedings of International Conference on Artificial Life and Robotics 28 (February 9, 2023): 808–12. http://dx.doi.org/10.5954/icarob.2023.os31-4.

26

Patel, Kartik. "Low-Light Image Enhancement Using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 3390–96. https://doi.org/10.22214/ijraset.2025.68073.

Abstract:
Low-light image enhancement is a critical task in computer vision, aimed at improving the visibility and perceptual quality of images captured under poor lighting conditions. Traditional methods often suffer from over-enhancement, noise amplification, and loss of fine details. In this paper, we propose a lightweight Convolutional Neural Network (CNN)-based model that leverages a novel loss function combining Mean Absolute Error (MAE) and Contrast Consistency Loss (CCL). Our method focuses on preserving contrast and structural details while minimizing computational overhead. Experiments conducted on the LOL (Low-Light) dataset from Kaggle demonstrate that our model outperforms traditional methods in terms of both qualitative and quantitative metrics, achieving superior Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM).
27

Zhang, Jianqiang, and Qiusheng He. "Context aware low-light image enhancement algorithm." Chinese Journal of Liquid Crystals and Displays 40, no. 5 (2025): 751–60. https://doi.org/10.37188/cjlcd.2024-0277.

28

Wang, Yun-Fei, He-Ming Liu, and Zhao-Wang Fu. "Low-Light Image Enhancement via the Absorption Light Scattering Model." IEEE Transactions on Image Processing 28, no. 11 (2019): 5679–90. http://dx.doi.org/10.1109/tip.2019.2922106.

29

Sun, Yanpeng, Zhanyou Chang, Yong Zhao, Zhengxu Hua, and Sirui Li. "Progressive Two-Stage Network for Low-Light Image Enhancement." Micromachines 12, no. 12 (2021): 1458. http://dx.doi.org/10.3390/mi12121458.

Abstract:
At night, visual quality is reduced due to insufficient illumination so that it is difficult to conduct high-level visual tasks effectively. Existing image enhancement methods only focus on brightness improvement, however, improving image quality in low-light environments still remains a challenging task. In order to overcome the limitations of existing enhancement algorithms with insufficient enhancement, a progressive two-stage image enhancement network is proposed in this paper. The low-light image enhancement problem is innovatively divided into two stages. The first stage of the network extracts the multi-scale features of the image through an encoder and decoder structure. The second stage of the network refines the results after enhancement to further improve output brightness. Experimental results and data analysis show that our method can achieve state-of-the-art performance on synthetic and real data sets, with both subjective and objective capability superior to other approaches.
30

Deepika, V., C. Nivedha, P. S. Sai Roshini, and S. Arun Kumar. "Variance Reduction in Low Light Image Enhancement Model." International Journal of Recent Technology and Engineering (IJRTE) 9, no. 4 (2020): 139–42. https://doi.org/10.35940/ijrte.D4723.119420.

Abstract:
In image processing, enhancement of images taken in low light is considered to be a tricky and intricate process, especially for the images captured at nighttime. It is because various factors of the image such as contrast, sharpness and color coordination should be handled simultaneously and effectively. To reduce the blurs or noises on the low-light images, many papers have contributed by proposing different techniques. One such technique addresses this problem using a pipeline neural network. Due to some irregularity in the working of the pipeline neural networks model [1], a hidden layer is added to the model which results in a decrease in irregularity.
31

Shi, Yangming, Xiaopo Wu, and Ming Zhu. "Interactive and Fast Low-Light Image Enhancement Algorithm and Application." Journal of Physics: Conference Series 2258, no. 1 (2022): 012003. http://dx.doi.org/10.1088/1742-6596/2258/1/012003.

Abstract:
To obtain personalized outcomes for low-light image enhancement, a novel interactive algorithm based on a well-designed Gamma Curve is proposed to enrich the enhancement techniques. Unlike previous works, which enhance an image solely in brightness or naturalness through a specifically designed deep network, the proposed method can control the output according to the user's preferences, using the same framework with different parameters. The proposed network brings three main advantages: 1) Interactivity, which allows enhancement results to be generated according to users' preferences in a human-interactive manner; 2) Convenience, wherein the model only needs to be trained once without any reference images, after which results with different brightness can be obtained during testing by adjusting the hyper-parameter; 3) Fastness, which results from the lightweight network and the excellent properties of the Gamma Curve, letting the network operate at extraordinarily high speed. Experiments demonstrate the superiority of our algorithm relative to previous work. In addition, a multi-platform low-illumination enhancement software is explored to facilitate its application for the public.
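The gamma-curve control described in the abstract above reduces, at its core, to a pixel-wise power law with a user-tunable exponent. A minimal sketch (the function name and default exponent are illustrative assumptions, not the paper's well-designed curve):

```python
import numpy as np

def gamma_enhance(img, gamma=0.4):
    """Brighten a low-light image with a pixel-wise gamma curve.

    `img` is a float array in [0, 1]; gamma < 1 brightens,
    gamma > 1 darkens. The exponent is the interactive knob the
    user adjusts to taste.
    """
    return np.clip(img, 0.0, 1.0) ** gamma

dark = np.full((4, 4), 0.1)        # uniformly under-exposed patch
bright = gamma_enhance(dark, 0.4)  # 0.1 ** 0.4, roughly 0.4
```

Because the curve is a single closed-form mapping, re-rendering after a slider change is essentially free, which is the "fastness" and "interactivity" the abstract emphasizes.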
32

Wang, Hua, Jianzhong Cao, Lei Yang, and Jijiang Huang. "DCTE-LLIE: A Dual Color-and-Texture-Enhancement-Based Method for Low-Light Image Enhancement." Computers 13, no. 6 (2024): 134. http://dx.doi.org/10.3390/computers13060134.

Abstract:
The enhancement of images captured under low-light conditions plays a vitally important role in the area of image processing and can significantly affect the performance of following operations. In recent years, deep learning techniques have been leveraged in the area of low-light image enhancement tasks, and deep-learning-based low-light image enhancement methods have been the mainstream for low-light image enhancement tasks. However, due to the inability of existing methods to effectively maintain the color distribution of the original input image and to effectively handle feature descriptions at different scales, the final enhanced image exhibits color distortion and local blurring phenomena. So, in this paper, a novel dual color-and-texture-enhancement-based low-light image enhancement method is proposed, which can effectively enhance low-light images. Firstly, a novel color enhancement block is leveraged to help maintain color distribution during the enhancement process, which can further eliminate the color distortion effect; after that, an attention-based multiscale texture enhancement block is proposed to help the network focus on multiscale local regions and extract more reliable texture representations automatically, and a fusion strategy is leveraged to fuse the multiscale feature representations automatically and finally generate the enhanced reflection component. The experimental results on public datasets and real-world low-light images established the effectiveness of the proposed method on low-light image enhancement tasks.
33

Jiang, Yonglong, Liangliang Li, Jiahe Zhu, Yuan Xue, and Hongbing Ma. "DEANet: Decomposition Enhancement and Adjustment Network for Low-Light Image Enhancement." Tsinghua Science and Technology 28, no. 4 (2023): 743–53. http://dx.doi.org/10.26599/tst.2022.9010047.

34

Li, Xiang, Zeyu Li, Lirong Zhou, and Zhao Huang. "FOLD: Low-Level Image Enhancement for Low-Light Object Detection Based on FPGA MPSoC." Electronics 13, no. 1 (2024): 230. http://dx.doi.org/10.3390/electronics13010230.

Abstract:
Object detection has a wide range of applications as the most fundamental and challenging task in computer vision. However, the image quality problems such as low brightness, low contrast, and high noise in low-light scenes cause significant degradation of object detection performance. To address this, this paper focuses on object detection algorithms in low-light scenarios, carries out exploration and research from the aspects of low-light image enhancement and object detection, and proposes low-level image enhancement for low-light object detection based on the FPGA MPSoC method. On the one hand, the low-light dataset is expanded and the YOLOv3 object detection model is trained based on the low-order image enhancement technique, which improves the detection performance of the model in low-light scenarios; on the other hand, the model is deployed on the MPSoC board to achieve an edge object detection system, which improves the detection efficiency. Finally, validation experiments are conducted on the publicly available low-light object detection dataset and the ZU3EG-AXU3EGB MPSoC board, and the results show that the method in this paper can effectively improve the detection accuracy and efficiency.
35

Rohima, Rohima, Wanayumini Wanayumini, and Rika Rosnelly. "ANALISIS PENGARUH LOW-LIGHT IMAGE ENHANCEMENT PADA PENGENALAN WAJAH." CSRID (Computer Science Research and Its Development Journal) 13, no. 2 (2021): 118. http://dx.doi.org/10.22303/csrid.13.2.2021.118-129.

Abstract:
Face recognition systems are generally used in real time to identify individuals, which means that noise is unavoidable. One problem considered common is lighting conditions: when the light reaching an object is insufficient, images tend to have low visibility, reduced contrast, washed-out colors, and blurred details. Low-light image enhancement can be the solution. Many low-light image enhancement methods are available, but which technique is better for face recognition is still debated. To find a good low-light image enhancement method, this study designs several face recognition systems with PCA as the feature extractor, applying SSR, MSR, AMSR, Dong, HE, and BPDHE as low-light image enhancement methods. The SOF dataset was chosen as the test target because it contains images under different lighting conditions. The recognition rates of all the face recognition systems are then compared to find the best low-light image enhancement method. Based on the tests and analysis, the majority of systems showed improved recognition rates when a low-light image enhancement method was applied; as the best method, HE (76.28866%) showed the most significant result, followed by AMSR (75.25773%), MSR (74.2268%), SSR (69.07216%), BPDHE (67.01031%), and Dong (63.91753%).
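As a rough illustration of the best-performing method in that comparison, histogram equalization (HE) can be sketched in a few lines of plain Python. This is a minimal, library-free sketch on a flat list of gray values; production code would use something like OpenCV's `equalizeHist` on a 2-D array.

```python
def equalize_histogram(pixels, levels=256):
    # Count occurrences of each gray level.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function (CDF) of the gray levels.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # uniform image: nothing to equalize
        return list(pixels)
    # Standard HE mapping: stretch the CDF over the full gray range.
    # (Entries below cdf_min are never indexed, since those gray
    # levels do not occur in the input.)
    lut = [round((c - cdf_min) * (levels - 1) / (n - cdf_min)) for c in cdf]
    return [lut[p] for p in pixels]

dark = [10, 10, 12, 12, 14, 14, 16, 16]  # a dim 8-pixel "image"
print(equalize_histogram(dark))  # the values spread across [0, 255]
```

Its simplicity and lack of tunable parameters are plausibly part of why HE held up so well against the Retinex-family methods in this study.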
36

Rasheed, Muhammad Tahir, Guiyu Guo, Daming Shi, Hufsa Khan, and Xiaochun Cheng. "An Empirical Study on Retinex Methods for Low-Light Image Enhancement." Remote Sensing 14, no. 18 (2022): 4608. http://dx.doi.org/10.3390/rs14184608.

Abstract:
A key part of interpreting, visualizing, and monitoring the surface conditions of remote-sensing images is enhancing the quality of low-light images. It aims to produce higher contrast, noise-suppressed, and better quality images from the low-light version. Recently, Retinex theory-based enhancement methods have gained a lot of attention because of their robustness. In this study, Retinex-based low-light enhancement methods are compared to other state-of-the-art low-light enhancement methods to determine their generalization ability and computational costs. Different commonly used test datasets covering different content and lighting conditions are used to compare the robustness of Retinex-based methods and other low-light enhancement techniques. Different evaluation metrics are used to compare the results, and an average ranking system is suggested to rank the enhancement methods.
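The Retinex decomposition these methods build on can be sketched minimally in 1-D. Note the hedges: a box blur stands in for the Gaussian surround of SSR/MSR, and the signal is a plain list rather than a 2-D image.

```python
import math

def single_scale_retinex(signal, radius=1, eps=1e-6):
    # Estimate illumination L with a local average (a box blur stands
    # in for the Gaussian surround used by SSR/MSR), then recover the
    # reflectance in the log domain: log R = log I - log L.
    n = len(signal)
    illum = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        illum.append(sum(signal[lo:hi]) / (hi - lo))
    return [math.log(s + eps) - math.log(l + eps)
            for s, l in zip(signal, illum)]

# A bright spike on a flat background gets positive reflectance;
# the flat neighbours of the spike get slightly negative reflectance.
print(single_scale_retinex([1.0, 1.0, 10.0, 1.0, 1.0]))
```

Multi-scale variants (MSR) average this output over several surround radii, which is one axis along which the surveyed methods differ.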
37

Lv, Feifan, Yu Li, and Feng Lu. "Attention Guided Low-Light Image Enhancement with a Large Scale Low-Light Simulation Dataset." International Journal of Computer Vision 129, no. 7 (2021): 2175–93. http://dx.doi.org/10.1007/s11263-021-01466-8.

38

Tian, Zhen, Peixin Qu, Jielin Li, et al. "A Survey of Deep Learning-Based Low-Light Image Enhancement." Sensors 23, no. 18 (2023): 7763. http://dx.doi.org/10.3390/s23187763.

Abstract:
Images captured under poor lighting conditions often suffer from low brightness, low contrast, color distortion, and noise. The function of low-light image enhancement is to improve the visual effect of such images for subsequent processing. Recently, deep learning has been used more and more widely in image processing with the development of artificial intelligence technology, and we provide a comprehensive review of the field of low-light image enhancement in terms of network structure, training data, and evaluation metrics. In this paper, we systematically introduce low-light image enhancement based on deep learning in four aspects. First, we introduce the related methods of low-light image enhancement based on deep learning. We then describe the low-light image quality evaluation methods, organize the low-light image dataset, and finally compare and analyze the advantages and disadvantages of the related methods and give an outlook on the future development direction.
39

E., Okorie, Iloka B. C., Okoh C. C., and Ejikeme A. "Low Light Vision Enhancement Using the Hazing Algorithm." International Journal of Research 10, no. 8 (2023): 167–81. https://doi.org/10.5281/zenodo.8224041.

Abstract:
This journal publication presents a novel low light vision enhancement technique utilizing the Hazing algorithm. Low light conditions often pose significant challenges in various applications, such as surveillance, security, and outdoor imaging. The proposed technique aims to improve visibility and enhance the quality of low light images, thereby enabling better analysis and interpretation of visual information. The Hazing algorithm is a state-of-the-art method designed to address the limitations of traditional enhancement techniques in low light scenarios. It utilizes a combination of image dehazing and contrast enhancement algorithms to reduce haze, suppress noise, and enhance details. By leveraging the inherent characteristics of low light images, the Hazing algorithm effectively enhances image contrast, sharpness, and overall visual quality. The implementation of the technique involves a series of steps, including image acquisition under low light conditions, preprocessing to reduce noise and artifacts, application of the Hazing algorithm for enhancement, and post-processing to further refine the image quality. The proposed technique has been implemented and evaluated using a diverse set of low light images obtained from real-world scenarios. To assess the performance of the technique, several objective metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and visual quality assessment have been employed. The experimental results demonstrate that the Hazing algorithm effectively enhances low light images, yielding significant improvements in visibility and image quality. The proposed system offers a solution for processing and enhancing night vision images during criminal investigations. By utilizing the hazing algorithm and image processing techniques, the system improves visibility, enhances facial features, and facilitates accurate facial recognition. The implementation using MATLAB ensures efficient processing and analysis of low light images, thereby aiding law enforcement agencies in their efforts to investigate and solve criminal cases effectively.
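The link between dehazing and low-light enhancement that such algorithms exploit is that an inverted low-light image resembles a hazy one. A hedged sketch of that inversion trick follows; here the transmission `t` and atmospheric light `A` are assumed constants for brevity, whereas real dehazers estimate per-pixel transmission.

```python
def enhance_via_inversion(pixels, t=0.5, A=1.0):
    # Invert the low-light image: it now resembles a hazy image.
    inv = [1.0 - p for p in pixels]
    # Undo the haze model I = J*t + A*(1 - t)  =>  J = (I - A)/t + A,
    # with constant transmission t and atmospheric light A assumed
    # here for simplicity (real dehazers estimate t per pixel).
    dehazed = [(i - A) / t + A for i in inv]
    # Invert back and clip to the valid [0, 1] range.
    return [min(1.0, max(0.0, 1.0 - d)) for d in dehazed]

print(enhance_via_inversion([0.1, 0.3, 0.5]))  # dark pixels are lifted
```

With these toy constants the mapping is aggressive (mid-gray saturates), which is why practical systems add the contrast-tuning and post-processing stages described above.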
40

Yao, Zhuo. "Low-Light Image Enhancement and Target Detection Based on Deep Learning." Traitement du Signal 39, no. 4 (2022): 1213–20. http://dx.doi.org/10.18280/ts.390413.

Abstract:
Most computer vision applications demand input images that meet their specific requirements. To complete different vision tasks, e.g., object detection, object recognition, and object retrieval, low-light images must be enhanced by different methods to achieve different processing effects. Existing image enhancement methods based on non-physical imaging models, and image generation methods based on deep learning, are not ideal for low-light image processing. To solve this problem, this paper explores low-light image enhancement and target detection based on deep learning. Firstly, a simplified expression is constructed for the optical imaging model of low-light images, and a Haze-line is proposed for color correction, which can effectively enhance low-light images based on the global background light and medium transmission rate of the optical imaging model. Next, the network framework adopted by the proposed low-light image enhancement model is introduced in detail: the framework includes two deep domain adaptation modules that realize domain transformation and image enhancement, respectively, and the loss functions of the model are presented. To detect targets in the enhanced output, a joint enhancement and target detection method is proposed for low-light images. The effectiveness of the constructed model is demonstrated through experiments.
41

Ming, Feng, Zhihui Wei, and Jun Zhang. "Unsupervised Low-Light Image Enhancement in the Fourier Transform Domain." Applied Sciences 14, no. 1 (2023): 332. http://dx.doi.org/10.3390/app14010332.

Abstract:
Low-light image enhancement is an important task in computer vision, and deep learning-based approaches have made significant progress. However, current methods rely on a wide variety of paired low-light/normal-light images and tend to amplify noise while enhancing brightness. Based on the experimental observation that most luminance information concentrates in the amplitudes while noise is closely related to the phases, an unsupervised low-light image enhancement method in the Fourier transform domain is proposed. In this method, the low-light image is first decomposed into an amplitude component and a phase component via the Fourier transform; the luminance is enhanced by a CycleGAN in the amplitude domain, and the phase component is denoised. Cycle consistency losses in both the Fourier transform domain and the spatial domain are used in training. The proposed method has been validated on publicly available test sets and achieves superior results to other approaches in low-light image enhancement and noise suppression.
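The amplitude/phase split at the heart of this method can be illustrated with a toy 1-D transform. This is only a sketch: the paper operates on 2-D images with the FFT and learns a per-frequency mapping with CycleGAN, while a uniform gain on the amplitudes, as used below, reduces to a global brightness scaling.

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (fine for tiny toy signals).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    # Inverse DFT; the inputs here are (nearly) real, so keep .real.
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def brighten_in_fourier(signal, gain=2.0):
    # Split each frequency bin into amplitude and phase, scale only
    # the amplitudes (luminance), keep the phases (structure) intact.
    X = dft(signal)
    return idft([gain * abs(c) * cmath.exp(1j * cmath.phase(c)) for c in X])

print(brighten_in_fourier([0.1, 0.2, 0.3, 0.4]))
```

The value of the decomposition is that a learned, frequency-dependent amplitude mapping can brighten without touching the phases, which is where the noise-related structure lives.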
42

Vishnu, Choundur. "Low Light Image Enhancement using Convolutional Neural Network." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (2021): 3463–72. http://dx.doi.org/10.22214/ijraset.2021.35787.

Abstract:
High-quality images are essential for many applications. However, not every image has acceptable quality, since images are captured under varying lighting conditions. When an image is captured in low light, its pixel values fall into a low range, which visibly degrades image quality: the whole image appears dark, and it is difficult to recognize objects or surfaces clearly. It is therefore vital to improve the quality of low-light images. Low-light image enhancement is required in many computer vision tasks, such as object detection and scene understanding. Images captured in low light often suffer from low contrast and low brightness, which greatly increases the difficulty of subsequent high-level tasks. The proposed convolutional neural network framework for low-light image enhancement accepts dark images as input and produces bright images as output without disturbing the content of the image, making the captured scene easier to understand.
43

Garg, Atik, Xin-Wen Pan, and Lan-Rong Dung. "LiCENt: Low-Light Image Enhancement Using the Light Channel of HSL." IEEE Access 10 (2022): 33547–60. http://dx.doi.org/10.1109/access.2022.3161527.

44

Jeon, Jong Ju, and Il Kyu Eom. "Low-light image enhancement using inverted image normalized by atmospheric light." Signal Processing 196 (July 2022): 108523. http://dx.doi.org/10.1016/j.sigpro.2022.108523.

45

SHI, Jihao, Yuzhong ZHONG, Xiujuan ZHENG, and Songyi DIAN. "Low-light image enhancement algorithm based on light scattering attenuation model." Optics and Precision Engineering 31, no. 8 (2023): 1244–55. http://dx.doi.org/10.37188/ope.20233108.1244.

46

YAN, Guanghui, Baijing WU, and Long MA. "LightDiffu DCE: low light image enhancement based on light intensity diffusion." Optics and Precision Engineering 33, no. 7 (2025): 1114–29. https://doi.org/10.37188/ope.20253307.1114.

47

Li, Fei, Jiangbin Zheng, and Yuan‐fang Zhang. "Generative adversarial network for low‐light image enhancement." IET Image Processing 15, no. 7 (2021): 1542–52. http://dx.doi.org/10.1049/ipr2.12124.

48

Liang, Dong, Ling Li, Mingqiang Wei, et al. "Semantically Contrastive Learning for Low-Light Image Enhancement." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 1555–63. http://dx.doi.org/10.1609/aaai.v36i2.20046.

Abstract:
Low-light image enhancement (LLE) remains challenging due to the unfavorable prevailing low-contrast and weak-visibility problems of single RGB images. In this paper, we respond to the intriguing learning-related question -- can leveraging both accessible unpaired over-/underexposed images and high-level semantic guidance improve the performance of cutting-edge LLE models? Here, we propose an effective semantically contrastive learning paradigm for LLE (namely SCL-LLE). Beyond the existing LLE wisdom, it casts the image enhancement task as multi-task joint learning, where LLE is converted into three constraints of contrastive learning, semantic brightness consistency, and feature preservation for simultaneously ensuring exposure, texture, and color consistency. SCL-LLE allows the LLE model to learn from unpaired positives (normal-light) and negatives (over/underexposed), and enables it to interact with the scene semantics to regularize the image enhancement network; the interaction of high-level semantic knowledge and the low-level signal prior has seldom been investigated in previous methods. Training on readily available open data, extensive experiments demonstrate that our method surpasses state-of-the-art LLE models over six independent cross-scene datasets. Moreover, SCL-LLE's potential to benefit downstream semantic segmentation under extremely dark conditions is discussed. Source Code: https://github.com/LingLIx/SCL-LLE.
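The contrastive constraint described above has the familiar triplet structure, which a toy sketch can make concrete. To be clear about assumptions: plain lists stand in for deep features here, the function name is mine, and the paper's actual loss operates on VGG feature distances with additional semantic terms.

```python
def contrastive_enhancement_loss(anchor, positive, negative, margin=1.0):
    # Triplet-style margin loss: pull the enhanced image (anchor)
    # toward the normal-light positive, push it away from the
    # over/underexposed negative, up to a margin.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

# Loss is zero once the anchor is (margin-)closer to the positive.
print(contrastive_enhancement_loss([0.0, 0.0], [0.1, 0.0], [3.0, 4.0]))
```

Because positives and negatives need not be paired with the input scene, this kind of loss is what lets the model train on unpaired open data.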
49

Yu, Yabin. "Feature Fusion Network for Low-Light Image Enhancement." Journal of Physics: Conference Series 2010, no. 1 (2021): 012117. http://dx.doi.org/10.1088/1742-6596/2010/1/012117.

50

Lu, Yucheng, Dong-Wook Kim, and Seung-Won Jung. "DeepSelfie: Single-Shot Low-Light Enhancement for Selfies." IEEE Access 8 (2020): 121424–36. http://dx.doi.org/10.1109/access.2020.3006525.
