Enhancement of Low Light Videos Using Artificial Intelligence



https://doi.org/10.22214/ijraset.2023.52903
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 11 Issue V May 2023- Available at www.ijraset.com

Enhancement of Low Light Videos using Artificial Intelligence

Halaharvi Keerthi¹, Sreepathi B²
¹Assistant Professor, Dept. of CSE, Dayananda Sagar University, Bangalore, India
²Professor and Head of the Dept. of ISE, RYMEC, Bellary, India

Abstract: Enhancing night videos can be challenging due to low light conditions and the resulting noise and loss of detail. One
way to enhance night videos is to apply preprocessing followed by a combination of fusion and color enhancement techniques.
One popular fusion technique for night videos is called Exposure Fusion, which involves blending multiple images of different
exposure levels to create a final image that is well-lit and has good contrast. Color enhancement techniques can also be used to
improve the quality of night videos. This involves adjusting the color balance and saturation to create a more pleasing and
natural-looking image. Super resolution is a technique used to enhance the resolution of an image or video. It involves
generating a high-resolution image or video from a low-resolution input, often using deep learning-based approaches. In the
context of enhancing a night video, super resolution can be used to improve the visibility of details and reduce noise in low-light
conditions.
Keywords: Color enhancement, Deep learning, Exposure fusion, Pre-processing, Super resolution and Surveillance system.

I. INTRODUCTION
Enhancement of night video refers to the process of improving the visual quality of video footage captured in low-light or nighttime
conditions. This can involve various techniques and technologies, such as increasing the brightness and contrast of the video,
reducing noise and artifacts, adjusting the color balance, and enhancing the details and sharpness of the image. The need for
enhancement of night video arises because cameras and sensors often struggle to capture clear and detailed images in low-light
conditions. This can result in grainy, blurry, and hard-to-see footage that can be difficult to analyze or use for surveillance or other
purposes. By enhancing night video, it is possible to improve the visibility and clarity of important details, such as faces, license
plates, and other objects of interest. This can be especially useful for law enforcement, security, and surveillance applications, as
well as for entertainment and artistic purposes. Overall, the enhancement of night video is an important area of research and
development that can help to improve the quality and usefulness of video footage captured in low-light conditions.
The existing techniques of video enhancement can be classified into two main categories, namely spatial-domain and frequency-domain methods. Spatial-domain techniques operate on the image plane itself, through direct manipulation of the pixels in an image. Frequency-domain techniques modify the spatial frequency spectrum of the image as obtained by a transform. The main advantage of spatial-domain techniques is that they are conceptually simple to understand and their time complexity is low, which favors real-time implementations. However, these techniques generally fall short of adequate robustness and imperceptibility requirements.

II. LITERATURE SURVEY


Low-light image enhancement methods have been widely developed to obtain high-contrast images in the field of consumer imaging devices, and their performance has become an important quality measure of such devices. To enhance the contrast of a low-light image, customized algorithms have been demonstrated that produce high-quality enhanced image sequences. Color constancy approaches are also used to increase the overall luminance of the image. Although color constancy algorithms were originally developed to estimate the color of a light source by discarding the illuminant from the scene, they also improve the chromatic content. Some works have explored the use of color constancy algorithms for color image enhancement purposes [1]; one was oriented to local contrast enhancement using the White-Patch and Gray-World algorithms in combination with an automatic color equalization technique.
Most of the classical methods that have been used over decades for dark image enhancement are histogram-based; examples include histogram stretching, histogram equalization, brightness-preserving bi-histogram equalization, and contrast-limited adaptive histogram equalization [2], to mention a few.
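As a hedged illustration of the histogram-based enhancers listed above, the sketch below applies global histogram equalization and CLAHE [2] with OpenCV; working on the luminance channel of a YCrCb conversion is one common way to limit the color-correlation problem discussed next. The input file name is an assumption.

```python
import cv2

bgr = cv2.imread("dark_frame.png")                       # hypothetical input frame
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

y_he = cv2.equalizeHist(y)                               # global histogram equalization

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
y_clahe = clahe.apply(y)                                 # contrast-limited adaptive HE

enhanced = cv2.cvtColor(cv2.merge([y_clahe, cr, cb]), cv2.COLOR_YCrCb2BGR)
```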


However, the performance of such histogram-based algorithms is very limited with color images because these methods change the correlation between the color components of the original scene. More recently, several methods for improving contrast using a global mapping derived from feature analysis have been published, oriented especially toward video enhancement.
One traditional approach to dark image enhancement is to use monochromatic representations and ignore the color features.
However, color images have numerous benefits over monochromatic images for surveillance and security applications. Moreover, a
color representation may facilitate the recognition of night vision imagery and its interpretation [4]. Dark images, or images taken
under low light conditions, are problematic because of their narrow dynamic range. Under these conditions, a regular camera sensor
introduces significant amounts of noise, further reducing the information content in the image. Because of these limitations, dark
image enhancement algorithms occasionally produce artifacts in the processed images [6].
Image fusion is another approach used to enhance dark images. This technique increases the visual information in an image by
combining different bands or images into the RGB space. In image fusion, generally two monochromatic images from different
spectral bands are used. A near-infrared or a visible image is considered as the R component, and a thermal image is designated as
the G component [17]. This combination of bands is used to build a look-up table (LUT) to transfer colors to other images.
However, this scheme may produce false colors, which can diminish scene comprehension.
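As a minimal, hypothetical illustration of the band-to-channel assignment described above (the file names and the choice of blue channel are assumptions of this sketch, not taken from [17]):

```python
import cv2

nir = cv2.imread("nir_band.png", cv2.IMREAD_GRAYSCALE)          # hypothetical NIR image
thermal = cv2.imread("thermal_band.png", cv2.IMREAD_GRAYSCALE)  # hypothetical thermal image
thermal = cv2.resize(thermal, (nir.shape[1], nir.shape[0]))     # match sizes before merging

# NIR drives the R channel, thermal the G channel; the B channel here is just one
# simple choice (their absolute difference). OpenCV stores channels as B, G, R.
blue = cv2.absdiff(nir, thermal)
false_color = cv2.merge([blue, thermal, nir])
```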

III. ENHANCEMENT STEPS


Enhancement of an image is needed to identify the objects and activities of interest in videos captured under dim light. Enhancement methods are divided into direct and indirect methods. Direct methods improve image contrast by optimizing an objective contrast measure, while indirect methods exploit the dynamic range without using a contrast measure. The proposed enhancement algorithm consists of the following steps.

A. Preprocessing
De-noising is used as image pre-processing or post-processing to make the processed image clearer for subsequent image analysis and understanding. One of the fundamental challenges in image processing and computer vision is image de-noising, where the underlying goal is to estimate the original image by suppressing noise from a noise-contaminated version of it. The de-noising stage here includes taking the complement of the image; complemented low-light images have their own unique characteristics, so the direct application of generic de-noising methods is not ideal for image enhancement even though it has low computational complexity. To remove unwanted data that would otherwise lead to erroneous outputs, such as frames contaminated by distortion and noise, the widely used salt-and-pepper noise model and median filter are employed.
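A minimal sketch of this de-noising step, assuming synthetic salt-and-pepper corruption and a 3x3 median filter in OpenCV (the file name and noise density are illustrative):

```python
import cv2
import numpy as np

frame = cv2.imread("dark_frame.png")                 # hypothetical input frame
noisy = frame.copy()
mask = np.random.rand(*frame.shape[:2])
noisy[mask < 0.02] = 0                               # "pepper" pixels
noisy[mask > 0.98] = 255                             # "salt" pixels

denoised = cv2.medianBlur(noisy, 3)                  # 3x3 median filter suppresses the spikes
```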
Resizing the image is another preprocessing step. Not all images are the exact size we need them to be, so it is important to understand how resizing works and how to do it properly. When an image is resized, its pixel information is changed. For example, when an image is reduced in size, any unneeded pixel information is discarded by the photo editor. When an image is enlarged, the photo editor must create and add new pixel information based on its best guesses, which typically results in either a very pixelated or a very soft and blurry looking image. It is therefore much easier to downsize an image than to enlarge one. If an image is needed for high-quality or large-format prints, it should be captured at the highest resolution and quality possible because of the difficulty of enlarging.
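A small resizing sketch with OpenCV, reflecting the asymmetry described above: area interpolation when discarding pixels, cubic interpolation when new pixels must be invented for enlargement (target sizes are illustrative):

```python
import cv2

frame = cv2.imread("dark_frame.png")                                          # hypothetical input frame
h, w = frame.shape[:2]

smaller = cv2.resize(frame, (w // 2, h // 2), interpolation=cv2.INTER_AREA)   # downscale: discard pixels
larger = cv2.resize(frame, (w * 2, h * 2), interpolation=cv2.INTER_CUBIC)     # upscale: interpolate new pixels
```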

B. Fusion
Fusion techniques involve combining multiple images or videos of the same scene to create a composite image that has improved
contrast and detail. This is particularly effective in low-light conditions, where noise can obscure important details. The proposed method uses the exposure fusion technique. Exposure fusion and Mertens' fusion are both techniques used in High Dynamic Range (HDR)
imaging to combine multiple images of the same scene taken at different exposure levels into a single image with a wider range of
brightness and detail. Exposure Fusion is a technique developed by Tom Mertens, Jan Kautz, and Frank Van Reeth, which combines
multiple exposures of an image by selecting the best pixel value for each location from the different input images. This technique
produces a natural-looking image by considering the contrast and saturation of the different exposures. Mertens' Fusion is a similar
technique that was also developed by Tom Mertens, which uses a weighted average of the input images to create a single image with
a high dynamic range. This technique produces a more artistic and surreal image by amplifying the contrast between the different
exposures. Both techniques have their advantages and disadvantages, and the choice of which to use will depend on the specific
requirements of the image being processed. Exposure Fusion is often used for creating natural-looking images with a wide dynamic
range, while Mertens' Fusion is used for creating more dramatic and artistic images with enhanced contrast and saturation.
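A hedged sketch of exposure fusion using OpenCV's MergeMertens implementation of the Mertens-Kautz-Van Reeth weighting (contrast, saturation, well-exposedness); the bracketed file names are assumptions:

```python
import cv2
import numpy as np

# Hypothetical bracketed frames of the same scene at different exposure levels.
exposures = [cv2.imread(p) for p in ("under.png", "normal.png", "over.png")]

merge_mertens = cv2.createMergeMertens()      # default contrast/saturation/exposedness weights
fused = merge_mertens.process(exposures)      # float32 output, roughly in [0, 1]
fused_8bit = np.clip(fused * 255, 0, 255).astype(np.uint8)
```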


C. Color Enhancement
Color enhancement methods are based on Retinex methods. Histogram equalization methods use a distribution function based on the probability density function to effectively enhance the dark regions of the image; they use the mean brightness of the image while enhancing its contrast, but the enhanced image can suffer color distortion and a loss of color fidelity. Unlike histogram equalization methods, Retinex methods replace the current pixel values with the mean of the surrounding pixels. Many effective image enhancement methods have been proposed based on the Retinex method, including the Single Scale Retinex (SSR), Multiscale Retinex (MSR), and Multiscale Retinex with Color Restoration (MSRCR) models. The resulting enhanced image using these methods might be too bright, producing halo effects. The proposed methodology therefore uses super resolution for color enhancement. Super resolution refers to the process of improving the resolution of a low-resolution image to a higher resolution.
There are two main types of super-resolution techniques:
1) Single-image super-resolution: This technique involves using a single low-resolution image to create a higher resolution image.
This is typically achieved by using algorithms that extrapolate the missing information and create new pixels in a way that is
consistent with the overall image structure.

2) Multi-image super-resolution: This technique involves using multiple low-resolution images of the same scene to create a
higher resolution image. This is typically achieved by using algorithms that align and merge the low-resolution images to create
a single high-resolution image.

Super-resolution techniques are widely used in various fields such as medical imaging, remote sensing, and video processing. These
techniques can help improve the accuracy of image analysis, enhance image quality, and enable applications that require high-
resolution images. However, it's important to note that super-resolution techniques are not able to add new information to an image
that does not exist in the original low-resolution image.
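A single-image super-resolution sketch using OpenCV's dnn_superres module; it assumes opencv-contrib-python is installed and that a pretrained EDSR model file is available locally (the model path is an assumption, not part of this work):

```python
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")                    # hypothetical path to a pretrained EDSR model
sr.setModel("edsr", 4)                        # model name and 4x upscaling factor

low_res = cv2.imread("dark_frame.png")        # hypothetical low-resolution input
high_res = sr.upsample(low_res)               # deep-learning-based upscaling
```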

IV. METHODOLOGY
The proposed conceptual framework adheres to novel techniques that improve the visual appearance of the objects in the video. Noise is attenuated by the proposed framework, and the result preserves the finest details in the image. It is important to note that enhancing night videos using these techniques can be time-consuming and computationally intensive, so a powerful computer or cloud-based processing service may be necessary. Additionally, the quality of the final image will depend on the quality of the original videos and on the effectiveness of the fusion and color enhancement techniques used. The flow chart of the proposed methodology is shown in Figure 1.

INPUT: Dark video -> Preprocess video -> Exposure Fusion -> Super Resolution -> OUTPUT: Enhanced video

Fig.1. Flow chart for Enhancement Methodology

The first step in the proposed method is to preprocess the input dark video so that subsequent stages work with noise-free data. The second step is to apply fusion to the preprocessed images. Fusion is the process of combining multiple images into a single image. Image fusion for enhancement can be based on a single image or on multiple images. Fusion based on a single image applies different enhancement methods to one image and combines the resulting images for a better outcome. Enhancement of dark images can also use a day reference image as an input and combine it with the enhanced dark image to improve performance. An exposure fusion algorithm is then used to blend the frames into a single, well-lit image; this technique combines multiple exposures of an image by selecting the best pixel value for each location from the different input images. Exposure fusion generates a tone-mapped-like HDR image directly by fusing a series of bracketed images. The last step is to apply color enhancement techniques, such as super resolution, to improve the color balance and saturation of the image, adjusting any parameters as needed to achieve the desired result. Enhancing a dark video using super resolution can be challenging, as it requires both enhancing the brightness of the video and increasing its resolution.


Super resolution refers to the process of upscaling a low-resolution video to a higher resolution while preserving as much detail as
possible. It's important to note that while super resolution can improve the resolution of a video, it cannot create new details that
were not present in the original video. Additionally, enhancing a dark video can result in increased noise or artifacts in the final
output, so it's important to balance the amount of enhancement applied to avoid these issues.
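The sketch below strings the three stages of Fig. 1 together for a video file, as one possible reading of the method: since a single dark frame has no true exposure bracket, pseudo-exposures are simulated here by gamma adjustment, which is an assumption rather than something the paper specifies; the EDSR model path, file names, and frame rate are likewise illustrative.

```python
import cv2
import numpy as np

def simulate_exposures(frame, gammas=(0.4, 0.7, 1.0)):
    """Create pseudo-bracketed exposures from one dark frame (an assumption of this sketch)."""
    f = frame.astype(np.float32) / 255.0
    return [np.uint8(255 * np.power(f, g)) for g in gammas]

def enhance_frame(frame, sr_model, merger):
    frame = cv2.medianBlur(frame, 3)                        # preprocessing: de-noise
    fused = merger.process(simulate_exposures(frame))       # exposure fusion
    fused = np.clip(fused * 255, 0, 255).astype(np.uint8)
    return sr_model.upsample(fused)                         # super resolution

merger = cv2.createMergeMertens()
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")                                  # hypothetical pretrained model
sr.setModel("edsr", 4)

cap = cv2.VideoCapture("dark_video.mp4")                    # hypothetical input video
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out = enhance_frame(frame, sr, merger)
    if writer is None:
        h, w = out.shape[:2]
        writer = cv2.VideoWriter("enhanced.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 25, (w, h))
    writer.write(out)

cap.release()
if writer is not None:
    writer.release()
```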

V. RESULTS
The proposed method provided significantly enhanced results without flickering, color distortion, or saturation problems using a cost-effective implementation. It can provide better results with a simple framework that is easy to implement, as shown in the experimental results. This section demonstrates the performance of the proposed method. The proposed algorithm is implemented in Python. Preprocessing is the first step of the implementation, because clear and noise-free data plays a vital role in any research work. Data acquired from internet sources or from pre-captured footage often contains noise, dimension problems, visualization problems, and so on; these problems affect the feature extraction, training, verification, and analysis phases. The algorithm was tested on four video data sets. The following images show the experimental results, with the original image on the left side and the enhanced image on the right side, as shown in Figure 2. In future work, the idea is to use a day image as a reference image to diminish the production of unnatural colors.

Original Image Enhanced Image


Fig.2. Comparison of enhanced image with original dark image

VI. CONCLUSION
Different types of methods and technologies have been used for dark video enhancement, but low contrast and noise remain a barrier to visually pleasing videos in low-light conditions. To achieve more accuracy in the video enhancement process under such conditions, the intensity level of each pixel channel must be detected and measured and an appropriate enhancement factor applied, so an effective and efficient video enhancement process has been proposed: preprocess the input dark video and then apply the exposure fusion and super resolution techniques for better results. In future work, the 3D video enhancement process will measure the intensity level of individual pixel channels, decide the best enhancement factor, which might be random or constant depending on the requirements of the video enhancement algorithm, and measure the performance parameters.

REFERENCES
[1] E. Provenzi, C. Gatta, M. Fierro, and A. Rizzi, “A spatially variant white-patch and gray-world method for color image enhancement driven by local contrast,”
IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. 30, no. 10, pp. 1757–1770, 2008.
[2] T. Arici, S. Dikbas, and Y. Altunbasak, “A histogram modification framework and its application for image contrast enhancement,” IEEE Transactions on
image processing, vol. 18, no. 9, pp. 1921–1935, 2009.
[3] Jing Wu, Ziwu Wang, and Zhixia Fang, “Application of Retinex in Color Restoration of Image Enhancement to Night Image,” 978-1-42444131-0/09, ©2009 IEEE.
[4] M. A. Hogervorst and A. Toet, “Fast natural color mapping for nighttime imagery,” Information Fusion, vol. 11, no. 2, pp. 69–77, 2010.
[5] Y. HaCohen, E. Shechtman, D. B. Goldman, and D. Lischinski, “Nonrigid dense correspondence with applications for image enhancement,” in ACM
Transactions on Graphics (TOG), vol. 30, no. 4, 2011, pp. 70:1–70:9.
[6] A. R. Rivera, B. Ryu, and O. Chae, “Content-aware dark image enhancement through channel division,” IEEE Transactions on Image Processing, vol. 21, no.
9, pp. 3967–3980, 2012.
[7] Huiyuan Fu, Huadong Ma, and Shixin Wu, “Night Removal by Color Estimation and Sparse Representation,” 21st International Conference on Pattern Recognition (ICPR 2012), November 11-15, 2012, Tsukuba, Japan.


[8] Xuesong Jiang, Hongxun Yao, Shengping Zhang, Xiusheng Lu, and Wei Zeng, “Night Video Enhancement Using Improved Dark Channel Prior,” 978-1-4799-2341-0/13, ©2013 IEEE.
[9] Abdullah Nazib, Chi-Min Oh and Chil Woo Lee, “Object Detection and Tracking in Night Time Video Surveillance”, 2013 10th International Conference on
Ubiquitous Robots and Ambient Intelligence (URAI) October 31-November 2, 2013 / Ramada Plaza Jeju Hotel, Jeju, Korea
[10] R. Vijayarajan and S. Muttan, “Fuzzy C-Means Clustering Based Principal Component Averaging Fusion”, International Journal of Fuzzy Systems, Vol.
16, No. 2, June 2014
[11] Cheng-Chang Lien, Wen-Kai Yu, Chang-Hsing Lee, and Chin-Chuan Han, “Night Video Surveillance Based on the Second-Order Statistics Features,” 2014 Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 978-1-4799-5390-5/14, ©2014 IEEE, DOI 10.1109/IIH-MSP.2014.94.
[12] Q. Xu, H. Jiang, R. Scopigno, and M. Sbert, “A novel approach for enhancing very dark image sequences,” DOI 10.1016/j.sigpro.2014.02.013, 2014.
[13] Anjali A. Dhanve and Gyankamal J. Chhajed, “Review on Color Transfer Between Images,” International Journal of Engineering Research and General Science, vol. 2, no. 6, October 2014, ISSN 2091-2730.
[14] Soumya T and Sabu M Thampi, “Day Color Transfer Based Night Video Enhancement for Surveillance System,” 978-1-4799-1823-2/15, ©2015 IEEE.
[15] Soumya T and Sabu M Thampi, “Night time visual refinement techniques for surveillance video: a review,” DOI 10.1007/s11042-019-07944-z, 2019.
[16] R. Debnath, Anu Singha, B. Saha, and M. K. Bhowmik, “A comparative study of background segmentation approaches in detection of person with gun under adverse weather conditions,” DOI 10.1109/ICCCNT49239.2020.9225409, 2020.
[17] J. Li, Yuanxi Peng, Minghui Song, and L. Lui, “Image fusion based on guided filter and online robust dictionary learning,” DOI 10.1016/j.infrared.2019.103171, 2020.
[18] S. Jayaraman, S. Esakkirajan, and T. Veerakumar, “Digital Image Processing,” http://www.mhhe.com/jayaraman/dip.

