
ISPRS Journal of Photogrammetry and Remote Sensing 131 (2017) 160–169


A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors
Bin-Lin Hu a,b, Shi-Jing Hao a, De-Xin Sun a,b,c, Yin-Nian Liu a,b,c,⇑

a Key Laboratory of Infrared System Detection and Imaging Technologies, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
b University of Chinese Academy of Sciences, Beijing 100049, China
c Qidong Optoelectronic Remote Sensing Center, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Qidong 226200, China

ARTICLE INFO

Article history:
Received 29 March 2017
Received in revised form 5 July 2017
Accepted 11 August 2017
Available online 17 August 2017

Keywords:
Hyperspectral
SWIR
Non-uniformity correction
Scene-based
Remote sensing

ABSTRACT

A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. The method relies on the assumption that, for each band, ground objects with similar reflectance will form uniform regions once a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from an airborne experiment are used to verify and evaluate the proposed method. The results show that stripes in the scenes are well corrected without any significant information loss, and the non-uniformity is less than 0.5%. In addition, the proposed method is compared with two other commonly used methods in terms of adaptability to various scenes, non-uniformity, roughness and spectral fidelity; the proposed method shows strong adaptability, high accuracy and high efficiency.

© 2017 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

1. Introduction

Hyperspectral imaging collects a wealth of geometric and radiometric information as well as abundant spectral information at the same time, which greatly improves our ability to understand a large number of environmental processes and has therefore attracted growing interest worldwide. Hyperspectral data are mainly obtained from airborne and spaceborne instruments. Typical airborne instruments include AVIRIS (Vane et al., 1993), OMIS (Liu et al., 2002), HyMap (Cocks et al., 1998) and APEX (Itten et al., 2008), and typical spaceborne instruments are Hyperion (Folkman et al., 2001) and CHRIS (Barnsley et al., 2004), which have provided large amounts of hyperspectral data over the last few decades. Hyperion, launched in 2000, resolves 220 spectral bands from 0.4 to 2.5 μm with a 30 m ground resolution and a 7.5 km swath width. With hundreds of spectral bands and fine characteristic spectral signatures (Marshall and Thenkabail, 2015), the data rate of Hyperion is about 160 Mbps. Still, the forthcoming hyperspectral satellite missions, such as EnMAP (Guanter et al., 2015), PRISMA (Stefano et al., 2013) and HyspIRI (Green et al., 2008), will unleash an unprecedented data stream for a variety of ecological and agricultural applications (Suarea et al., 2016; Lu et al., 2009), greatly increasing the benefits for land utilization, geology, atmospheric monitoring, and the assessment of inland waters and coastal zones. Compared to Hyperion, EnMAP resolves 228 spectral bands from 0.42 to 2.45 μm with a 30 m ground resolution and a 30 km swath width. With the increase in swath width, the data rate of EnMAP is up to 866 Mbps, and the daily volume (5000 km swath length/day) reaches approximately 650 Gbit uncompressed. As the performance of hyperspectral imagers improves further in spectral range, spectral resolution, spatial resolution and swath width, the data-processing burden will grow even more rapidly.

Infrared focal plane arrays (IRFPA) are widely used in hyperspectral imaging. However, due to the inhomogeneity of individual pixels, IRFPAs have a common non-uniformity problem (Mouzali et al., 2015; Arslan et al., 2015). Consequently, all of these hyperspectral instruments suffer from non-uniformity in the infrared bands, which is a major limitation of imaging performance. In push-broom hyperspectral sensors, the two dimensions of the plane array are used as the spectral and spatial dimensions, respectively, and a long, narrow slit defines the strip of area imaged along the cross-track direction. Unlike an aerial digital frame camera that captures an entire scene nearly instantaneously, a push-broom hyperspectral sensor captures a single cross-track line at a time for all bands, while the other spatial dimension, along the flight direction, is obtained through the motion of the aircraft, as illustrated in Fig. 1. Therefore, non-uniformities among cross-track pixels cause along-track striping in the image, which seriously deteriorates image quality. To facilitate subsequent processing of hyperspectral data, and considering the challenge posed by the huge data volume, an accurate, robust and fast NUC method is in high demand (Ratliff et al., 2003; Naratanan et al., 2005).

⇑ Corresponding author at: Key Laboratory of Infrared System Detection and Imaging Technologies, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China. E-mail address: [email protected] (Y.-N. Liu).

http://dx.doi.org/10.1016/j.isprsjprs.2017.08.004

In recent years, many studies have addressed the NUC problem for infrared imagery (Vera et al., 2011; Liu et al., 2016). Traditionally, NUC was performed with a calibration method using a uniform, diffuse light source. Calibration methods include the polynomial fitting method (Li et al., 2005; Huang and An, 2011), the cubic spline interpolation method (Li et al., 2009), and their variations (Zhu and Liu, 2007). Although the calibration method has low computational complexity and is straightforward to implement, it is easily affected by changes in optical efficiency, detector response characteristics and circuit performance, which makes NUC coefficients calculated through laboratory calibration unsuitable for direct use. Scene-based NUC methods, on the other hand, can update the NUC coefficients directly from a scene and track the drift of a sensor's sensitivity. Due to this flexible adaptability, the approach is broadly used in infrared applications. It mainly includes the filtering method (Torres and Hayat, 2003; Rakwatin et al., 2007; Shi et al., 2008), the neural network algorithm (Scribner et al., 1991; Fan et al., 2010), and the constant statistics method (Harris and Chiang, 1997; Leathers et al., 2005; Torres et al., 2003a,b). However, the filtering method and the neural network algorithm are commonly used for infrared imagery without spectroscopic imaging, and are not appropriate for direct use with SWIR push-broom hyperspectral sensors.

Specifically, there are two typical scene-based NUC methods for sufficiently linear push-broom hyperspectral sensors. One is the commonly used two-point method (Wang et al., 2003). It assumes that the average spectral response of each cross-track pixel within the same uniform region should be approximately equal to that of the other pixels in the region. The two-point method requires visual interpretation to locate relatively bright and dark uniform regions in the image, which correspond to homogeneous ground objects with similar spectral reflectance. The uniform regions must cover all cross-track pixels (or a number of small uniform areas must be selected to cover the entire cross-track extent) in order to generate a NUC for the entire scene, making the two-point method unsuitable for fragmented and heterogeneous scenes where the uniform areas are small and scattered. Another NUC method is the median spatial ratio (MSR) method (Leathers et al., 2005). This method assumes that the median over the ratios of the scene irradiation of neighboring samples is approximately unity, i.e.

median(I_{s+1,b} / I_{s,b}) ≈ 1        (1)

where I_{s+1,b} = v_{s+1,b} · (DN_{s+1,b} − d_{s+1,b}) and I_{s,b} = v_{s,b} · (DN_{s,b} − d_{s,b}) are the scene irradiation for samples s + 1 and s at band b, respectively. DN_{s+1,b} and DN_{s,b} are the output digital values, and the corresponding NUC coefficients (v_{s+1,b}, d_{s+1,b}, v_{s,b}, d_{s,b}) can be solved by minimizing the function

f_{s,b} = | 1 − median[ v_{s+1,b} (DN_{s+1,b} − d_{s+1,b}) / ( v_{s,b} (DN_{s,b} − d_{s,b}) ) ] |        (2)

MSR can be used in fragmented and heterogeneous scenes. However, the corresponding NUC coefficients can only be solved by numerical optimization, which is quite time consuming. Sometimes MSR even shows limited effectiveness, as described in more detail in Section 4. Therefore, a truly practical method for the correction of SWIR push-broom hyperspectral data is necessary.

In this study, we propose a novel scene-based NUC method, which overcomes the two-point method's poor adaptability to various ground scenes and its dependence on human identification to locate relatively bright and dark uniform regions in the image. It can provide reliable and timely results for large amounts of hyperspectral data, whereas MSR is time consuming and may not provide satisfying correction results. In order to evaluate the performance of the proposed method, an airborne experiment was conducted and the collected SWIR data were used for evaluation. Table 1 shows the main parameters of the airborne compact hyperspectral imager (ACHI). The manuscript is organized as follows. We illustrate the details of our correction method in Section 2. In Section 3, we describe the airborne experimental data. In Section 4, we give a full qualitative and quantitative evaluation of our method, comparing it with the two-point method and the MSR method. Finally, we give the corresponding conclusions in Section 5.
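As a purely illustrative sketch (not part of the original method descriptions), the following Python snippet shows a simplified, gain-only reading of the MSR assumption in Eqs. (1) and (2): if the offsets d are assumed to be already subtracted, the chained per-sample solution becomes analytic, which also makes the cross-track "drift" discussed in Section 4 easy to reproduce. All function and variable names here are illustrative.

import numpy as np

def msr_gains_gain_only(band_img, eps=1e-6):
    """Simplified, gain-only MSR sketch (offsets assumed already removed).

    band_img : 2-D DN array for one band, shape (lines, samples).
    Returns one multiplicative gain per cross-track sample, chained from
    sample 0, so that median(v[s+1]*DN[:, s+1] / (v[s]*DN[:, s])) ~ 1.
    """
    n_lines, n_samples = band_img.shape
    v = np.ones(n_samples)
    for s in range(n_samples - 1):
        ratio = np.median(band_img[:, s + 1] / (band_img[:, s] + eps))
        # Analytic minimizer of |1 - (v[s+1]/v[s]) * ratio| with gains only:
        v[s + 1] = v[s] / max(ratio, eps)
    return v

# Usage sketch on one band of a (lines, samples) image:
# band_corrected = band * msr_gains_gain_only(band)[np.newaxis, :]

Because each gain depends on the previous one, a noisy ratio early in the chain propagates across the whole track; the full MSR method additionally estimates the offsets by numerically minimizing Eq. (2) for each sample pair.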

Fig. 1. Diagram of the imaging principle of push-broom hyperspectral sensors.

Table 1
Parameters of ACHI.
  Detector type: InGaAs FPA
  Spectral range: 0.9–1.7 μm
  Spectral resolution: 4.8 nm
  Spectral bands: 180
  Spatial resolution @ 1 km: 0.74 m
  Spatial pixels: 320
  F/#: 2.0
  Frame rate: 50 Hz
  Detector cooling: TE1

2. Methods

For sufficiently linear sensors, the DN response and the input irradiation follow a linear relationship, which is given by (Torres et al., 2003a,b)

DN(i, j) = g(i, j) · I(i, j) + d(i, j)        (3)

where I(i, j) is the input irradiation, DN(i, j) is the output digital value, g(i, j) is the gain factor and d(i, j) is the dark current signal.
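For illustration only, the short Python sketch below simulates the sensor model of Eq. (3) for one band: each cross-track pixel is given its own gain and dark current, so even a perfectly uniform input irradiation produces along-track stripes. The array size matches Table 1 and the numerical values of the gain and offset spread are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)

K, L = 320, 1000                              # cross-track samples, scanning lines
g = 1.0 + 0.05 * rng.standard_normal(K)       # per-pixel gain g(i, j) for one band
d = 20.0 + 5.0 * rng.standard_normal(K)       # per-pixel dark current d(i, j)

I = np.full((L, K), 1000.0)                   # perfectly uniform input irradiation
DN = g[np.newaxis, :] * I + d[np.newaxis, :]  # Eq. (3): raw image with along-track stripes

print("column-to-column spread of a uniform scene: %.1f DN" % DN.mean(axis=0).std())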

In practice, g(i, j) and d(i, j) differ from pixel to pixel because of the manufacturing process, which is the major cause of the non-uniformity of an IRFPA. The NUC eliminates the non-uniformity by seeking a single multiplicative gain coefficient a(i, j) and an offset coefficient b(i, j) for each pixel, and the corrected value is then given by

DN_NUC(i, j) = a(i, j) · DN(i, j) + b(i, j)        (4)

According to Eq. (4), only two data sets, consisting of relatively dark and relatively bright uniform radiance, are required to calculate the gain and offset coefficients of each pixel.

Push-broom hyperspectral sensors capture a single cross-track line at a time for all bands, while the other spatial dimension, in the along-track direction, is obtained through the motion of the aircraft. Here, the cross-track pixels and the wavelength channels are referred to as samples and bands, respectively. The number of samples is denoted as K, and the number of bands as N. K and N are limited by the dimensions of the IRFPA, whereas the number of lines in the image cube corresponds to the number of scanning lines during the flight.

Assume that we have identified two uniform regions in the image for band j, as shown in Fig. 2. The dark uniform region and the bright uniform region both cover K samples and include L_1 lines and L_2 lines, respectively. The average DN values of the two uniform regions in the image of band j are P_1(j) and P_2(j), j = 1, 2, ..., N:

P_1(j) = (1 / (K L_1)) Σ_{i=1}^{K} Σ_{l=1}^{L_1} DN(i, l)        (5)

P_2(j) = (1 / (K L_2)) Σ_{i=1}^{K} Σ_{l=1}^{L_2} DN(i, l)        (6)

The average DN values over the lines in the along-track direction for the two uniform regions are Q_1(i, j) and Q_2(i, j), respectively:

Q_1(i, j) = (1 / L_1) Σ_{l=1}^{L_1} DN(i, l)        (7)

Q_2(i, j) = (1 / L_2) Σ_{l=1}^{L_2} DN(i, l)        (8)

where i = 1, 2, ..., K and j = 1, 2, ..., N. Then, the NUC coefficients of each pixel can be calculated from the following equations:

a(i, j) · Q_1(i, j) + b(i, j) = P_1(j)
a(i, j) · Q_2(i, j) + b(i, j) = P_2(j)        (9)

Fig. 2. Diagram of two uniform regions.

The critical step of the proposed method is the automatic extraction of uniform regions. For one specific band, extracting a uniform region is essentially finding ground objects with similar reflectance. For ACHI, the time required to collect 1000 scanning lines is less than half a minute. In such a short time, the illumination condition and the response characteristic of each pixel are stable. Since the output DN values are positively and linearly related to the irradiation reflected by the ground objects, sorting the acquired DN values is equivalent to sorting the reflectance of the ground objects. When a sufficient number of scanning lines are collected, and after sorting the DN values over the scanning lines of each sample in ascending order, several tens of the highest or lowest DN values should correspond to sets of ground objects with similar reflectance. For different bands, the extracted uniform regions can be different, but they are required to be homogeneous across the track, which is guaranteed by a sufficient number of scanning lines. The proposed method combines the response characteristic of the sensor with the statistical characteristics of the ground objects over a sufficient number of scanning lines, so the NUC process is not limited to specific scenes.

The implementation process of the proposed method is shown in Fig. 3. Firstly, for the image of band j (j = 1, 2, ..., N), the DNs over the scanning lines of each sample are sorted in ascending order. Secondly, in order to reduce the influence of singular DNs (such as saturated responses and bad pixels), both the top and the bottom 5–10% of the DNs are discarded, and the remaining 80–90% of the DNs are kept as the valid data of each sample. Thirdly, for each sample, 30–50 DNs at the bottom end are taken to form a relatively dark uniform region, and another 30–50 DNs at the top end are taken to form a relatively bright uniform region. Fourthly, P_1(j), P_2(j), Q_1(i, j) and Q_2(i, j) (i = 1, 2, ..., K) are calculated. Finally, the NUC coefficients are obtained by solving Eq. (9). These steps are repeated for each band of the hyperspectral data cube until the NUC process is complete.

Fig. 3. The flow chart of the proposed method.

The complexity of the proposed method is analyzed as follows. Assume the number of scanning lines collected during the flight is L. Firstly, sorting the DNs over the scanning lines of one sample is O(L log2 L), so the complexity is O(N K L log2 L) for all K samples and N bands. Secondly, the complexity of Eqs. (5)–(9) is O(NK). Thirdly, the complexity of Eq. (4) is O(NKL). Since NK < NKL < NKL log2 L, the time complexity of the proposed method is about O(N K L log2 L).
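To make the procedure in Fig. 3 concrete, the following Python sketch outlines one possible implementation of the automatic extraction of uniform regions and the solution of Eq. (9) for a single band. It is an illustrative sketch under explicitly chosen parameters (the 5th–95th percentiles and 40 DNs per region), not the authors' code; function and variable names are ours.

import numpy as np

def nuc_coefficients(band_img, n_region=40, lo_pct=5, hi_pct=95):
    """Scene-based NUC sketch for one band.

    band_img : 2-D DN array, shape (lines, samples) = (L, K).
    Returns per-sample gain a and offset b such that
    a * DN + b is the corrected value (Eq. (4)).
    """
    sorted_dn = np.sort(band_img, axis=0)        # step 1: sort DNs per sample
    n_lines = sorted_dn.shape[0]
    lo = int(n_lines * lo_pct / 100)             # step 2: discard extreme DNs
    hi = int(n_lines * hi_pct / 100)
    valid = sorted_dn[lo:hi, :]

    dark = valid[:n_region, :]                   # step 3: dark uniform region
    bright = valid[-n_region:, :]                #         and bright uniform region

    Q1 = dark.mean(axis=0)                       # step 4: Eqs. (5)-(8)
    Q2 = bright.mean(axis=0)
    P1 = dark.mean()
    P2 = bright.mean()

    a = (P2 - P1) / (Q2 - Q1)                    # step 5: solve Eq. (9)
    b = P1 - a * Q1
    return a, b

# Usage sketch for a (lines, samples, bands) data cube named `cube`:
# for j in range(cube.shape[2]):
#     a, b = nuc_coefficients(cube[:, :, j])
#     cube[:, :, j] = a[np.newaxis, :] * cube[:, :, j] + b[np.newaxis, :]

Under a two-point correction with manually selected dark and bright regions, the same Eq. (9) solve applies; only the way the two regions are obtained differs.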
3. Experimental data

In order to investigate the effectiveness of the proposed method, the SWIR hyperspectral data sets collected by ACHI (Table 1) during an airborne experiment in 2015 were analyzed; they include various ground scenes. We chose three representative scenes. The first scene is a simple one that can be corrected directly by the two-point method, while the other two scenes are trickier and cannot be corrected by the two-point method.

Scene 1 consists of 1000 scanning lines, as shown in Fig. 4; horizontal stripes are noticeable across the image. The main ground objects are open water and bare land, and we can visually identify the dark and bright uniform regions, as illustrated in the figure (red outline), where the bright uniform region is spliced from two parts. Scene 2 consists of 1000 scanning lines, as shown in Fig. 5; the main coverage is open water and land covers. Although we can visually identify the dark uniform region (open water), the bright uniform region is difficult to find, making the two-point method not applicable. Scene 3 consists of 700 scanning lines, as shown in Fig. 6; the main coverage is plants and buildings in an urban area. Compared to scene 1 and scene 2, this scene is much more complex, and the dark and bright uniform regions are both difficult to identify. Therefore, the two-point method is again not applicable.

Fig. 4. False-color RGB image of scene 1. (R: 1565 nm, G: 1280 nm, B: 990 nm.)

Fig. 5. False-color RGB image of scene 2. (R: 1565 nm, G: 1340 nm, B: 950 nm.)

Fig. 6. False-color RGB image of scene 3. (R: 1545 nm, G: 1325 nm, B: 970 nm.)

4. Experimental results and discussion

4.1. Methods' adaptability to scene complexity

For scene 1, the two-point method and the proposed method are used for NUC, and the results are shown in Fig. 7(a) and (b), respectively. As the two figures show, the stripes have been well removed. It is worth noting that even the open water body has been visually corrected. Since the reflectivity of water is quite low in the SWIR band, the detector is likely to enter its nonlinear response zone; therefore, it is usually not easy to eliminate the stripes over open water.

Fig. 7. NUC results of scene 1 from (a) the two-point method and (b) the proposed method.

Fig. 8. NUC results of scene 2 from (a) the MSR method and (b) the proposed method.

For scene 2, the MSR method and the proposed method are used for NUC, and the results are shown in Fig. 8(a) and (b), respectively. Although the two-point method is not applicable here, both the MSR method and the proposed method provide decent NUC results. However, when the two NUC images are displayed with a linear stretch of the colorscale to highlight the water, as the insets in Fig. 8(a) and (b) show, the open water corrected by the MSR method still suffers from obvious horizontal stripes. In comparison, our proposed method offers a satisfactory result.

For scene 3, the MSR method and the proposed method are used for NUC, and the results are shown in Fig. 9(a) and (b), respectively. For this much more complex scene, the proposed method still provides a reasonable result, while the MSR method leaves an obvious brightness gradient from one side to the other in the cross-track direction. When the MSR method starts to solve the NUC coefficients, the samples are solved sequentially. Therefore, when there are noisy ratio values, v_{s,b} in Eq. (2) slowly "drifts" from one side to the other across the track. In comparison, the proposed method solves all the NUC coefficients at one time, so it does not suffer from the "drift" problem.

Fig. 9. NUC results of scene 3 from (a) the MSR method, which presents a brightness gradient, and (b) the proposed method, which shows no obvious brightness gradient.

4.2. Methods' adaptability to signal intensity

In order to evaluate the methods' adaptability to weak-signal spectral bands, the NUC results from the two-point method, the MSR method and the proposed method on the water absorption band (1450 nm) are compared.

Fig. 10. Comparison of the NUC results of scene 1. (a) Before correction, (b) two-point and (c) proposed.

Fig. 10 shows the results for the water absorption band in scene 1, including the image before NUC (Fig. 10(a)) and the results from the two-point method (Fig. 10(b)) and the proposed method (Fig. 10(c)). Both the two-point method and the proposed method eliminate the stripes effectively. However, the result from the two-point method has an annoying dividing line, as shown in the red box in Fig. 10(b). This is because the bright uniform region used in the two-point method is spliced from two parts, as illustrated in Fig. 2, and the uniformity between the two parts is actually inhomogeneous. In our proposed method, the algorithm extracts the uniform regions automatically, which is much more effective and reliable than visual identification, and the correction result is much more satisfactory.

Figs. 11 and 12 show the correction results for the water absorption band in scene 2 and scene 3, respectively. As the figures show, the images are severely damaged by non-uniformity before correction. After being corrected by the MSR method, the image quality is greatly improved; however, two obvious defects remain: (1) the stripes on the open water are not completely eliminated, and (2) there is a significant "drifting" effect along the cross-track direction in the corrected image. After the NUC based on our proposed method, the image quality is improved significantly: the open water is free of stripes and the land covers can be clearly distinguished.

Fig. 11. Comparison of the NUC results of scene 2. (a) Before correction, (b) MSR and (c) proposed.

Fig. 12. Comparison of the NUC results of scene 3. (a) Before correction, (b) MSR (showing a brightness gradient) and (c) proposed (no brightness gradient).

4.3. Quantitative evaluation

In order to evaluate the results quantitatively, the non-uniformity and the "roughness" are used to assess the performance of the NUC methods. The non-uniformity r is frequently used when a ground uniform region is available, and it is calculated as:

r = (1 / x̄) · sqrt( (1 / (K L)) Σ_{i=1}^{K} Σ_{j=1}^{L} (x_{i,j} − x̄)^2 )        (10)

x̄ = (1 / (K L)) Σ_{i=1}^{K} Σ_{j=1}^{L} x_{i,j}        (11)

where K is the number of samples, L is the number of scanning lines of the evaluated uniform region, x_{i,j} is the DN value of pixel (i, j), and x̄ is the average DN value.

The "roughness" ρ is frequently used when a ground truth uniform region is not available, and it is calculated as (Torres et al., 2003a,b):

ρ = ( ‖h_1 * f‖_1 + ‖h_2 * f‖_1 ) / ‖f‖_1        (12)

where h_1(i, j) = δ_{i−1,j} − δ_{i,j} and h_2(i, j) = δ_{i,j−1} − δ_{i,j}, δ_{i,j} is the Kronecker delta, f is the evaluated image, ‖·‖_1 is the L1 norm, and * represents discrete convolution. On one hand, the "roughness" is related to the uniformity of the image, and a smaller roughness indicates a more uniform image. On the other hand, it is also related to the details of the image, and too small a value indicates a risk of over-correction. Therefore, when the non-uniformity r meets the requirement, a larger roughness indicates a better preservation of the image details.
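As an illustrative aid (not taken from the original paper), the following Python sketch computes the two metrics as defined in Eqs. (10)–(12); the difference kernels h_1 and h_2 are applied here via scipy's 2-D convolution, and the function names are ours.

import numpy as np
from scipy.signal import convolve2d

def non_uniformity(region):
    """Eqs. (10)-(11): relative standard deviation over a uniform region (lines x samples)."""
    mean = region.mean()
    return np.sqrt(np.mean((region - mean) ** 2)) / mean

def roughness(image):
    """Eq. (12): normalized L1 norm of first differences along both image axes."""
    h1 = np.array([[1.0], [-1.0]])   # difference along the first (line) axis
    h2 = np.array([[1.0, -1.0]])     # difference along the second (sample) axis
    num = (np.abs(convolve2d(image, h1, mode="same")).sum()
           + np.abs(convolve2d(image, h2, mode="same")).sum())
    return num / np.abs(image).sum()

# Usage sketch on a corrected band `band` and a known uniform region `water`:
# print(non_uniformity(water), roughness(band))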

For hyperspectral data, aside from the concern of non-uniformity, the fidelity of the spectral information is of the greatest importance. Therefore, the influence of the different NUC methods on the spectral curves should also be considered.

4.3.1. The two-point method and the proposed method

In the SWIR band, the reflectance of water is rather low, which makes open water a reasonable uniform "dark object" for non-uniformity evaluation. A total of 200 scanning lines of the open water image taken from scene 1 is shown in Fig. 13.

Fig. 13. Comparison of the NUC results of open water. (a) Before correction, (b) two-point and (c) proposed.

The non-uniformity (at all bands) is calculated, and the result is shown in Fig. 14(a). Before correction, r is above 4%, while after the NUC, r from the two-point method and the proposed method are very close to each other and both are no more than 0.5%.

The roughness before and after NUC is calculated from the 1000-scanning-line image of scene 1, and the results are shown in Fig. 14(b). The roughness from the proposed method is slightly larger than that from the two-point method. With similar r, we conclude that the proposed method preserves the details of the original image better than the two-point method. The uniform regions extracted by visual identification in the two-point method are usually not truly uniform; when they are assumed to be uniform, there is a risk of over-correction, which introduces biases into the NUC coefficients. Based on a much more reliable assumption, the proposed method extracts the uniform regions automatically and improves the adaptability to various scenes. Therefore, the risk of over-correction is greatly reduced.

Fig. 14. Comparison of non-uniformity parameters. (a) Non-uniformity and (b) roughness.

Fig. 15. Comparison of DN responding curves before and after NUC. (a) Dark target and (b) bright target.

Fig. 15 shows the comparison of the DN responding curves of a dark target and a bright target in scene 1 before and after the NUC process. The spectral responding curves show high-frequency spikes or noise before the correction. After the NUC from the two-point method and the proposed method, the spikes are largely removed. In addition, the spectral shapes before and after NUC show reasonable consistency.

4.3.2. The MSR method and the proposed method

Fig. 16 shows the "roughness" before and after NUC for scene 2 and scene 3, respectively. Both the MSR method and the proposed method greatly improve the "roughness" of the image, to around 0.02.

Fig. 16. Comparison of roughness before and after NUC. (a) Scene 2 and (b) scene 3.

Fig. 17 shows the DN responding curves before and after NUC for a target in scene 2 and scene 3. After NUC based on the MSR method, the spectral response of the ground object is still noisy, and these spikes are inconsistent with those before NUC. Also, as shown in Fig. 17(b), the spectral response after MSR deviates considerably from that before MSR. After NUC based on the proposed method, the spikes are eliminated, and the spectral curves before and after NUC are reasonably consistent.

Fig. 17. Comparison of DN response before and after correction. (a) Scene 2 and (b) scene 3.

4.3.3. Comparison of time complexity

Efficiency is an important issue for a NUC method. Table 2 shows the average processing time of the different NUC methods for 1000 scanning lines.

Table 2
Comparison of processing time.
  Methods:          Two-point    MSR        Proposed
  Processing time:  2.3 s        120.2 s    4.4 s

As Table 2 shows, the MSR method is much more time consuming than the two-point method and the proposed method. Although the two-point method takes less time than the proposed one, it is less efficient in practice, considering its dependence on human identification to locate relatively bright and dark uniform regions in the image.

From the above quantitative evaluation of the three NUC methods, we find that our proposed method shows similar non-uniformity but better roughness (better preservation of the details of the original image) compared to the two-point method, and similar roughness but better preservation of the spectral curve compared to the MSR method. Moreover, our method is the most efficient of the three NUC methods.

5. Conclusions

In this study, an effective scene-based NUC method is proposed. Compared to the most frequently used two-point method and the MSR method, our method is more suitable for various scenes and remarkably reduces the computational burden of the NUC process. Both the two-point method and the proposed method can greatly improve the non-uniformity of a simple-scene image and maintain the spectral shape well. However, the two-point method is slightly worse in the preservation of image details and has poor adaptability to various scenes. Both the MSR method and the proposed method can improve the roughness of the image, but the MSR method shows a "drift" effect in the cross-track direction and has less fidelity to the spectral shape. In summary, the proposed method shows flexible adaptability, low computational burden, good preservation of image details, and high fidelity to the spectral shape, which makes it potentially applicable in highly automated image processing chains.

Acknowledgements

This work was financially supported by the National Key R&D Program of China (2016YFB0500400), the National High Technology Research and Development Program of China (863 Program; 2014AA123202), and the Major Project of the National High Resolution Earth Observation System of China (A0106/1112).

References

Arslan, Y., Oguz, F., Besikci, C., 2015. Extended wavelength SWIR InGaAs focal plane array: characteristics and limitations. Infrared Phys. Technol. 70, 134–137.
Barnsley, M.J., Settle, J.J., Cutter, M., Lobb, D., Teston, F., 2004. The PROBA/CHRIS mission: a low-cost smallsat for hyperspectral, multi-angle observations of the Earth surface and atmosphere. IEEE Trans. Geosci. Remote Sens. 42, 1512–1520.
Cocks, T., Jenssen, R., Stewart, A., Wilson, I., Shields, T., 1998. The HyMap airborne hyperspectral sensor: the system, calibration and performance. In: Proceedings of the First EARSeL Workshop on Imaging Spectroscopy, pp. 37–42.
Fan, X.M., Wei, Z., Yan, F.R., Wang, S.Y., 2010. Nonuniformity correction of focal array system based on BP neural network. J. Tianjin Univ. Technol. 26 (6), 75–78.
Folkman, M.A., Pearlman, J., Liao, L.B., Jarecke, P.J., 2001. EO-1/Hyperion hyperspectral imager design, development, characterization, and calibration. In: Proc. SPIE 4151, Hyperspectral Remote Sensing of the Land and Atmosphere, pp. 40–51.
Green, R., Asner, G., Ungar, S., Knox, R., 2008. NASA mission to measure global plant physiology and functional types. In: Proceedings of the 2008 IEEE Aerospace Conference, pp. 1–7.
Guanter, L., Kaufmann, H., Segl, K., Foerster, S., Rogass, C., Chabrillat, et al., 2015. The EnMAP spaceborne imaging spectroscopy mission for earth observation. Remote Sens. 7, 8830–8857.
Harris, J.G., Chiang, Y.M., 1997. Nonuniformity correction using the constant-statistics constraint: analog and digital implementations. In: Proc. SPIE 3061, Infrared Technology and Applications XXIII, pp. 895–905.
Huang, Y.D., An, J.B., 2011. A nonuniformity correction algorithm for IRFPA based on improved polynomial fitting. Infrared 32 (3), 29–33.
Itten, K.I., Dell'Endice, F., Hueni, A., Kneubühler, M., Schläpfer, D., Odermatt, D., Seidel, F., Huber, S., Schopfer, J., Kellenberger, T., Bühler, Y., D'Odorico, P., Nieke, J., Alberti, E., Meuleman, K., 2008. APEX – the Hyperspectral ESA Airborne Prism Experiment. Sensors 8, 6235–6259.
Leathers, R.A., Downes, T.V., Priest, R.G., 2005. Scene-based nonuniformity corrections for optical and SWIR push-broom sensors. Opt. Express 13, 5136–5150.
Li, Y.X., Sun, D.X., Liu, Y.N., 2005. Non-uniformity correction of infrared focal plane based on the polynomial fitting. Laser Infrared 35 (2), 105–107.
Li, E., Liu, S.Q., Wang, B.J., Yin, S.M., 2009. Nonuniformity correction algorithm of IRFPA based on cubic spline function. Acta Photonica Sinica 38 (11), 3016–3020.
Liu, Y.N., Xue, Y.Q., Wang, J.Y., Shen, M.M., 2002. Operational modular imaging spectrometer. J. Infrared Millim. Waves 21 (1), 9–13.
Liu, Z., Ma, Y., Huang, J., Fan, F., Ma, J.Y., 2016. A registration based nonuniformity correction algorithm for infrared line scanner. Infrared Phys. Technol. 76, 667–675.
Lu, S., Shimizu, Y., Ishii, J., Funakoshi, S., Washitani, I., Omasa, K., 2009. Estimation of abundance and distribution of two moist tall grasses in the Watarase wetland, Japan, using hyperspectral imagery. ISPRS J. Photogram. Remote Sens. 64, 674–682.
Marshall, M., Thenkabail, P., 2015. Advantage of hyperspectral EO-1 Hyperion over multispectral IKONOS, GeoEye-1, WorldView-2, Landsat ETM plus, and MODIS vegetation indices in crop biomass estimation. ISPRS J. Photogram. Remote Sens. 108, 205–218.
Mouzali, S., Lefebvre, S., Rommeluere, S., Ferrec, Y., Primot, J., 2015. Modeling of HgCdTe focal plane array spectral inhomogeneities. In: Proc. SPIE 9520, Integrated Photonics: Materials, Devices, and Applications III, 95200S, pp. 1–7.
Naratanan, B., Hardie, R.C., Muse, R.A., 2005. Scene-based nonuniformity correction technique that exploits knowledge of the focal-plane array readout architecture. Appl. Opt. 44 (17), 3482–3491.
Rakwatin, P., Takeuchi, W., Yasuoka, Y., 2007. Stripe noise reduction in MODIS data by combining histogram matching with facet filter. IEEE Trans. Geosci. Remote Sens. 45 (6), 1844–1856.
Ratliff, B.M., Hayat, M.M., Tyo, J.S., 2003. Radiometrically accurate scene-based nonuniformity correction for array sensors. J. Opt. Soc. Am. A 20 (10), 1890–1899.
Scribner, D.A., Sarkady, K.A., Kruer, M.R., Caulfield, J.T., 1991. Adaptive nonuniformity correction for IR focal-plane arrays using neural networks. In: Proc. SPIE 1541, Infrared Sensors: Detectors, Electronics, and Signal Processing, pp. 100–109.
Shi, G.M., Wang, X.T., Zhang, L., Liu, Z., 2008. Removal of random stripe noises in remote sensing images by directional filter. J. Infrared Millimeter Waves 27 (3), 214–218.
Stefano, P., Angelo, P., Simone, P., Filomena, R., Federico, S., Tiziana, S., et al., 2013. The PRISMA hyperspectral mission: science activities and opportunities for agriculture and land monitoring. In: International Geoscience and Remote Sensing Symposium (IGARSS), pp. 4558–4561.
Suarea, L.A., Apan, A., Werth, J., 2016. Hyperspectral sensing to detect the impact of herbicide drift on cotton growth and yield. ISPRS J. Photogram. Remote Sens. 120, 65–76.
Torres, S.N., Hayat, M.M., 2003. Kalman filtering for adaptive nonuniformity correction in infrared focal plane arrays. J. Opt. Soc. Am. A 20 (3), 470–480.
Torres, S.N., Vera, E.M., Reeves, R.A., Sobarzo, S.K., 2003a. Adaptive scene-based non-uniformity correction method for infrared focal plane arrays. Proc. SPIE 5076, 130–139.
Torres, S.N., Pezoa, J.E., Hayat, M.M., 2003b. Scene-based nonuniformity correction for focal plane arrays by the method of the inverse covariance form. Appl. Opt. 42, 5872–5881.
Vane, G., Green, R.O., Chrien, T.G., Enmark, H.T., Hansen, E.G., Porter, W.M., 1993. The airborne visible/infrared imaging spectrometer (AVIRIS). Remote Sens. Environ. 44, 127–143.
Vera, E., Meza, P., Torres, S., 2011. Total variation approach for adaptive nonuniformity correction in focal-plane arrays. Opt. Lett. 36 (2), 172–174.
Wang, Y.M., Chen, J.X., Liu, Y.N., Xue, Y.Q., 2003. Study on two-point multi-section IRFPA nonuniformity correction algorithm. J. Infrared Millimeter Waves 22 (6), 415–418.
Zhu, J., Liu, G.X., 2007. An improved algorithm based on two-point linear correction. Sci. Technol. Eng. 7 (6), 1180–1182.
